
The Future of Programming

Vasilisa

Symbolic Herald
Joined
Feb 2, 2010
Messages
3,946
Instinctual Variant
so/sx

I'm no computer scientist, but his message at the end resonated with me.
 

93JC

Active member
Joined
Dec 17, 2008
Messages
3,989
The idea of programming by doing something other than writing a text file isn't entirely dead. I've used LabVIEW, which is a graphical programming 'language'. (I only used it once, and I was not very adept at it.)

The salient point at the end regarding dogma is a very good one. It is very easy to become complacent and forego innovation for the sake of keeping things "the way they've always been". I've especially been thinking about this for the last few weeks because my boss is leaving the firm I work for at month's end and starting his own, and most of the complaints he has seem to centre on the changes at the company that have been made over the years. To him every little change was a change for the worse, despite a lot of them being better business practices.

At the same time I'm afraid I'm trapped in the same kind of thinking. After all, he had been with the company for 27 years; as long as I've been alive. It's pretty much the only place he's ever worked since graduating college. I quit my first job out of university after three years because, well...

... I thought it was changing for the worse.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,996
Wow, [MENTION=9273]Vasilisa[/MENTION].

This really gets into many of the details of computing. I am amazed at your ability to understand the issues here.

I pushed a lot of the ideas mentioned in the video during my (now past) decade-long career in both hardware and software.

The video is rather tongue-in-cheek, and it doesn't really go into any of the details of why some of these ideas and models didn't or don't take off. Nor does it acknowledge how much the ideas have already been accepted. I have had first-hand experience on both sides of the issue for many of the ideas mentioned in the video, and in fact worked for a long time at one of the companies mentioned.

I can also tell you that people have legitimate reasons for resisting changes. It was not a simple matter of "being stuck in old ways". Frankly, anyone who has managed to effect change anywhere will recognize that kind of thinking about the people resisting change as a cop-out. Addressing those legitimate reasons is what convinces people, not calling them "stuck in their ways."

The main reason people tended to resist changes to "higher level" programming was the poor quality of the assemblers, compilers, etc. They produced crappy code at the lower level, and thus the professionals balked at the code (not necessarily the idea). Once the assemblers got good enough, they balked far less. Also, what "good" meant started to change. Programs were no longer required to fit in tiny spaces, or execute in very few instructions. In places where programs still need to do that, people still code in binary (I have done this myself).

Direct Visual Manipulation of Programs and Spatial Representations
Although these are two separate things in the video, and in theory they could come separately (one may argue that interpreters are direct manipulation without spatial representations), in practice, when people tout the benefits, the two are bundled together.

When I was in high school (1990s) I was obsessed with the notion of CASE tools. I spent a great deal of time thinking through a way to create the ultimate CASE tool using UML, essentially getting rid of the need to write long blocks of sequential programming. My mom was a computer scientist, and she had lots of books on object-oriented programming, information systems, and the like, and I tried to read pretty much all of those books. I programmed in Delphi (still among my favorite programming environments, though I haven't used it in a long time).

There are a lot of benefits to this style of programming, and a lot of the ideas have been incorporated into modern IDEs. The main benefit, I think, is that this style makes for very pleasurable maintenance, and is almost self-documenting. However, there is one major problem...it can severely hamper productivity instead of enhancing it like it purports to do. The main reason...it takes much longer to draw something than to write something. Pseudocode is perhaps 10 times faster to write than a flow chart for the same algorithm. Some people may argue that this is irrelevant because it is not development time, but maintenance time, that dominates the time spent on software projects. Perhaps true. But that ignores the fact that often, especially when doing something new or innovative, keeping a "flow" state while being creative is important, and visual programming is like forcing people to write a final draft without allowing them to make any rough ones.

Also, some visual programming (especially LabVIEW, as [MENTION=5837]93JC[/MENTION] mentioned) leads to difficult-to-maintain programs as well. The paradigm of visual programming here essentially converts programming into the equivalent of building a circuit. Having worked on both, let me tell you that fatal wiring errors are much harder to track down than syntax errors.

Finally, text has the nice feature of being simultaneously human readable and fairly compact in representation. Even if only loosely so, it is based on natural human language. Visual/spatial representations are significantly less compact unless you have a specific program to read them. What this means is that you get locked in to vendors for specific visual/spatial programming. Make a good Turing-complete visual/spatial programming language that is free, open source, and available to everyone, and you will see far less resistance to its use. Unfortunately, these languages have a long track record of being proprietary, with a single for-profit entity trying to keep exclusive rights.

Goals and Constraint-based Programming
Freshman year of college (1997 for me), my friend and I thought through a really involved (but ultimately simple) way of approaching constraint-based programming. He is now a hot-shot program manager at Microsoft. I don't know if he ever used any of the ideas we thought about, but I did.

My first year as a full-time employee at my company (2001), I was tasked with a problem which, after that project, was always given to an engineer five grade levels above the one I held when I did it. I used constraint-based programming in Prolog (and I found some tricks to make things reversible) to take on the project. There were zero errors in the project when we finally implemented in hardware what I was keeping track of in constraint software...the same could not be said of the next go-around (on a frankly simpler problem) by a much more experienced engineer.

But this is when I learned the biggest downfall to this sort of programming...it takes explicit training and/or deep thought in this style of thinking to get used to it. Anyone not (self)trained to think in constraints will have trouble maintaining code written like this.

Note: There are some aspects of constraint programming incorporated into more advanced databases, and even some features in SQL for this. But I have found that constraints are something people struggle with.
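To make the mindset concrete, here is a toy C++ sketch (the numbers and names are made up purely for illustration): instead of spelling out the steps to compute an answer, you state what must be true and let a search, here a naive brute-force one, find an assignment that satisfies it. Prolog or a database engine does the searching far more cleverly, but the shape of the thinking is the same.

[CODE]
#include <iostream>

// Constraint-style thinking, sketched imperatively: declare what must be
// true (x + y == 12, x * y == 35, both positive) and let a naive
// brute-force search find any assignment that satisfies it.
int main() {
    for (int x = 1; x <= 100; ++x) {
        for (int y = 1; y <= 100; ++y) {
            bool satisfied = (x + y == 12) && (x * y == 35);
            if (satisfied) {
                std::cout << "x = " << x << ", y = " << y << "\n";
                return 0;   // any satisfying assignment will do
            }
        }
    }
    std::cout << "no solution in the search range\n";
    return 1;
}
[/CODE]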

Despite the myriad of programming languages that have come out, they are all a mix of three types of languages:
1) Imperative (computer, do this, then do this, and if this happens, then do this...)
2) Functional (computer, compute this function that is defined in terms of these functions which are in turn defined in terms of these functions)
3) Declarative (computer, make it so that the following is true, as well as the following, oh and this too)

BTW, I think of object oriented languages as incorporating some of the declarative aspects into the compilation and linking of programs.
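For what it's worth, here is a toy C++ sketch of the first two styles side by side (summing the squares of 1 through 5); the declarative style is essentially the constraint sketch above, where you only state what must hold and let the machine find it.

[CODE]
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> xs{1, 2, 3, 4, 5};

    // Imperative: do this, then do this, then do this...
    int imperative = 0;
    for (int x : xs) {
        imperative += x * x;
    }

    // Functional: the result is defined as a fold of a squaring step
    // over the sequence.
    int functional = std::accumulate(
        xs.begin(), xs.end(), 0,
        [](int acc, int x) { return acc + x * x; });

    std::cout << imperative << " " << functional << "\n";  // 55 55
    return 0;
}
[/CODE]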

But the fact remains that declarative programming is more abstract than the others, and that leaves it more open to ambiguity. There is one thing that computing abhors, and that is ambiguity. The reason that computers, computing systems, or anything else that computes functions at all is the adequate removal of ambiguity about what to expect in each situation. People agree on things. That is the "magic" of computing. At the basis of all of computing is nothing more than a set of agreements that people still follow to this day. It is all semantics.

Beyond that, you need to give that little imperative kick to start off a functional or declarative system.

Concurrent Programming
It should be obvious to anyone that this is something that has been a long time coming, and the reason it hasn't become prevalent is that it is difficult to do...at least more difficult than sequential programming.

Concurrent "programming" has been incorporated into computing since its inception, and will always be there. But here's the deal. It is purposely hidden away at the lowest layers of computing possible, because it confuses people.

Hardware is inherently concurrent. Languages like VHDL and Verilog that describe hardware to be synthesized are also used in event-based simulations.

I know first-hand that having memory close to the execution units that make use of it saves an incredible amount of "floorspace" on a chip (that is even with attempts at "packing" memory well when it is close together). But we do this at the expense of making the next layer of firmware or hardware more complicated. In some sense, this is the reverse impulse of people resisting the changes to higher-level languages.

Note that for years superscalar architectures with deep pipelines dominated not because they were inherently "better", but because instruction-level parallelism was the simplest to program for. Now that we have been pushed into chip/core-level parallelism due to heat and power density constraints, there is no more free lunch.

Software architecture paradigms like RM-ODP have also been around for a long time.

But don't think the von Neumann bottleneck is gone, because we still have Amdahl's Law.
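For anyone who hasn't run the numbers, Amdahl's Law says that if a fraction p of the work can be parallelized, the best possible speedup on N cores is 1 / ((1 - p) + p / N). A quick sketch (the 95% figure is just an example I picked) of how fast that flattens out:

[CODE]
#include <cstdio>
#include <initializer_list>

// Amdahl's Law: speedup(N) = 1 / ((1 - p) + p / N),
// where p is the parallelizable fraction of the work.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.95;  // even with 95% of the work parallelizable...
    for (int n : {2, 4, 8, 16, 64, 1024}) {
        std::printf("%5d cores -> %.2fx\n", n, amdahl(p, n));
    }
    // ...speedup is capped at 1 / (1 - p) = 20x, no matter how many cores.
    return 0;
}
[/CODE]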
-----

Anyway, I haven't been feeling well due to things happening in my personal life. Ranting about my former profession was strangely therapeutic.
 

Salomé

meh
Joined
Sep 25, 2008
Messages
10,527
MBTI Type
INTP
Enneagram
5w4
Instinctual Variant
sx/sp
He was pretty boring. I think he extended his conceit well beyond the point of its usefulness...

Nothing changes in this field? Really? I can't imagine any field that changes faster. I don't know any practitioner who thinks they know it all. It's literally impossible to stay on top of more than a handful of discrete domains.

There are complete paradigm shifts every few years, mostly driven by decreasing cost / increasing availability of hardware.
In different eras, different constraints have dictated the direction of development models. Starting out, it was economic use of machine resources and availability of storage space. Moore's Law /Kryder's Law saw those considerations recede, and the human resource cost became more important. Then out-sourcing models made those considerations recede so that now CIOs probably worry more about astronomical licensing fees than anything else. Leading to the selection of some pretty ropey platform standards. There is absolutely no shortage of innovators or innovation in IT. But sadly, (or not, depending on your standpoint) as in most other fields, everything is evaluated through its short-term impact on a balance sheet. And the financial cost (and risk) of changing legacy architectures is frequently prohibitive.
Direct Visual Manipulation of Programs and Spatial Representations
There are a lot of benefits to this style of programming, and a lot of the ideas have been incorporated into modern IDEs. The main benefit, I think, is that this style makes for very pleasurable maintenance, and is almost self-documenting. However, there is one major problem...it can severely hamper productivity instead of enhancing it like it purports to do. The main reason...it takes much longer to draw something than to write something.
I agree, there's a lot of this stuff currently in use and much of it is worthless. My main gripe is that the generated code is often impossible to performance tune and consequently very inefficient.

Concurrent Programming
It should be obvious to anyone that this is something that has been a long time coming, and the reason it hasn't become prevalent is that it is difficult to do...at least more difficult than sequential programming.

Concurrent "programming" has been incorporated into computing since its inception, and will always be there. But here's the deal. It is purposely hidden away at the lowest layers of computing possible, because it confuses people.
Wut? That's not the reason. It's a design standard to separate application programming (the functional logic - what it does) from systems programming (how it does it). The former ought not to concern itself with implementation details - it should be platform agnostic, for ease of maintenance and portability. Parallelism should be handled at lower levels, which it almost always is. Only systems programmers need to understand the guts of the hardware to optimise code. Any other solution would not be practical or scalable. The history of computer programming is a history of increasing abstraction. As needs must, given the complexity of modern architectures. Of course, I'm coming at this from a background in clustered/grid computing, whereas I think you were more into embedded systems? So, very different disciplines.
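To be concrete about the separation I mean, here is a sketch using a standard parallel algorithm (just an illustration, not a claim about any particular stack): the application code states what operation to perform and hands the library an execution policy; how that work gets spread over cores, sockets or a grid is the runtime's problem, not the application's.

[CODE]
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.5);

    // Application code states *what* to do; the execution policy lets the
    // library/runtime decide *how* to spread it over the hardware.
    std::for_each(std::execution::par, v.begin(), v.end(),
                  [](double& x) { x = x * x; });

    std::cout << v.front() << "\n";  // 2.25
    return 0;
}
[/CODE]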
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,996
Wut? That's not the reason. It's a design standard to separate application programming (the functional logic - what it does) from systems programming (how it does it). The former ought not to concern itself with implementation details - it should be platform agnostic, for ease of maintenance and portability. Parallelism should be handled at lower levels, which it almost always is. Only systems programmers need to understand the guts of the hardware to optimise code. Any other solution would not be practical or scalable. The history of computer programming is a history of increasing abstraction. As needs must, given the complexity of modern architectures. Of course, I'm coming at this from a background in clustered/grid computing, whereas I think you were more into embedded systems? So, very different disciplines.

I think we were making the same points, emphasizing different aspects. I did work a little on what people would consider embedded systems, and now I do a lot of cluster-related stuff. I have some GPU experience too. But the main perspective that colors my thinking is having designed logic (and some analog circuits) that goes into chips for the majority of my computing career.

Hardware has always been concurrent. The processors on clusters, GPUs, the ARM processors in your phone, and embedded systems all have logic on them (the only parts that aren't logic are analog, but even those are concurrent). Most of the functioning in logic circuits happens concurrently. You really need latches (D, S/R, J/K, whatever) to do synchronization.

But, we are now running into an energy density barrier. You simply cannot have as much switching going on without also burning a lot of power and dissipating a lot of heat. We can no longer increase the depth of an instruction pipeline to increase the clock frequency. Intel referred to this as the "right hand turn", and it happened about 8 years ago. This "right hand turn" is unfortunately breaking many abstractions that we would have liked to have preserved (try implementing a Singleton pattern in modern C++. It is not quite as brain-dead a process as it used to be).
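For the curious, a minimal sketch of what I mean about the Singleton: the old lazy-initialization idiom quietly becomes a data race once callers can be concurrent, and the version that works again only works because the language standard itself had to absorb the concurrency problem (thread-safe initialization of function-local statics). The class here is hypothetical, purely for illustration.

[CODE]
class Config {
public:
    // Old idiom: lazy init through a plain pointer. Under concurrent
    // callers this is a data race (two threads can both see nullptr),
    // which is exactly the kind of abstraction the "right hand turn" broke.
    static Config& brokenInstance() {
        static Config* ptr = nullptr;   // unsynchronized check-then-act
        if (ptr == nullptr) {
            ptr = new Config();
        }
        return *ptr;
    }

    // C++11 and later: initialization of a function-local static is
    // guaranteed to be thread-safe, so the "Meyers singleton" works -- but
    // only because the standard had to take concurrency into account.
    static Config& instance() {
        static Config cfg;
        return cfg;
    }

private:
    Config() = default;
};

int main() {
    Config& c = Config::instance();
    (void)c;
    return 0;
}
[/CODE]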
 

Salomé

meh
Joined
Sep 25, 2008
Messages
10,527
MBTI Type
INTP
Enneagram
5w4
Instinctual Variant
sx/sp
This "right hand turn" is unfortunately breaking many abstractions that we would have liked to have preserved
So then, it seems you agree that we shouldn't be trying to solve hardware/firmware issues in the software layer, on principle, because it's the wrong thing to do, irrespective of the complexity of the task?
These constraints are obviously much more of an issue for mobile/personal computing. In data centres, where massively parallel processing is the norm, you just add another rack to the array.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,996
Yes. I think it is undesirable to break abstractions. Though it is impossible to violate the laws of nature.

Cluster software, the OS, the utilities, and the apps are all already set up to be multithreaded. The people who develop for it are trained to think about concurrency from the beginning. Adding a rack to add performance is built into the system architecture.

The clusters already have an abstraction that works with concurrency.
 

Little Linguist

Striving for balance
Joined
Jun 23, 2008
Messages
6,880
MBTI Type
xNFP
Instinctual Variant
sx/so
Someone should program a robot that cleans my apartment. Then I would be very happy. Please??? :wubbie:
 

figsfiggyfigs

Guest
[image]
 