Wow, [MENTION=9273]Vasilisa[/MENTION].
This really gets into many of the details of computing. I am amazed at your ability to understand the issues here.
I pushed a lot of the ideas mentioned in the video during my (now past) decade-long career in both hardware and software.
The video is rather tongue-in-cheek, and it doesn't really go into any of the details of why some of these ideas and models didn't or don't take off. Nor does it acknowledge how much the ideas have already been accepted. I have had first-hand experience on both sides of the issue for many of the ideas mentioned in the video, and in fact worked for a long time at one of the companies mentioned.
I can also tell you that people have legitimate reasons for resisting changes. It was not a simple matter of "being stuck in old ways". Frankly, anyone who has managed to effect change anywhere will recognize that way of thinking about the people resisting change as a cop-out. People have legitimate reasons for resisting, and addressing those reasons is what helps convince people, not calling them "stuck in their ways."
The main reason people tended to resist the move to "higher level" programming was the poor quality of assemblers, compilers, etc. They produced crappy code at the lower level, and thus the professionals balked at the code (not necessarily the idea). Once the assemblers got good enough, they balked far less. Also, what "good" meant started to change. Programs were no longer required to fit in tiny spaces, or to execute in very few instructions. In places where programs still need to do that, people still code in binary (I have done this myself).
Direct Visual Manipulation of Programs and Spatial Representations
Although these are two separate things in the video, and in theory they could come separately (one may argue that interpreters are direct manipulation without spatial representations), in practice, when people tout the benefits, the two are touted together.
When I was in high school (1990s) I was obsessed with the notion of CASE tools. I spent a great deal of time thinking through a way to create the ultimate CASE tool using UML, essentially getting rid of the need to write long blocks of sequential programming. My mom was a computer scientist, and she had lots of books on object oriented programming, information systems, and the like, and I tried to read pretty much all of those books. I programmed in Delphi (still among my favorite programming environments, though I haven't used it in a long time).
There are a lot of benefits to this style of programming, and a lot of the ideas have been incorporated into modern IDEs. The main benefit, I think, is that this style makes for very pleasurable maintenance, and is almost self-documenting. However, there is one major problem...
it can severely hamper productivity instead of enhancing it like it purports to do. The main reason... it takes much longer to draw something than to write something. Pseudocode is perhaps 10 times faster than creating a flow chart for the same algorithm. Some people may argue that this is irrelevant because it is not development time but maintenance time that dominates the time spent on software projects. Perhaps true. But that ignores the fact that often, especially when doing something new or innovative, keeping a "flow" state while being creative is important, and visual programming is like forcing people to write a final draft without allowing them to make any rough ones.
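To make that concrete, here's a toy sketch (Python, only because it's compact; any textual language makes the same point). This takes well under a minute to type, but as a flowchart it's a dozen boxes, two decision diamonds, and a loop of arrows:

[code]
# Binary search: seconds to type as text, many boxes and
# arrows to draw as a flowchart.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid            # found it
        elif items[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1                     # not present

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # prints 4
[/code]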
Also, some visual programming (especially LabVIEW, as [MENTION=5837]93JC[/MENTION] mentioned) leads to difficult-to-maintain programs as well. The paradigm of visual programming here essentially converts programming into the equivalent of building a circuit. Having worked on both, let me tell you that fatal wiring errors are much harder to track down than syntax errors.
Finally, text has the nice feature of being simultaneously human-readable and fairly compact in representation. Even if only loosely so, it is based on natural human language. Visual/spatial representations are significantly less compact unless you have a specific program to read them. What this means is that you get locked in to vendors for specific visual/spatial programming.
Make a good Turing-complete visual/spatial programming language that is free, open source, and available to everyone, and you will see far less resistance to its use. Unfortunately, these languages have a long track record of being proprietary, with a single for-profit entity trying to keep exclusive rights.
Goals and Constraint-Based Programming
Freshman year of college (1997 for me), my friend and I thought through a really involved (but ultimately simple) way of approaching constraint-based programming. He is now a hot-shot program manager at Microsoft. I don't know if he ever used any of the ideas we thought about, but I did.
My first year as a full-time employee at my company (2001), I was tasked with a problem which, after that project, was always given to an engineer five grade levels above what I was when I did it. I took on the project with constraint-based programming in Prolog (and I found some tricks to make things reversible). There were zero errors in the project when we finally implemented in hardware what I had been keeping track of in constraint software... the same could not be said of the next go-around (a frankly simpler problem) handled by a much more experienced engineer.
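To give a flavor of what I mean, here is a deliberately toy sketch (Python rather than Prolog, and nothing like the actual project): in constraint style you state relations instead of steps, and "reversibility" falls out because a relation has no preferred direction.

[code]
from itertools import product

# Toy brute-force constraint solver: you state relations, and
# the machine searches for every assignment satisfying all of them.
def solve(domains, constraints):
    names = list(domains)
    for values in product(*domains.values()):
        env = dict(zip(names, values))
        if all(c(env) for c in constraints):
            yield env

# "Make it so that a + b == 10 and a < b" -- relations, not a recipe.
# Reversibility comes for free: pin either variable with one more
# constraint and the solver fills in the other.
domains = {"a": range(11), "b": range(11)}
constraints = [
    lambda e: e["a"] + e["b"] == 10,
    lambda e: e["a"] < e["b"],
]
for solution in solve(domains, constraints):
    print(solution)   # {'a': 0, 'b': 10} through {'a': 4, 'b': 6}
[/code]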
But this is when I learned the biggest downfall of this sort of programming... it takes explicit training and/or deep thought in this style of thinking to get used to it. Anyone not (self-)trained to think in constraints will have trouble maintaining code written like this.
Note: There are some aspects of constraint programming incorporated into more advanced databases, and even some features in SQL for this. But I have found that constraints are something people struggle with.
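For a small taste of the constraint features that did make it into databases, here is a minimal sketch using SQLite via Python's standard library: a CHECK constraint is declarative through and through. You state what must always be true, and the database refuses anything that violates it.

[code]
import sqlite3

conn = sqlite3.connect(":memory:")
# Declarative: state what must always be true of the data.
conn.execute("""
    CREATE TABLE parts (
        name TEXT NOT NULL,
        qty  INTEGER CHECK (qty >= 0)  -- a relation, not a recipe
    )
""")
conn.execute("INSERT INTO parts VALUES ('resistor', 100)")   # fine
try:
    conn.execute("INSERT INTO parts VALUES ('capacitor', -5)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)   # the constraint did the checking for us
[/code]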
Despite the myriad of programming languages that have come out, they are all a mix of three types of languages (a quick sketch of all three follows the list):
1) Imperative (computer, do this, then do this, and if this happens, then do this...)
2) Functional (computer, compute this function that is defined in terms of these functions which are in turn defined in terms of these functions)
3) Declarative (computer, make it so that the following is true, as well as the following, oh and this too)
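Here's the same tiny job (summing the squares of the evens below 10) in all three flavors. It's only a sketch in Python, which can at least gesture at each style:

[code]
n = 10

# 1) Imperative: do this, then this, and if this happens, do this.
total = 0
for i in range(n):
    if i % 2 == 0:
        total += i * i

# 2) Functional: the answer is this function of these other functions.
total_fn = sum(map(lambda i: i * i, filter(lambda i: i % 2 == 0, range(n))))

# 3) Declarative (as close as Python gets): describe what belongs in
#    the result rather than the steps that build it.
total_decl = sum(i * i for i in range(n) if i % 2 == 0)

assert total == total_fn == total_decl == 120
[/code]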
BTW, I think of object oriented languages as incorporating some of the declarative aspects into the compilation and linking of programs.
But the fact remains that declarative programming is more abstract than the others, and that leaves it more open to ambiguity. There is one thing that computing abhors, and that is ambiguity. The reason that computers, computing systems, or anything else that computes can function at all is the adequate removal of ambiguity about what to expect in any given situation. People agree on things. That is the "magic" of computing. At the base of all of computing is nothing more than a set of agreements that people still follow to this day. It is all semantics.
Beyond that, you need to give that little imperative kick to start off a functional or declarative system.
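A trivial Python illustration of that kick: a purely functional definition is inert, and nothing happens until some imperative statement actually sets it off.

[code]
# Purely functional: defining this performs no work at all.
def total(xs):
    return 0 if not xs else xs[0] + total(xs[1:])

# The imperative kick: a statement that makes something actually happen.
print(total([1, 2, 3, 4]))   # 10
[/code]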
Concurrent Programming
It should be obvious to anyone that this is something that has been a long time coming, and the reason it hasn't become prevalent is that it is difficult to do... at least more difficult than sequential programming.
Concurrent "programming" has been incorporated into computing since its inception, and will always be there. But here's the deal. It is
purposely hidden away at the lowest layers of computing possible, because it confuses people.
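A classic demonstration of the confusion, as a deliberately broken Python sketch: two threads bump a shared counter, and because read-modify-write is not atomic, updates can be lost. Whether you see the loss on any given run depends on how the interpreter happens to interleave the threads, and that unpredictability is exactly the point.

[code]
import threading

counter = 0

def bump(times):
    global counter
    for _ in range(times):
        counter += 1   # read-modify-write: NOT atomic across threads

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000. Depending on thread interleaving you may see less,
# and the answer can change from run to run. A threading.Lock around
# the increment fixes it.
print(counter)
[/code]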
Hardware is inherently concurrent. Languages like VHDL and Verilog that encode hardware to be synthesized are also used in event-based simulations.
I know first-hand that having memory as close as possible to the execution units that use it saves an incredible amount of "floorspace" on a chip (even with attempts at "packing" memory well when it is close together). But we do this at the expense of making the next layer of firmware or hardware more complicated. In some sense, this is the reverse impulse of people resisting the changes to higher level languages.
Note that for years superscalar architectures with deep pipelines dominated not because they were inherently "better", but because instruction level parallelism was the simplest to program for. Now that we have been pushed into chip/core level parallelism due to heat and power density constraints, there is no more free lunch.
Software architecture paradigms like RM-ODP have been around for a long time.
But don't think the Von Neumann bottleneck is gone, because we still have Amdahl's Law.
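For anyone who hasn't run the numbers on Amdahl's Law: if a fraction p of a program parallelizes and the rest stays serial, the best possible speedup on n cores is 1 / ((1 - p) + p / n), and the serial part caps you hard. A quick sketch:

[code]
# Amdahl's Law: best-case speedup on n cores when a fraction p
# of the work is parallelizable.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

# Even with 95% of the work parallelized, 1024 cores buy you < 20x.
for n in (2, 8, 64, 1024):
    print(n, round(speedup(0.95, n), 2))   # 1.9, 5.93, 15.42, 19.64
[/code]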
-----
Anyway, I haven't been feeling well due to things happening in my personal life. Ranting about my former profession was strangely therapeutic.