Whoever landed on this page very likely doesn’t need the first few paragraphs, but bear with me for a second, because I really think we need to go back to where it all started.
Programming languages are tools developed to tell computers what we want them to do. More precisely, they are tools developed to speed up the process of telling a computer what to do. The result is called “computer software”, or, for brevity, just “software”.
To use those languages, we rely on someone writing translators from the programming language we are using to the set of instructions that a processor can actually execute. This is because processors’ native instruction sets are, in the vast majority of cases, too primitive for producing usable software within an acceptable time frame.
However, programming languages are still considered too technical by the vast majority of the population that could benefit from computer software, so a community of technicians works to interpret the desires for computer programs coming from the population at large. Those technicians write sequences of instructions for computers (a kind of “speech”) in those languages, also called “computer programs”, to satisfy the needs arising from the rest of the community. How those “needs” are created should also be discussed, but that might be a subject for another time.
So, here we are. Every software production activity involves three entities:
Customers want Computers to do something for them, but only Technicians can tell Computers what to do.
And here is where the problems begin.
Customers can only see the result of what Technicians have told Computers to do, but Customers can’t understand the language used by Technicians.
The vast majority of the time spent creating software goes into making what Technicians tell Computers to do match what Customers want. Several methodologies have been used over time to reach that goal as quickly as possible, with different degrees of success.
When the demand for software started increasing exponentially, say with the advent of personal computers in the mid ’80s, the (still few) Technicians were telling Customers, implicitly of course, “You have to trust me on this, because I am the expert”.
I have seen a kind of renaissance of that attitude in recent times. Yes, there is perhaps more discussion with Customers, but most of the time it comes down to a generic specification of what is needed, from which the Technicians, usually in their own world, derive what needs to be told to Computers. Technicians also go to great lengths to prove that Computers do what they told them to do. Unfortunately, this is of little help to Customers, because the “proofs” are still written in the programming languages they can’t understand; it is therefore just another, possibly more polite, way of saying “You have to trust me”.
It is quite clear that there was (and still is) a communication problem between Customers and Technicians. They needed to find a common language to describe what needed to be created.
After discussions, proposals and various attempts, the Unified Modeling Language (UML) was defined in the ’90s by the Object Management Group. I am not going to be dragged into the “war of religion” over whether it is the best common language between Customers and Technicians; it simply worked for me during the period I worked on projects that used it.
UML is a graphical language that tries to capture and formalize the “syntax” and “semantics” of the pictures normally drawn on paper during informal requirements discussions. The result of those discussions is what we call a UML model of the system to be created. This way Customers and Technicians have a common language to describe what Computers have to do.
Of course, Technicians still needed to “translate” UML into the languages they use to talk to computers, and here is where Model Driven Architecture enters the scene. UML models were seen as a potential starting point for software development, and some companies implemented new types of “translators” that could create software, that is, text in programming languages, from the UML model agreed with the Customer. “Skeletons” of the set of instructions to be given to Computers were generated automatically from the model.
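To make the idea of “skeleton” generation concrete, here is a sketch of what a generator might emit; the class and the PlantUML-style fragment are hypothetical, not taken from any specific tool, and the method bodies (which a real generator would leave empty for the Technician to write) are filled in minimally so the example runs:

```java
// Hypothetical UML class, as it might appear in the agreed model
// (PlantUML-style notation):
//
//   class BankAccount {
//     - balance : double
//     + deposit(amount : double)
//     + getBalance() : double
//   }
//
// A generator would typically emit a skeleton like this: the structure and
// signatures come from the model, the bodies are written by hand afterwards.
public class BankAccount {
    private double balance;

    public void deposit(double amount) {
        balance += amount; // hand-written body, not generated
    }

    public double getBalance() {
        return balance; // hand-written body, not generated
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100.0);
        System.out.println(account.getBalance());
    }
}
```

The point is the division of labour: the shape of the code is fixed by the model the Customer has seen and agreed to, and only the bodies remain to be filled in.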
In the early days, we are talking about the late ’90s, possibilities were limited and a lot of (self-imposed or imposed) discipline was needed to follow that approach. Contrary to what we have nowadays, “Model Driven” meant exactly that: everything had to start from the UML model, because of the shortcomings of the commercial tools of the time (more on this shortly). Still, software could already be discussed in detail before being implemented, documentation could also be generated from the model, and the time needed to write the remaining parts of the software was much shorter than before.
It has to be said that the approach might not be the best for every type of software development, but there is strong evidence that in specific (Customer) domains, like embedded and mission-critical systems, it should be the only approach considered.
Today UML tools offer far better features than before, with the improved “reverse-engineering” capabilities definitely among the most useful. Reverse-engineering can be considered just another “translator”, this time from programming languages to UML models.
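As a toy illustration of this direction of translation, the sketch below derives a PlantUML-style class description from existing code. Real tools parse source files and recover relationships between classes; this hypothetical example just uses Java reflection on a single compiled class:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Toy reverse-engineering sketch: given existing code, emit a PlantUML-style
// class description. Real tools are far more capable; this only lists the
// fields and methods of one class.
public class ReverseEngineer {

    // Hypothetical "existing software" to be reverse-engineered.
    static class Sensor {
        private double reading;
        public double read() { return reading; }
        public void calibrate(double offset) { reading += offset; }
    }

    static String toPlantUml(Class<?> cls) {
        StringBuilder sb = new StringBuilder("class " + cls.getSimpleName() + " {\n");
        for (Field f : cls.getDeclaredFields()) {
            sb.append("  - ").append(f.getName())
              .append(" : ").append(f.getType().getSimpleName()).append("\n");
        }
        for (Method m : cls.getDeclaredMethods()) {
            sb.append("  + ").append(m.getName()).append("()\n");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        System.out.println(toPlantUml(Sensor.class));
    }
}
```

Running it prints a textual class diagram containing `- reading : double`, `+ read()` and `+ calibrate()`, which a UML tool could render as a picture to discuss with the Customer.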
These improvements lead to a dual possibility: either start by creating a model of the system under development and then generate the software, or start from existing software and create a model out of it.
Discipline is still needed, especially in the domains mentioned above, but the results are well worth the effort, both from a commercial and a technical point of view.
From a research point of view, the approach to software development described here later evolved into the far more ambitious Model Driven Engineering, a methodology that allows the UML model of a system to be translated directly into instructions executable by a processor.
I am planning to post on these pages some guides and examples about tools and procedures to develop software using these technologies.
I hope that they are going to be useful.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.