Software is a generic term for the programs that a general-purpose processor can execute. A program is similar to a recipe: a list of ingredients (data) and steps to perform (instructions) in order to get your food (the work) ready.
Before we dive in, let me clarify a few concepts that I will be using a lot in this series. These terms vary between platforms, but I will stick to their classical meaning in the desktop world.
- Source code: a text file written by the programmer in a programming or scripting language.
- Application (or program): the binary that results from translating the source code into a format the microprocessor understands. The translation process is usually called compiling. When the source code is split across several files, there is a second step in generating the application, usually called linking, in which the separately compiled sources are combined into a single application.
- Process (or task): an executing instance of an application. You can launch Notepad twice, and each instance is a separate process. Some applications prevent launching multiple instances, but that is a feature of the program, not a limitation imposed by the OS.
In the early days, computers (more precisely, their operating systems) could run only a single task at a time. Several processes could be loaded at once, but only one was ever active; when a new process started, all the others were paused. This was inefficient from the standpoint of both the CPU and the user: the user could not run several programs in parallel, and when the active process got out of control or crashed, it took the entire system down with it. In addition, plenty of computing resources were wasted.
Today, you would simply launch your media player and listen to your favorite music while playing a game of chess. But in the 80’s and early 90’s you couldn’t do this easily on an MS-DOS PC, since MS-DOS was not a multi-tasking operating system. You could either listen to music (not really, since audio expansion cards were rather expensive back then) or play your game of chess, but not both at the same time. At that time, Windows was just an application running on top of MS-DOS, and Linux wasn’t born yet.
This inefficiency was identified early on in the development of operating systems, and multi-tasking was introduced with UNIX in the 70’s. But UNIX was prohibitively expensive and thus used mostly by large companies and academic or governmental institutions.
Multi-tasking means that the processor executes a series of instructions from task A, then a series of instructions from task B, before returning to continue task A (or perhaps moving on to task C). This switching between processes happens so fast (by human standards) that it gives the appearance that all processes are indeed running at the same time.
Before multi-tasking entered the stage, a naïve precursor of it, called multi-programming, was the first attempt at running multiple processes on the same machine. In multi-programming, the running program was interrupted only when it executed an instruction that had to wait for a peripheral; execution then simply switched to another program. However, if the running program never accessed a peripheral, it was never interrupted.
In either case, the operating system decides which process to give control to next. This is the job of a component called the task scheduler, which picks the next process to execute using one of many available scheduling strategies and algorithms. The topic is so rich and fascinating that going into detail would hijack this entire mini-series, so we will not delve too deep into it here.
There are two main strategies that multi-tasking operating systems can employ.
Cooperative multi-tasking means that the processes themselves are written in such a way that they cede control back to the operating system every so often. The task scheduler then allows another process to run until it, too, cedes control, and so on. This approach has many challenges. If a process is badly or maliciously written, it can simply hog all the CPU time, and the operating system has no power over it. And when writing an application for such an environment, it is always difficult to estimate when to release control back to the operating system. Early versions of Windows and MacOS used this strategy, and they were notoriously unstable and, by today’s standards, unusable, mostly because of badly written applications, not necessarily because of the OS-es themselves.
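To make the idea concrete, here is a toy cooperative scheduler sketched in Python. The tasks are plain generators, and the `yield` statement plays the role of the application voluntarily ceding control back to the OS; the task names and step counts are just illustrative. Note that if a task never yields, it runs to completion and starves everyone else, exactly the weakness described above.

```python
from collections import deque

trace = []  # records which task ran at each step

def task(name, steps):
    # A "well-behaved" cooperative task: it does a bit of work,
    # then voluntarily cedes control with `yield`.
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield

def run_cooperatively(tasks):
    # A simple round-robin ready queue: run a task until it yields,
    # then move it to the back of the queue.
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run until the task yields...
            ready.append(current)  # ...then requeue it
        except StopIteration:
            pass                   # task finished, drop it

run_cooperatively([task("A", 3), task("B", 3)])
print(trace)  # tasks interleave only because each one yields
```

Because both tasks yield after every step, the scheduler interleaves them perfectly; delete the `yield` from one task and it will monopolize the "CPU" until it finishes.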
The other, and superior, multi-tasking strategy is the preemptive strategy. Here, the operating system takes advantage of the processor’s interrupt mechanism to switch control between processes. The added advantage is that this mechanism is completely transparent to application developers. Taking the context-switching responsibility away from application developers made the OS more robust, no longer at the mercy of the most poorly written application it happened to be executing.
Then a next step was identified: splitting a single application into several execution threads. This allows applications to become more responsive and to use system resources more efficiently. Threads, too, are managed by the operating system via the scheduler, not directly by the application. In fact, to the scheduler a single-threaded application is just another thread.
If you have two applications, one printing a “+” symbol and the other printing a “-” symbol, and you run them on a modern operating system, you will see a random sequence of those symbols mixed together, depending on how the scheduler gave control to the two processes. Every time you rerun the applications, you will get a different random sequence of “+” and “-” symbols.
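A sketch of this two-process experiment, using Python’s `multiprocessing` module to stand in for two separate applications. Each worker is a separate OS process; instead of printing directly, the workers push their symbols onto a shared queue so that we can inspect the interleaving afterwards (the names `emit`, `p_plus`, and `p_minus` are my own, not anything standard).

```python
import multiprocessing as mp

def emit(symbol, count, queue):
    # Stands in for an application that prints its symbol repeatedly.
    for _ in range(count):
        queue.put(symbol)

if __name__ == "__main__":
    queue = mp.Queue()
    p_plus = mp.Process(target=emit, args=("+", 500, queue))
    p_minus = mp.Process(target=emit, args=("-", 500, queue))
    p_plus.start()
    p_minus.start()

    # Drain the queue before joining, then join both processes.
    results = [queue.get() for _ in range(1000)]
    p_plus.join()
    p_minus.join()

    sequence = "".join(results)
    print(sequence[:40])  # the ordering differs from run to run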
Similarly, if you have a single application with two threads, each printing one of those symbols, you will get another random sequence of plusses and minuses. And every time you restart the application, you will get a different random sequence. Exactly as above.
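The single-application, two-thread version of the experiment can be sketched with Python’s `threading` module. The two threads append their symbols to a shared list rather than printing, so the result can be inspected; how the appends interleave is entirely up to the scheduler, and may even be a solid run of one symbol followed by the other if one thread finishes before the second is scheduled.

```python
import threading

output = []  # shared between the two threads

def print_symbol(symbol, count):
    # Stands in for a thread that prints its symbol repeatedly.
    for _ in range(count):
        output.append(symbol)

t_plus = threading.Thread(target=print_symbol, args=("+", 1000))
t_minus = threading.Thread(target=print_symbol, args=("-", 1000))
t_plus.start()
t_minus.start()
t_plus.join()
t_minus.join()

print("".join(output)[:40])  # the ordering differs from run to run
```

Whatever the ordering turns out to be, all 2000 symbols arrive: the scheduler decides only *when* each thread runs, not *whether* it runs.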
But if it all behaves the same, and two threads act just like two separate tasks, what is the point of spending effort developing multi-threading when multi-tasking already exists?
To be continued…
Previous: Episode I – Processors and Cores
Next: Episode III – Processes and Threads