




Multi-threaded systems

Time slicing

In multi-threaded systems, the available processor is time sliced between threads of execution. This has important consequences for static assets and for the attributes of instances of module classes. Automatic assets that are local to a method exist separately for each invocation, so there is no risk that these automatic assets will be changed by other threads in an uncontrolled way. However, static items contain asset values that are shared with other threads. The same holds for the attributes of a module.

For example, if a method wants to increment an integer value of a non-automatic asset, then in most cases the value is first read into an accumulator. After that, the value of the accumulator is incremented, and finally the value is stored back at its original position. In the meantime, another thread may have done other things with the value, but its actions are erased when the incremented value is stored back. The other thread therefore does not behave as expected. This means that the increment action must be enclosed in a so-called critical section. Other threads are not allowed to interfere during that critical section. In particular, a task must not be stopped while it is inside a critical section. Thus, whenever a thread reaches an operation or a memory location where it can obstruct other threads, or where it can be obstructed by other threads, it must install a critical section or ensure by other means that other threads cannot access that location. An implementation can realize a critical section by prohibiting the thread switcher from switching threads until the critical section has been passed.
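The read-increment-store hazard described above can be sketched in Python. This is an illustrative sketch, not part of the original text: the lock plays the role of the critical section, and the function names are chosen here for clarity.

```python
import threading

counter = 0                      # shared, non-automatic value
lock = threading.Lock()          # guards the critical section

def increment_unsafely(n):
    global counter
    for _ in range(n):
        counter += 1             # read-modify-write: increments may be lost

def increment_safely(n):
    global counter
    for _ in range(n):
        with lock:               # critical section: other threads wait here
            counter += 1         # read, increment, store back, all protected

threads = [threading.Thread(target=increment_safely, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- every increment survives
```

With `increment_unsafely` instead, interleaved reads and stores could erase increments and leave the counter below 400000.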


Threads are strands of execution. At any instant, only one operation is active in each thread. Methods that are called by a method belonging to a given thread also belong to that thread. If all loops inside the methods of a thread have stopped and none of the thread's methods calls another method, this does not mean that the life of the thread ends; it only means that the life of the current task ends. Only if no new tasks must start in this thread should the life of the thread end. Ending the life of a thread must be done explicitly. Tasks may initiate new threads. This must also be done explicitly, via a request to a task manager. A new thread starts with a new task, and a new task starts with a method call. A thread serves a sequence of mutually exclusive tasks. During its lifetime, the priority of a thread may change upward and downward with the priority of the task that currently runs in that thread.
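A thread serving a sequence of mutually exclusive tasks, with an explicit end of life, can be sketched as follows. The queue, the `STOP` marker, and the task names are assumptions made for this sketch; the text itself does not prescribe a mechanism.

```python
import queue
import threading

tasks = queue.Queue()
STOP = object()                   # explicit end-of-life marker
results = []

def thread_main():
    # The thread serves one task at a time; when a task finishes,
    # the thread lives on until it is told to end explicitly.
    while True:
        task = tasks.get()
        if task is STOP:
            break                 # explicit end of the thread's life
        task()                    # a task starts with a method call

def task_a(): results.append("a")
def task_b(): results.append("b")

worker = threading.Thread(target=thread_main)
worker.start()
tasks.put(task_a)                 # first task on this thread
tasks.put(task_b)                 # next task on the same thread
tasks.put(STOP)
worker.join()
print(results)  # ['a', 'b']
```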

Multiple threads may concurrently enter a single instance of a module. Dynamic instances of modules have the advantage that a single task may decide to keep the whole instance in its own control.


A task is not the same thing as a thread. A thread has no target; it is a service provided by the thread switcher. A sequence of mutually exclusive tasks may run on the same thread. A task may start another task, but it will always do this indirectly, by invoking a task scheduler. There is one exception: the task scheduler is itself a task, and it can directly start another task.

Threads and tasks are costly. Their handling takes processor time and their administration takes memory.

Task scheduling

Starting, pausing, stopping and tuning a task

A task can only run on an existing thread. Each task has a dataset that contains the assets which characterize the task. A task is started by initializing this dataset. This includes setting the memory area where the thread switcher expects the data of the method that becomes active when the task gets its turn. The first method of the task then starts automatically when the turn of its thread arrives.

Tuning the activity in a thread is done by adapting the priority of the currently running task. Usually this is done at the instigation of the task itself. A task is paused by setting its priority to the lowest possible value, or else the scheduler must support a special pause state. Pausing a task corresponds to pausing the thread on which the task runs. After the pause, the task is either reactivated or stopped.

Stopping is like pausing, but in addition the dataset that contains the assets of the task is cleared. This dataset can then be used for another task that acts on the same thread, or the thread may be removed as well. Thus stopping a task does not necessarily mean that the corresponding thread is stopped too. Both stopping and pausing a task may leave one or more modules in an unexpected state. A stopped task can be followed by a repair task; this is not sensible for a paused task. A task must not be paused when other threads may find modules that are traversed by that task in an unexpected state.

Scheduling and priorities

If two or more tasks share the highest priority, then the scheduler may adopt one of three policies.

        The scheduler may decide to let each task run to its completion and then switch to the next task.

        The scheduler may divide processor time equally between the tasks by switching tasks at regular intervals.

        The scheduler may give privilege to the tasks with the earliest deadline. This offers a high degree of readiness for random sporadic tasks.

Task priorities play an important role in the scheduling activity. Therefore, the task priorities are part of the current conditions that together determine which triggers will be fired. Another influence is the need to access shared resources. A task scheduler uses the thread switcher to exert its influence.

Scheduling algorithms

If a task that has a higher priority than the currently running task becomes known to the scheduler, then the currently running task is halted and the higher-priority task gets its chance to run. Thus scheduling comes down to setting priorities.

If the priorities of tasks are kept fixed from their start, then rate monotonic priority assignment is the optimal priority setting scheme. Under this scheme, the priorities of tasks increase monotonically with decreasing task deadlines. If task start times are uniformly distributed, the achievable processor utilization equals about 83 % for a two-task occupation and approaches ln 2 (about 69 %) for a system that employs many tasks.
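These figures follow the well-known n(2^(1/n) − 1) utilization bound for fixed-priority rate monotonic scheduling, which can be checked with a few lines of arithmetic:

```python
import math

def rm_bound(n):
    # Utilization bound for n periodic tasks under rate monotonic scheduling.
    return n * (2 ** (1 / n) - 1)

print(round(rm_bound(2), 3))    # 0.828 -> the ~83 % two-task figure
print(round(rm_bound(50), 3))   # many tasks: bound falls toward ln 2
print(round(math.log(2), 3))    # 0.693, the limiting value
```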

Dynamic adaptation of task priorities can bring the achievable processor utilization closer to 100 %. The corresponding scheduling algorithm is much more complex than a fixed-priority algorithm, and increasing complexity usually endangers robustness. In any case, only trustworthy implementations of scheduling algorithms must be used.

Handling resource restrictions

A complex situation occurs when two tasks with different priorities both require one or more of the same resources to perform their actions. The result may be that the low-priority task must free the resource when the high-priority task needs it. The low-priority task may be implemented by a network of methods, and the high-priority task's need for the resource may occur during the run of one of the methods on the low-priority trail. The release of the resource is not done by that method; it is done by the scheduling mechanism, after a critical section in the low-priority method finishes or when the low-priority task is paused or stopped. In either case, this happens before the original task has completed. From this example it follows that the scheduler determines what happens next. The rest of the original low-priority trail may be finished once the resource is freed again by the higher-priority task, but it is quite possible that by then the conditions have changed such that a quite different execution path is taken. For example, the scheduler may decide to restart the low-priority task from its beginning. In many cases, after stopping a task, the scheduler must start a repair task.
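The handover of a shared resource at a critical-section boundary can be sketched with two Python threads. This is only an illustration of the blocking behaviour: Python threads have no priorities, so the "low" and "high" labels, the lock, and the sleep timings are assumptions of the sketch, not a scheduler.

```python
import threading
import time

resource = threading.Lock()
order = []

def low_priority():
    with resource:                    # critical section over the shared resource
        order.append("low: holds resource")
        time.sleep(0.05)              # the high-priority task must wait meanwhile
    # The resource is released only at the end of the critical section,
    # not in the middle of the method.

def high_priority():
    time.sleep(0.01)                  # arrives while "low" already holds the lock
    with resource:                    # blocks until the critical section ends
        order.append("high: got resource")

t_low = threading.Thread(target=low_priority)
t_high = threading.Thread(target=high_priority)
t_low.start(); t_high.start()
t_low.join(); t_high.join()
print(order)  # ['low: holds resource', 'high: got resource']
```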

Thread switching versus scheduling

When access to a processor switches from one thread to another, the values that characterize the old thread are stored temporarily until that thread is activated again. The thread switcher divides processor time between running methods. Only the data that are relevant for the method that is active at switching time are stored; this includes its automatic values. If a method was running at switching time, then after switching back to the thread, the method's data are restored and the same method proceeds as if nothing happened. However, static values in the domain of the current task may have changed. This change may influence the route of the execution path.

The thread switcher is not identical to the scheduler, and it does not call methods. The scheduler is just another task. Apart from the thread switcher, it has the highest priority.

Switching connections

The decision to let every system mode correspond to a scheduling plan and a connection scheme defines the context such that it can be controlled exactly. However, tasks may decide to request and release connections dynamically. Thus, if the thread switcher switches tasks, it must also switch the corresponding dynamically created connections. This must be done without resetting or releasing the connected instances. Thread switching must not cost too much time, which means that switching the connections must also be performed quickly. In turn, this means that the number of connections to be switched must be kept small.


