Interrupt handling must be done immediately. For that reason, it is usually implemented as a separate service. Interrupts set the conditions that produce internal triggers on which task schedulers and tasks may act. Interrupts can be caused by hardware, but it is also possible to let software generate interrupts. This latter approach can be interpreted as communication between a module and the interrupt handler.
Interrupt handling is partly performed by the hardware of a processing unit. The program pointer that points to the current instruction is stored and replaced by a pointer to an instruction that represents the start of an interrupt handling routine. When that routine completes, the original program pointer is restored and the original program continues. However, it is possible that in the meantime the stored program pointer is replaced by another value. This possibility is exploited by the thread-switcher.
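The mechanism can be modeled in a few lines, with the caveat that this is only an analogy, not real interrupt hardware: each "thread" below is a Python generator, its saved resume point plays the role of the stored program pointer, and the switcher "replaces" the pointer by choosing a different thread to resume after each simulated interrupt.

```python
# Illustrative model (not real interrupt hardware): a generator's saved
# resume point stands in for the stored program pointer; the switcher
# resumes a different thread after each "interrupt".

from collections import deque

def make_thread(name, log):
    def body():
        for step in range(3):
            log.append((name, step))
            yield  # point where an "interrupt" suspends this thread
    return body()

log = []
ready = deque([make_thread("A", log), make_thread("B", log)])

while ready:
    thread = ready.popleft()      # restore a stored "program pointer"
    try:
        next(thread)              # run until the next "interrupt"
        ready.append(thread)      # thread-switcher: resume someone else next
    except StopIteration:
        pass                      # thread finished

print(log)   # A and B alternate: [('A', 0), ('B', 0), ('A', 1), ...]
```

Because the switcher requeues a suspended thread instead of resuming it immediately, the two threads interleave, which is exactly the effect of replacing the stored program pointer.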
If a centralized scheduler must schedule the threads that run inside a module, then it must have sufficient knowledge of all potential conditions that can generate triggers. This is in conflict with the hiding of intellectual property that is one of the major features of proper modules. This means that a module must indirectly control the scheduling of the tasks that pass through that module. Indirect scheduling is done by invoking a centralized task scheduler. In contrast, time slicing must always be performed by a central service.
Modules cannot perform their scheduling task in isolation. They must communicate with a central scheduler and with zero or more resource brokers to perform their (indirect) scheduling task such that all modules work in proper concordance.
Traditionally an RTOS is used to perform the time slicing as well as the scheduling tasks. In a traditional RTOS the scheduler is always a centralized service that often knows in detail how all tasks must be handled. Due to IP hiding, the internals of modules are hidden from other modules, including centralized task schedulers. The centralized task scheduler may have direct access to start methods that are part of an interface of a module, but it will usually have no direct access to task scheduling criteria. A centralized scheduler does not know how to handle the internals of modules. This means that modules must perform their own (indirect) task scheduling. A module that indirectly starts, stops, pauses, reactivates or otherwise tunes a task will do this by using the services of a central scheduler.
Conclusion: In modularized systems that operate under resource-restricted real-time conditions, each of the modules must implement its internal scheduling. When a module starts new tasks, its implementation may use the services of a centralized scheduler and when necessary, it may cooperate with one or more resource brokers. The centralized scheduler must be able to support paired tasks that are a combination of a primary task and a repair task. A traditional RTOS might not suit this requirement.
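A minimal sketch can show what "support for paired tasks" asks of a centralized scheduler. The class and method names below are illustrative, not part of any existing RTOS API: the scheduler runs a primary task and, when that task fails or is stopped, runs the paired repair task to bring the system back to a safe state.

```python
# Minimal sketch of a centralized scheduler that supports paired tasks:
# a primary task plus a repair task that restores a safe state when the
# primary fails. All names are illustrative.

class PairedTaskScheduler:
    def __init__(self):
        self.trace = []

    def run(self, name, primary, repair):
        """Run the primary task; on failure, run its paired repair task."""
        try:
            primary()
            self.trace.append((name, "completed"))
        except Exception:
            repair()                      # bring the modules back to a safe state
            self.trace.append((name, "repaired"))

sched = PairedTaskScheduler()
sched.run("ok",   primary=lambda: None,  repair=lambda: None)
sched.run("fail", primary=lambda: 1 / 0, repair=lambda: None)
print(sched.trace)   # [('ok', 'completed'), ('fail', 'repaired')]
```

The essential point is that the pairing is the scheduler's responsibility: the repair task is guaranteed to run whenever the primary does not complete, which a traditional RTOS scheduler does not promise out of the box.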
Routines that implement parts of the actions of a task must regularly observe the assets in the domain of the task. If these values and the target of the task give rise to a fine grain scheduling action, then the routine must adapt its internal execution path. Programming languages contain several constructs that support such fine grain scheduling.
The routine may decide to put the current task on low priority because it cannot get access to a required resource. When the resource becomes available, it may increase its priority again. It could also delegate this management to a resource broker.
The routine can call another method. If that method returns, the routine can proceed with its execution. However, it is also possible that the called method does not return.
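The priority adaptation described above can be sketched as follows. This is an assumption-laden illustration: `Resource`, `SchedulerService` and `routine_step` are hypothetical names, and `set_priority` stands in for whatever priority interface the central scheduler actually offers.

```python
# Sketch of fine grain scheduling inside a routine (illustrative names):
# the routine observes a required resource and adapts its execution path,
# lowering its task's priority while the resource is occupied and raising
# it again once the resource becomes available.

class Resource:
    def __init__(self):
        self.free = False

class SchedulerService:
    """Stand-in for the central scheduler's priority interface."""
    def __init__(self):
        self.priority = 5
    def set_priority(self, value):
        self.priority = value

def routine_step(resource, scheduler):
    if not resource.free:
        scheduler.set_priority(1)   # back off: resource is occupied
        return "waiting"
    scheduler.set_priority(5)       # resource available: resume normal priority
    return "working"

res, sched = Resource(), SchedulerService()
print(routine_step(res, sched), sched.priority)   # waiting 1
res.free = True
print(routine_step(res, sched), sched.priority)   # working 5
```

Note that the routine never schedules anything itself; it only adapts its own path and asks the central service to adjust the priority, which is exactly the indirect scheduling discussed earlier.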
Apart from the initial main task, all tasks in a modular system are started inside one of the modules. This can be accomplished by invoking a task scheduler. The initiative can be taken by one of the methods of a module, or the initiative comes from a centralized task scheduler. This is especially the case when the task is a periodic real-time task or when the task is a repair task. The fact that a task is started inside a module means that the task starts with a method that is a public member of an interface of that module.
Before a task starts, the task scheduler installs the required connection scheme that accompanies the task schedule. However, this might not cover all connections that the participating modules need. Modules can request new or different connections. The targets of these connections are resources that the modules need to perform their purpose. Shared resources may handle their own access, but often a client may want access to more than one resource at the same time. In that case, the application of a resource broker can be more efficient. The client submits a request with a given priority to the broker, and the broker tries to obtain the required but occupied resources by convincing lower priority tasks to free them. This may involve a pause or even a later restart of the low priority task. If all resources are free, then the client can proceed with its task. In order to perform its task, the broker must communicate with its clients and with the current users of the resources. The implementation becomes relatively simple when all relevant resources that are managed by this broker are only applied via this broker. A rather intrusive implementation of a resource broker may interfere with the scheduler in order to control the priorities of its clients.
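The broker protocol above can be sketched in a few lines. This assumes the simple case the text mentions, where all managed resources are applied only via the broker; the class name, the priority convention (higher number wins) and the pause mechanism are all illustrative.

```python
# Sketch of a resource broker (illustrative): a client requests several
# resources at once with a priority; the broker frees resources held by
# lower priority tasks (pausing them) and refuses when a holder has
# equal or higher priority, so the client gets all resources or none.

class ResourceBroker:
    def __init__(self, resources):
        self.holders = {r: None for r in resources}   # resource -> (task, priority)
        self.paused = []

    def request(self, task, priority, wanted):
        # First check that every wanted resource is free or preemptible.
        for r in wanted:
            holder = self.holders[r]
            if holder is not None and holder[1] >= priority:
                return False                  # held by an equal/higher priority task
        # All obtainable: pause lower priority holders and assign.
        for r in wanted:
            holder = self.holders[r]
            if holder is not None:
                self.paused.append(holder[0]) # convince the holder to free it
            self.holders[r] = (task, priority)
        return True

broker = ResourceBroker(["bus", "dac"])
print(broker.request("logger", priority=1, wanted=["bus"]))           # True
print(broker.request("control", priority=9, wanted=["bus", "dac"]))   # True
print(broker.paused)                                                  # ['logger']
print(broker.request("logger", priority=1, wanted=["dac"]))           # False
```

The check-then-assign structure is what makes multi-resource requests all-or-nothing and thereby avoids the partial acquisition that leads to deadlocks.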
In some cases, tasks must communicate. This can be done by exchanging values of static assets. These assets belong to modules that are approached or passed by the tasks. Sometimes two or more tasks must synchronize their activity. In principle, this can be achieved using a Boolean value of a static asset. It is also possible to make use of semaphores. Access to shared resources is a frequent reason for requiring task synchronization. Careless design of task synchronization can cause severe faults such as deadlocks and race conditions.
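The semaphore variant can be sketched with Python's `threading.Semaphore` standing in for an RTOS semaphore: the consumer blocks until the producer signals that the shared asset holds a valid value, which prevents a race on that value.

```python
# Sketch of synchronizing two tasks with a semaphore: the consumer
# blocks until the producer signals that the shared asset is valid.

import threading

shared = {"value": None}
ready = threading.Semaphore(0)        # 0 permits: consumer must wait

def producer():
    shared["value"] = 42              # write the shared asset first
    ready.release()                   # then signal that it is valid

def consumer(result):
    ready.acquire()                   # blocks until the producer releases
    result.append(shared["value"])

result = []
t1 = threading.Thread(target=consumer, args=(result,))
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)   # [42]
```

The ordering matters: writing the asset before releasing the semaphore is what makes the Boolean-flag idea safe; doing it the other way around reintroduces the race condition mentioned above.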
Distributed control makes use of intelligence and data that are local to the modules in addition to some dedicated system wide data. The local fine grain control can be handled relatively easily. Modules are controlled at a coarse grain, so the additional central control can also be handled easily.
In large and complex systems, compared to distributed control, fine grain central control would be orders of magnitude more complex.
Designing dynamic behavior is the most delicate part of the design of complex systems. The behavior is implemented via tasks that are initiated and controlled by one or more centralized task schedulers. A task scheduler works via a scheduling plan that uses a selection of scheduling algorithms. The schedulers perform coarse grain scheduling. Inside the modules, the tasks can indirectly influence the coarse grain scheduling by invoking a centralized scheduler. Further, also inside the modules, the tasks perform fine grain scheduling. The fine grain scheduling is implemented by a program. This program consists of a network of program blocks. The scheduler starts a routine that is the start block of the task. The central scheduler accesses the start method of a task via the task interface of the module that contains this method. Routines may call other routines. With the exception of some system functions, the entry point of a program block is always part of a module. When a routine calls another routine, the entry point of the called routine may coincide with the entry point of an interface method of another module. Otherwise, the called routine must belong to the same module.
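The division of labor above can be sketched as follows. The names are illustrative: the central scheduler sees only the start method on the module's task interface, while the internal program blocks and the fine grain path selection over local assets stay hidden inside the module.

```python
# Sketch of a task as a network of program blocks (illustrative names):
# the scheduler only knows the start method; inside the module, blocks
# call other blocks, choosing the path from local asset values.

class Module:
    def __init__(self):
        self.asset = 0            # local static asset, hidden from outside
        self.trace = []

    # --- task interface: the only entry point the scheduler sees ---
    def start_task(self):
        self.trace.append("start")
        self._block_a()

    # --- internal program blocks (hidden IP) ---
    def _block_a(self):
        self.asset += 1
        self.trace.append("a")
        if self.asset < 2:        # fine grain path selection
            self._block_a()
        else:
            self._block_b()

    def _block_b(self):
        self.trace.append("b")

def central_scheduler(module):
    module.start_task()           # coarse grain: start via the interface

m = Module()
central_scheduler(m)
print(m.trace)   # ['start', 'a', 'a', 'b']
```

The scheduler's view stops at `start_task`; everything after that is the module's own fine grain scheduling, which is why IP hiding and centralized scheduling can coexist.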
The program that corresponds to a task defines several possible trajectories that may pass through several modules. Inside such a module, the task may initiate other tasks. Generally, this will be done in cooperation with a centralized task scheduler, by communicating the data of the new tasks and a corresponding command to the centralized scheduler. The module that contains the centralized scheduler must provide an interface method that implements this service.
A task is vulnerable when, after being stopped, it may leave the modules it passes through in an unsafe state. Vulnerable tasks must always be implemented together with a paired repair task.
Low-level programs contain operands, path selectors, block separators and local assets. The operands in a low-level program are indivisible. Methods are special forms of programs that represent the communication between the sender that calls the method and a receiver that executes the corresponding program. In the communication, some asset values may be sent together with the command that triggers its program block. The method may return a method result back to the caller. The method result is also an asset value. High-level languages use constructs that may represent complete low-level program blocks or complex structures of assets. The operators and other constructs in a high-level program decompose into low-level program fragments. Thus, every high-level program implies a corresponding low-level program. Programmers use high-level programs. Processing units work with low-level programs. Compilers translate high-level programs into low-level programs. Compilers may optimize their code. The result may be that statements in the high-level program are no longer represented in the low-level program. This can raise problems when memory locations are accessed that are also accessed by other actors, such as hardware. Special high-level language constructs, such as the volatile qualifier in C, can be used to prevent these problems.
A system loader is the first task that takes action. When the program code and the asset data are loaded, an initialization action is started during which the system registry is adjusted. After this, the central services are initiated and the main application is started. This task may initiate several concurrent tasks that together implement the behavior of the system.