Microprocessor Design/Real-Time Operating System

Real-Time Operating System
A Real-Time Operating System (RTOS) is a multitasking operating system intended to serve real-time application requests. It must be able to process data as it arrives, typically without buffering delays. RTOSs are implemented in products all around us, ranging from military and consumer devices to scientific instruments. An RTOS is the operating system used in many embedded systems because it allows real-time applications to be designed and extended more easily while still meeting the required performance.

The term comprises two components: “Real-Time” and “Operating System”. "Real-Time" indicates that the operating system must respond within a definite time to the critical operations it performs, and must do so with high reliability. An RTOS is therefore an operating system that supports real-time applications and embedded systems by providing a logically correct result within the required deadline. These capabilities define its deterministic timing behavior and its limited use of resources.

Classification of RTOS
RTOSs are broadly classified into three types:

 * 1) Hard real-time: The degree of tolerance for missed deadlines is extremely small or zero. A missed deadline has catastrophic results for the system.
 * 2) Firm real-time: Missing a deadline might result in an unacceptable quality reduction.
 * 3) Soft real-time: The deadlines may be missed and can be recovered from. Reduction in system quality is acceptable.

Features of RTOS
A basic RTOS will be equipped with the following features:

 * Multitasking and Preemptibility: An RTOS must be multitasking and preemptible to support the multiple tasks of a real-time application. The scheduler must be able to preempt any task in the system and allocate a resource to the task that needs it most, even at peak load.
 * Task Priority: Preemption requires identifying the task that needs a resource most urgently and giving it control of that resource. In an RTOS this is achieved by assigning each task an appropriate priority level, and when priority scheduling is used the RTOS must offer a sufficient number of priority levels so that applications with stringent priority requirements can be implemented.
 * Reliable and Sufficient Inter-Task Communication Mechanisms: For multiple tasks to communicate in a timely manner and to preserve data integrity among each other, reliable and sufficient inter-task communication and synchronization mechanisms are required.
 * Priority Inheritance: To avoid priority inversion, a low-priority task that holds a resource needed by a high-priority task should temporarily inherit the higher priority until it releases the resource.
 * Predefined Short Latencies: An RTOS must have accurately defined, short timings for its system calls.
 * Control of Memory Management: To ensure a predictable response to an interrupt, an RTOS should provide a way for a task to lock its code and data into real (physical) memory.
RTOS Architecture
The architecture of an RTOS is dependent on the complexity of its deployment. Good RTOSs are scalable to meet different sets of requirements for different applications. For simple applications, an RTOS usually comprises only a kernel. For more complex embedded systems, an RTOS can be a combination of various modules, including the kernel, networking protocol stacks, and other components.

RTOS vs. General Purpose OS
Many non-real-time operating systems also provide similar kernel services. The key difference between general-computing operating systems and real-time operating systems is the need for “deterministic” timing behavior in the real-time operating systems. Formally, “deterministic” timing means that operating system services consume only known and expected amounts of time. In theory, these service times could be expressed as mathematical formulas. These formulas must be strictly algebraic and not include any random timing components. Random elements in service times could cause random delays in application software and could then make the application randomly miss real-time deadlines – a scenario clearly unacceptable for a real-time embedded system.

General-computing non-real-time operating systems are often quite non-deterministic. Their services can inject random delays into application software and thus cause slow responsiveness of an application at unexpected times. If you ask the developer of a non-real-time operating system for the algebraic formula describing the timing behavior of one of its services (such as sending a message from task to task), you will invariably not get an algebraic formula. Instead the developer of the non-real-time operating system (such as Windows, Unix or Linux) will just give you a puzzled look. Deterministic timing behavior was simply not a design goal for these general-computing operating systems.

On the other hand, real-time operating systems often go a step beyond basic determinism. For most kernel services, these operating systems offer constant load-independent timing.

Kernel in RTOS
The “kernel” is the part of an operating system that provides the most basic services to application software running on a processor.

The “kernel” of a real-time operating system (“RTOS”) provides an “abstraction layer” that hides from application software the hardware details of the processor (or set of processors) upon which the application software will run. In providing this “abstraction layer” the RTOS kernel supplies five main categories of basic services to application software.


 * 1) The most basic category of kernel services, at the very center, is Task Management.
 * 2) The second category of kernel services is Intertask Communication and Synchronization.
 * 3) Many RTOS kernels provide Dynamic Memory Allocation services.
 * 4) Many RTOS kernels also provide a “Device I/O Supervisor” category of services.
 * 5) In addition to kernel services, many RTOSs offer a number of optional add-on operating system components for high-level services such as file system organization, network communication, network management, database management, user-interface graphics, etc.

Task Scheduling
Most RTOSs schedule tasks using a scheme called “priority-based preemptive scheduling”. Each task is assigned a priority, and at any point in time the scheduler runs the highest-priority task that is ready to run. Every task runs to completion unless it is preempted; the scheduler is responsible for time-sharing the CPU among tasks. Each time the priority-based preemptive scheduler is alerted by an external-world trigger (such as a switch closing) or a software trigger (such as a message arrival), it must go through the following five steps, which together are called “task switching”:
 * 1) Determine whether the currently running task should continue to run.
 * 2) If not, determine which task should run next.
 * 3) Save the environment of the task that was stopped (so it can continue later).
 * 4) Set up the running environment of the task that will run next.
 * 5) Allow this task to run.
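The selection logic behind these steps can be sketched in Python. This is a minimal simulation, not a real kernel: the task names and priority values are illustrative, and a real RTOS would operate on task control blocks and saved register contexts rather than plain objects.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # higher number = higher priority

def pick_next(tasks):
    # Steps 1-2: decide which task should run next - simply the
    # highest-priority task among those that are ready.
    return max(tasks, key=lambda t: t.priority)

def on_trigger(running, ready):
    # Steps 1-5 condensed: if a ready task outranks the running one,
    # a real kernel would save the running task's context (step 3),
    # set up the winner's environment (step 4), and let it run (step 5).
    return pick_next(ready + [running])

idle = Task("idle", 0)
sensor = Task("sensor", 5)
logger = Task("logger", 2)
print(on_trigger(idle, [sensor, logger]).name)  # prints "sensor": it preempts idle
```

Note that the running task is included in the comparison: if no ready task outranks it, step 1 concludes that it simply continues to run.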

Fixed Time Task Switching
The time it takes to do task switching is of interest when evaluating an operating system. A simple general-computing (non-preemptive) operating system might do task switching only at timer tick times, which might for example be ten milliseconds apart. Then if the need for a task switch arises anywhere within a 10-millisecond timeframe, the actual task switch would occur only at the end of the current 10-millisecond period. Such a delay would be unacceptable in most real-time embedded systems.

In fact, the term “real-time” does not mean “as fast as possible”; rather, “real-time” demands consistent, repeatable, known timing performance. Although a non-real-time operating system might do some faster task switching for small numbers of tasks, it might equally well introduce a long delay the next time it does the same task switch. The strength of a real-time operating system is its known, repeatable timing performance, which is also typically faster than that of a non-deterministic task scheduler when a software system contains a large number of tasks. Most often, the real-time operating system will exhibit task-switching times much faster than its non-real-time competitor when the number of tasks grows above 5 or 10.

Intertask Communication And Synchronization
Most operating systems, including RTOSs, offer a variety of mechanisms for communication and synchronization between tasks. These mechanisms are necessary in a preemptive environment of many tasks, because without them the tasks might well communicate corrupted information or otherwise interfere with each other.

For instance, a task might be preempted when it is in the middle of updating a table of data. If a second task that preempts it reads from that table, it will read a combination of some areas of newly-updated data plus some areas of data that have not yet been updated. These updated and old data areas together may be incorrect in combination, or may not even make sense. An RTOS’s mechanisms for communication and synchronization between tasks are provided to avoid these kinds of errors. Most RTOSs provide several mechanisms, with each mechanism optimized for reliably passing a different kind of information from task to task. Probably the most popular kind of communication between tasks in embedded systems is the passing of data from one task to another. Most RTOSs offer a message passing mechanism for doing this. Each message can contain an array or buffer of data.
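The half-updated-table hazard described above is usually prevented with a mutual-exclusion lock, one of the synchronization mechanisms an RTOS provides. The following is a minimal Python sketch under stated assumptions: threads stand in for RTOS tasks, and the two-field table is purely illustrative.

```python
import threading

table = {"x": 0, "y": 0}
table_lock = threading.Lock()

def update_table(x, y):
    with table_lock:      # writer holds the lock for the whole update
        table["x"] = x
        table["y"] = y

def read_table():
    with table_lock:      # reader sees the old pair or the new pair, never a mix
        return (table["x"], table["y"])

update_table(3, 4)
print(read_table())       # prints (3, 4)
```

Because a preempting reader must acquire the same lock, it blocks until the writer has finished, so the two fields are always observed as a consistent pair.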

Deterministic And High-Speed Message Passing
Intertask message communication is another area where different operating systems show different timing characteristics. Most operating systems actually copy messages twice as they transfer them from task to task via a message queue: first from the message-sender task to an operating-system-owned “secret” area of RAM, and then from that “secret” RAM area to the message-receiver task. This is clearly non-deterministic in its timing, as these copying activities take longer as message length increases. An approach that avoids this non-determinism and also accelerates performance is to have the operating system copy a pointer to the message and deliver that pointer to the message-receiver task without moving the message contents at all. To avoid access collisions, the operating system then goes back to the message-sender task and obliterates its copy of the pointer. For large messages, this eliminates the need for lengthy copying and eliminates non-determinism.
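The pointer-passing scheme can be sketched in Python, where object references play the role of pointers. The queue class and message contents here are illustrative, not any particular RTOS API; a one-element list stands in for the sender's pointer variable so that it can be obliterated.

```python
from collections import deque

class MessageQueue:
    def __init__(self):
        self._q = deque()

    def send(self, holder):
        # Take the reference off the sender's hands; only the queue
        # (and, later, the receiver) can reach the message buffer.
        self._q.append(holder[0])
        holder[0] = None          # obliterate the sender's "pointer"

    def receive(self):
        # Deliver the reference; the message contents are never copied.
        return self._q.popleft()

q = MessageQueue()
msg_ptr = [bytearray(b"sensor reading #42")]
q.send(msg_ptr)
print(msg_ptr[0])                 # prints None: the sender's pointer is gone
print(bytes(q.receive()))         # prints b'sensor reading #42'
```

Because only the fixed-size reference moves, `send` and `receive` take the same time for a 10-byte message as for a 10-kilobyte one, which is exactly the determinism property the text describes.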

Dynamic Memory Allocation
Dynamic memory allocation is when an executing program requests that the operating system give it a block of main memory. The program then uses this memory for some purpose. Determinism of service times is also an issue in the area of dynamic allocation of RAM memory. Many general-computing non-real-time operating systems offer memory allocation services from what is termed a “Heap”. Heaps suffer from a phenomenon called “External Memory Fragmentation” that may cause the heap services to degrade. External fragmentation arises when free memory is separated into small blocks and is interspersed by allocated memory. It is a weakness of certain storage allocation algorithms, when they fail to order memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are too small individually to satisfy the demands of the application.

This fragmentation problem can be solved by “garbage collection” (defragmentation) software. Unfortunately, “garbage collection” algorithms are often wildly non-deterministic – injecting randomly-appearing random-duration delays into heap services. These are often seen in the memory allocation services of general-computing non-real-time operating systems.

Real-time operating systems solve this problem of delay by altogether avoiding both memory fragmentation and “garbage collection”, and their consequences. RTOSs offer non-fragmenting memory allocation techniques instead of heaps. They do this by limiting the variety of memory chunk sizes they make available to application software. While this approach is less flexible than a memory heap, it avoids external memory fragmentation and the need for defragmentation.
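One common non-fragmenting technique is a fixed-size block pool: memory is carved into equal-size blocks up front, so any free block can satisfy any request and external fragmentation cannot arise. The following Python sketch illustrates the idea (the block sizes, counts, and class name are illustrative, not a real RTOS API):

```python
class BlockPool:
    def __init__(self, block_size, block_count):
        # All blocks are the same size, carved out once at start-up.
        self.block_size = block_size
        self._free = [bytearray(block_size) for _ in range(block_count)]

    def alloc(self, size):
        if size > self.block_size or not self._free:
            return None           # request cannot be satisfied
        return self._free.pop()   # O(1): no searching, no coalescing

    def free(self, block):
        self._free.append(block)  # O(1) return to the free list

pool = BlockPool(block_size=64, block_count=4)
b = pool.alloc(48)
print(len(b))                     # prints 64: a whole block is granted
pool.free(b)
```

Allocation and release are constant-time list operations regardless of how long the system has been running, which is the deterministic behavior an RTOS requires; the cost is internal waste when a request is smaller than the block size.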