Rate-monotonic scheduling
In computer science, rate-monotonic scheduling (RMS) is a scheduling algorithm used in real-time operating systems with a static-priority scheduling class. The static priorities are assigned on the basis of the cycle duration of the job: the shorter the cycle duration, the higher the job's priority.

These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.

Introduction

A simple version of rate-monotonic analysis assumes that threads have the following properties:
  • No resource sharing (processes do not share resources, e.g. a hardware resource, a queue, or any kind of semaphore blocking or non-blocking (busy-waits))
  • Deterministic deadlines are exactly equal to periods
  • Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks)
  • Static priorities assigned according to the rate monotonic conventions (tasks with shorter periods/deadlines are given higher priorities)
  • Context switch times and other thread operations are free and have no impact on the model


It is a mathematical model that contains a calculated simulation of periods in a closed system, where round-robin and time-sharing schedulers would otherwise fail to meet the scheduling needs. Rate-monotonic scheduling models a run of all threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question.
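For illustration, a minimal sketch of the rate-monotonic priority assignment rule (shorter period, higher priority) is given below in C, using the periods from the example later in this article. The task structure and the convention that a larger number means a higher priority are assumptions of this sketch, not any particular RTOS API.

/* A minimal sketch of rate-monotonic priority assignment: tasks with
 * shorter periods are given higher priorities. The task structure and
 * the convention that a larger number means a higher priority are
 * assumptions of this sketch, not any particular RTOS API. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    double period;    /* cycle duration (release period) */
    int priority;     /* assigned below                  */
} task_t;

/* Sort so that the longest period comes first (it gets the lowest priority). */
static int by_period_descending(const void *a, const void *b)
{
    double pa = ((const task_t *)a)->period;
    double pb = ((const task_t *)b)->period;
    return (pa < pb) - (pa > pb);
}

int main(void)
{
    /* Periods taken from the example later in this article. */
    task_t tasks[] = { {"P1", 8, 0}, {"P2", 5, 0}, {"P3", 10, 0} };
    const int n = sizeof tasks / sizeof tasks[0];

    qsort(tasks, n, sizeof tasks[0], by_period_descending);
    for (int i = 0; i < n; i++) {
        tasks[i].priority = i + 1;   /* shortest period ends up with priority n */
        printf("%s: period %g -> priority %d\n",
               tasks[i].name, tasks[i].period, tasks[i].priority);
    }
    return 0;
}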

Liu and Layland (1973) proved that for a set of n periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is:

    U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le n\left(2^{1/n} - 1\right)

where C_i is the computation time, T_i is the release period (with deadline one period later), and n is the number of processes to be scheduled. For example, U ≤ 0.8284 for n = 2. When the number of processes tends towards infinity, this expression will tend towards:

    \lim_{n \to \infty} n\left(2^{1/n} - 1\right) = \ln 2 \approx 0.693147\ldots
So a rough estimate is that RMS in the general case can meet all the deadlines if CPU utilization is 69.3% or less. The other 30.7% of the CPU can be dedicated to lower-priority, non-real-time tasks. It is known that a randomly generated periodic task system will usually meet all deadlines when the utilization is 85% or less; however, this fact depends on knowing the exact task statistics (periods, deadlines), which cannot be guaranteed for all task sets.
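As a numerical illustration, the bound n(2^{1/n} - 1) can be evaluated for a few task-set sizes to see it fall toward ln 2; the short C sketch below does exactly that (the function name liu_layland_bound is just illustrative).

/* Evaluating the Liu & Layland bound n(2^(1/n) - 1) for a few task-set
 * sizes, showing how it falls toward ln 2 ~ 0.6931. (Compile with -lm.) */
#include <math.h>
#include <stdio.h>

static double liu_layland_bound(int n)
{
    return n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    const int sizes[] = {1, 2, 3, 10, 100};
    for (int i = 0; i < 5; i++)
        printf("n = %3d  bound = %.4f\n", sizes[i], liu_layland_bound(sizes[i]));
    printf("limit (ln 2)   = %.4f\n", log(2.0));
    return 0;
}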

The rate-monotonic priority assignment is optimal, meaning that if any static-priority scheduling algorithm can meet all the deadlines, then the rate-monotonic algorithm can too. The deadline-monotonic scheduling algorithm is also optimal with equal periods and deadlines; in fact, in this case the two algorithms are identical. In addition, deadline-monotonic scheduling is optimal when deadlines are less than periods.

An optimal static-priority scheduling algorithm when deadlines are greater than periods is an open problem.

Avoiding priority inversion

In many practical applications, resources are shared and the unmodified RMS will be subject to priority inversion and deadlock hazards. In practice, this is solved by introducing priority inheritance. Alternative methods are to use lock-free algorithms or to avoid sharing a mutex/semaphore across threads with different priorities, so that resource conflicts cannot arise in the first place.

Disabling of preemption

Priority inversion can also be avoided by disabling preemption while a shared resource is held; examples of primitives that do this include:
  • The OS_ENTER_CRITICAL and OS_EXIT_CRITICAL primitives that lock CPU interrupts in a real-time kernel (the MicroC/OS-II kernel, commonly written as uC/OS-II),
  • The splx family of primitives which nest the locking of device interrupts (FreeBSD 5.x/6.x).

Priority inheritance

Examples of priority inheritance algorithms include (from simplest to most complex):
  • The basic priority inheritance protocol, which waits until a high-priority task requests a lock held by a low-priority task, and then boosts the low-priority task's priority up to the high-priority level until the lock is released. The in-kernel Linux real-time effort (http://rt.wiki.kernel.org/index.php/Main_Page), which aims to make the entire Linux kernel fully preemptive, contains an implementation of this algorithm.
  • The highest locker protocol, which requires off-line analysis of all task conflicts to find a "ceiling priority".
  • The priority ceiling protocol, an enhancement of basic priority inheritance that assigns a "ceiling priority" to each semaphore, namely the priority of the highest-priority job that will ever access that semaphore. A job then cannot preempt a lower-priority critical section if its priority is lower than the ceiling priority for that section.


Priority inheritance algorithms can be characterized by two parameters: first, whether the inheritance is lazy (only when essential) or immediate (boost priority before there is a conflict); second, whether the inheritance is optimistic (boost a minimum amount) or pessimistic (boost by more than the minimum amount):

              pessimistic                              optimistic
  immediate   OS_ENTER_CRITICAL / OS_EXIT_CRITICAL     splx, highest locker
  lazy        priority ceiling protocol                basic priority inheritance protocol


In practice, there is no mathematical difference (in terms of the Liu-Layland system utilization bound) between the lazy and immediate algorithms, and since the immediate algorithms are more efficient to implement, they are the ones used by most practical systems.

The basic priority inheritance protocol can produce chained blocking, that is, an almost arbitrarily long delay from the time a high priority task requests a critical section, until it is served. The other protocols guarantee that at most one lower priority critical section must finish before the high priority task gets its critical section.
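As a rough sketch of the basic priority inheritance rule described above, the C fragment below shows the boost on contention and the restore on release. The types and function names (task_t, mutex_t, inherit_priority, release_and_restore) are illustrative, not any particular kernel's API, and real implementations must also track waiter lists, lock nesting, and transitive inheritance.

/* A minimal sketch of the basic priority inheritance rule, assuming
 * larger numbers mean higher priority. Illustrative only; not a real
 * kernel API. */
#include <stdio.h>

typedef struct task {
    int base_priority;     /* priority assigned by the rate-monotonic policy */
    int current_priority;  /* may be boosted while holding a contended lock  */
} task_t;

typedef struct mutex {
    task_t *owner;         /* 0 (NULL) when the mutex is free                */
} mutex_t;

/* Called when `waiter` blocks on a mutex currently held by `m->owner`:
 * boost the owner lazily, and only up to the waiter's priority. */
static void inherit_priority(mutex_t *m, task_t *waiter)
{
    if (m->owner && waiter->current_priority > m->owner->current_priority)
        m->owner->current_priority = waiter->current_priority;
}

/* Called when the owner releases the mutex: drop back to the base
 * priority (ignores the case of multiple locks held at once). */
static void release_and_restore(mutex_t *m)
{
    if (m->owner) {
        m->owner->current_priority = m->owner->base_priority;
        m->owner = 0;
    }
}

int main(void)
{
    task_t low  = { .base_priority = 1, .current_priority = 1 };
    task_t high = { .base_priority = 3, .current_priority = 3 };
    mutex_t m   = { .owner = &low };     /* the low-priority task holds the lock */

    inherit_priority(&m, &high);         /* high blocks on m; low is boosted to 3 */
    printf("owner priority while contended: %d\n", low.current_priority);

    release_and_restore(&m);             /* lock released; the boost is removed   */
    printf("owner priority after release:   %d\n", low.current_priority);
    return 0;
}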

The priority ceiling protocol is available in the VxWorks
VxWorks
VxWorks is a real-time operating system developed as proprietary software by Wind River Systems of Alameda, California, USA. First released in 1987, VxWorks is designed for use in embedded systems.- History :...

 real-time kernel but is seldom enabled.

An example of usage of basic priority inheritance is related to the "Mars Pathfinder
Mars Pathfinder
Mars Pathfinder was an American spacecraft that landed a base station with roving probe on Mars in 1997. It consisted of a lander, renamed the Carl Sagan Memorial Station, and a lightweight wheeled robotic rover named Sojourner.Launched on December 4, 1996 by NASA aboard a Delta II booster a...

 reset bug" which was fixed on Mars by changing the creation flags for the semaphore so as to enable the priority inheritance.

Example

  Process   Execution time   Period
  P1        1                8
  P2        2                5
  P3        2                10


The utilization will be:

    U = \frac{1}{8} + \frac{2}{5} + \frac{2}{10} = 0.725

The sufficient condition for n = 3 processes, under which we can conclude that the system is schedulable, is:

    U \le 3\left(2^{1/3} - 1\right) \approx 0.7798

Since 0.725 ≤ 0.7798, the system is surely schedulable.

Remember, though, that this condition is sufficient but not necessary: a task set with a higher utilization may still be schedulable with this scheduling algorithm.
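A minimal check of this example in C is sketched below; the execution times and periods come straight from the table above, and the comparison is against the n = 3 Liu-Layland bound.

/* A worked check of the example above: U = 1/8 + 2/5 + 2/10 = 0.725,
 * compared against the n = 3 bound 3(2^(1/3) - 1) ~ 0.7798.
 * (Compile with -lm.) */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double C[] = {1, 2, 2};    /* execution times of P1, P2, P3 */
    const double T[] = {8, 5, 10};   /* periods of P1, P2, P3         */
    const int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];
    const double bound = n * (pow(2.0, 1.0 / n) - 1.0);

    printf("U = %.3f, bound = %.4f -> %s\n", U, bound,
           U <= bound ? "schedulable (sufficient test passed)"
                      : "sufficient test inconclusive");
    return 0;
}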

See also

  • Deos, a time and space partitioned real-time operating system containing a working Rate Monotonic Scheduler.
  • Deadline-monotonic scheduling
  • Dynamic priority scheduling
  • Earliest deadline first scheduling
  • RTEMS, an open source real-time operating system containing a working Rate Monotonic Scheduler.
  • Scheduling (computing)