The Linux Kernel/Multitasking

The Linux kernel is a preemptive operating system. As a multitasking OS, it allows multiple processes to share processors (CPUs) and other system resources. Each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks being executed without having to wait for each task to finish. For that, the kernel can, at any time, temporarily interrupt a task being carried out by the processor and replace it with another task, which can be new or a previously suspended one. The operation of swapping the running task is called a context switch.

Processes
A process is a running user-space program. The kernel starts the first process, /sbin/init, from the kernel_init() function via run_init_process(). Processes occupy system resources such as memory and CPU time. The system calls fork and clone are used to create new processes from user space. A process exits with an exit system call.

Linux inherits from Unix its basic process management system calls (⚲ API ↪ ⚙️ implementations):

fork ↪ creates a new process by duplicating the process invoking it.

_exit ↪ terminates the calling process "immediately". Any open file descriptors belonging to the process are closed.

wait ↪ suspends the execution of the calling process until one of its child processes terminates.

execve ↪ runs an executable file in the context of the current process, replacing the previous executable. This system call is used by the exec family of functions of libc.
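
👁 A minimal user-space sketch (not from the book) of how these calls fit together; /bin/echo is just an illustrative program:

    /* fork + exec + wait: duplicate, replace the child's image, reap it */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();              /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {                  /* child: replace image with /bin/echo */
            char *argv[] = { "echo", "hello from the child", NULL };
            execv("/bin/echo", argv);
            perror("execv");             /* reached only if execv failed */
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);        /* parent: block until the child exits */
        printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
        return 0;
    }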

Linux enhances the traditional Unix process API with its own system calls. clone creates a child process that may share parts of its execution context with the parent. It is often used to implement threads (though programmers will typically use a higher-level interface such as pthreads, implemented on top of clone).

PID, defined as pid_t, is a unique sequential number. ps -A lists current processes. The syscall getpid ↪ returns the PID of the current process, which internally is called TGID (thread group ID). A process can contain many threads. gettid ↪ returns the thread ID, which internally (and historically) is called PID. ⚠️ Warning: confusion. User-space PID ≠ kernel-space PID. ps -AL lists current processes and their threads (the LWP column). For a single-threaded process all these IDs are equal.
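
👁 A hedged sketch of the PID/TID distinction described above; it calls gettid via the raw syscall since older libcs lack a wrapper (build with -pthread):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static void *show_ids(void *arg)
    {
        (void)arg;
        /* same PID (TGID) as main, different TID */
        printf("thread: PID(TGID)=%d TID=%ld\n", getpid(), syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        /* in a single-threaded process PID == TID */
        printf("main:   PID(TGID)=%d TID=%ld\n", getpid(), syscall(SYS_gettid));
        pthread_create(&t, NULL, show_ids, NULL);
        pthread_join(t, NULL);
        return 0;
    }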

⚲ API
 * unistd.h
 * sys/types.h
 * sys/wait.h

⚙️ Internals
 * kernel/fork.c syscalls:
 * set_tid_address – set pointer to thread ID
 * clone – create a child process
 * vfork – create a child process and block parent
 * fork – create a child process
 * unshare – disassociate parts of the process execution context
 * kernel/sys.c syscalls:
 * prctl – operations on a process or thread
 * kernel/pid.c syscalls:
 * pidfd_open – obtain a file descriptor that refers to a process
 * pidfd_getfd – obtain a duplicate of another process's file descriptor
 * kernel/exit.c syscalls:
 * exit – terminate the calling process
 * exit_group – exit all threads in a process
 * waitid – wait for process to change state
 * wait4 – wait for process to change state



📖 References

Inter-process communication
Inter-process communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes it manages to share data. Methods for achieving IPC are divided into categories based on software requirements, such as performance and modularity, and on system circumstances. Linux inherited from Unix the following IPC mechanisms:

Signals (⚲ API ↪ ⚙️ implementations):


 * kill – sends a signal to a process
 * tgkill ↪ sends a signal to a thread
 * process_vm_readv, process_vm_writev ↪ – zero-copy data transfer between process address spaces

🔧 TODO:


 * Anonymous pipes and named pipes (FIFOs) ↪
 * Memory-mapped files ⤑
 * Sys V IPC:
 * Message queues
 * Semaphores
 * Shared memory: shmget, shmat, shmdt, shmctl
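
👁 A minimal sketch (not from the book) of the classic anonymous-pipe pattern: the parent writes, the child reads:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0)
            return 1;
        if (fork() == 0) {                    /* child: keep the read end */
            char buf[32] = "";
            close(fds[1]);
            read(fds[0], buf, sizeof(buf) - 1);
            printf("child read: %s\n", buf);
            _exit(0);
        }
        close(fds[0]);                        /* parent: keep the write end */
        const char msg[] = "ping";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }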

📖 References

Threads or tasks
In Linux kernel "thread" and "task" are almost synonyms.

💾 History: Till 2.6.39, kernel mode had only one thread of execution at a time, protected by the Big Kernel Lock (BKL).

⚲ API


 * linux/sched.h – the main scheduler API
 * current and get_current – return the current task
 * taskstats – per-task statistics
 * function  returns
 * linux/sched/task.h – interface between the scheduler and various task lifetime (fork/exit) functionality
 * linux/kthread.h – simple interface for creating and stopping kernel threads without mess
 * kthread_run – creates and wakes a thread
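
👁 A hedged kernel-module sketch of the kthread API named above; the thread and module names are placeholders:

    #include <linux/module.h>
    #include <linux/kthread.h>
    #include <linux/delay.h>
    #include <linux/err.h>

    static struct task_struct *worker;

    static int worker_fn(void *data)
    {
        while (!kthread_should_stop()) {     /* loop until kthread_stop() */
            pr_info("worker tick\n");
            msleep(1000);
        }
        return 0;
    }

    static int __init demo_init(void)
    {
        worker = kthread_run(worker_fn, NULL, "demo-worker");
        return PTR_ERR_OR_ZERO(worker);
    }

    static void __exit demo_exit(void)
    {
        kthread_stop(worker);                /* ask the thread to exit */
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");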

⚙️ Internals


 * ↯ hierarchy:

Scheduler
The scheduler is the part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process.

Active processes are placed in a data structure called a run queue, or runqueue. The run queue may contain priority values for each process, which will be used by the scheduler to determine which process to run next. To ensure each program has a fair share of resources, each one is run for some time period (quantum) before it is paused and placed back into the run queue. When a program is stopped to let another run, the program with the highest priority in the run queue is then allowed to execute. Processes are also removed from the run queue when they ask to sleep, are waiting on a resource to become available, or have been terminated.

Linux uses the Completely Fair Scheduler (CFS), the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system. CFS uses a well-studied, classic scheduling algorithm called "fair queuing", originally invented for packet networks. The CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red-black tree.

In contrast to the previous O(1) scheduler, the CFS scheduler implementation is not based on run queues. Instead, a red-black tree implements a "timeline" of future task execution. Additionally, the scheduler uses nanosecond-granularity accounting, the atomic units by which an individual process's share of the CPU is allocated (thus making redundant the previous notion of timeslices). This precise knowledge also means that no specific heuristics are required to determine the interactivity of a process, for example.

Like the old O(1) scheduler, CFS uses a concept called "sleeper fairness", which considers sleeping or waiting tasks equivalent to those on the runqueue. This means that interactive tasks which spend most of their time waiting for user input or other events get a comparable share of CPU time when they need it.

The data structure used for the scheduling algorithm is a red-black tree in which the nodes are scheduler-specific structures, entitled sched_entity. These are derived from the general task_struct process descriptor, with added scheduler elements. These nodes are indexed by processor execution time in nanoseconds. A maximum execution time is also calculated for each process. This time is based upon the idea that an "ideal processor" would equally share processing power amongst all processes. Thus, the maximum execution time is the time the process has been waiting to run, divided by the total number of processes; in other words, the maximum execution time is the time the process would have expected to run on an "ideal processor".

When the scheduler is invoked to run a new process, the operation of the scheduler is as follows:
 * 1) The left most node of the scheduling tree is chosen (as it will have the lowest spent execution time), and sent for execution.
 * 2) If the process simply completes execution, it is removed from the system and scheduling tree.
 * 3) If the process reaches its maximum execution time or is otherwise stopped (voluntarily or via interrupt) it is reinserted into the scheduling tree based on its new spent execution time.
 * 4) The new left-most node will then be selected from the tree, repeating the iteration.

If the process spends a lot of its time sleeping, then its spent time value is low and it automatically gets the priority boost when it finally needs it. Hence such tasks do not get less processor time than the tasks that are constantly running.

An alternative to CFS is the Brain Fuck Scheduler (BFS) created by Con Kolivas. The objective of BFS, compared to other schedulers, is to provide a scheduler with a simpler algorithm, one that does not require adjustment of heuristics or tuning parameters to tailor performance to a specific type of computation workload.

Con Kolivas also maintains another alternative to CFS, the MuQSS scheduler.

The Linux kernel contains different scheduler classes (or policies). The Completely Fair Scheduler used nowadays by default is the SCHED_NORMAL scheduler class, aka SCHED_OTHER. The kernel also contains two additional classes, SCHED_BATCH and SCHED_IDLE, and another two real-time scheduling classes named SCHED_FIFO (realtime first-in-first-out) and SCHED_RR (realtime round-robin), with a third realtime scheduling policy known as SCHED_DEADLINE that implements the earliest-deadline-first (EDF) algorithm, added later. Any realtime scheduler class takes precedence over any of the "normal" (i.e. non-realtime) classes. The scheduler class is selected and configured through the ↪ sched_setscheduler system call.
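
👁 A minimal user-space sketch of selecting a scheduling class; moving to SCHED_FIFO normally requires CAP_SYS_NICE or root:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 10 };

        /* pid 0 = the calling thread; switch it to the realtime FIFO class */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running under SCHED_FIFO, priority %d\n", sp.sched_priority);
        return 0;
    }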

Properly balancing latency, throughput, and fairness in schedulers is an open problem.

⚲ API


 * renice – alter priority of running processes
 * nice – run a program with modified scheduling priority
 * chrt – manipulate the real-time attributes of a process
 * sched_getattr ↪ – get scheduling policy and attributes
 * linux/sched.h – the main scheduler API

⚙️ Internals


 * schedule is called from many points in the kernel; __schedule is the main scheduler function.
 * kernel/sched/fair.c implements SCHED_NORMAL, SCHED_BATCH and SCHED_IDLE
 * task_struct::sched_class and other members with less unique identifiers

🛠️ Utilities


 * Understanding Scheduling Behavior with SchedViz

📖 References


 * CFS
 * Tuning the task scheduler
 * stop using CPU limits on Kubernetes
 * Completely fair scheduler LWN

📚 Further reading about the scheduler
 * Scheduler tracing
 * bcc/ebpf CPU and scheduler tools

Wait queues
A wait queue in the kernel is a data structure that allows one or more processes to wait (sleep) until something of interest happens. They are used throughout the kernel to wait for available memory, I/O completion, message arrival, and many other things. In the early days of Linux, a wait queue was a simple list of waiting processes, but various scalability problems (including the thundering herd problem) have led to the addition of a fair amount of complexity since then.

⚲ API

wait_queue_head_t consists of a doubly linked list of wait_queue_entry items and a spinlock.

Waiting for simple events:


 * Use one of two methods for initialization:
 * init_waitqueue_head – initializes in function context
 * DECLARE_WAIT_QUEUE_HEAD – actually defines in global context
 * Wait alternatives:
 * wait_event_interruptible – preferable wait
 * wait_event – uninterruptible wait. Can cause deadlock ⚠
 * wait_event_timeout, wait_event_interruptible_timeout, etc.

👁 For example usage, see the kernel's many callers of these functions.
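
👁 A hedged kernel sketch of the simple wait_event pattern (demo_wq and data_ready are illustrative names):

    #include <linux/wait.h>
    #include <linux/sched.h>

    static DECLARE_WAIT_QUEUE_HEAD(demo_wq);   /* defines the queue globally */
    static bool data_ready;

    /* consumer: sleeps interruptibly until data_ready becomes true */
    static int consumer(void)
    {
        return wait_event_interruptible(demo_wq, data_ready);
    }

    /* producer: publishes the condition, then wakes sleepers */
    static void producer(void)
    {
        data_ready = true;
        wake_up_interruptible(&demo_wq);
    }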

Explicit use of add_wait_queue instead of simple wait_event for complex cases:


 * DEFINE_WAIT – actually defines a wait_queue_entry with autoremove_wake_function
 * add_wait_queue – inserts a process in the first position of a wait queue

⚙️ Internals

📖 References
 * Handling wait queues

Synchronization
Thread synchronization is defined as a mechanism which ensures that two or more concurrent processes or threads do not simultaneously execute some particular program segment, known as a critical section; enforcing this is called mutual exclusion (mutex). When one thread starts executing the critical section (serialized segment of the program) the other thread should wait until the first thread finishes. If proper synchronization techniques are not applied, it may cause a race condition, where the values of variables may be unpredictable and vary depending on the timings of context switches of the processes or threads.

Futex
A ↪ futex (short for "fast userspace mutex") is a kernel system call that programmers can use to implement basic locking, or as a building block for higher-level locking abstractions such as semaphores and POSIX mutexes or condition variables.

A futex consists of a kernelspace wait queue that is attached to an aligned integer in userspace. Multiple processes or threads operate on the integer entirely in userspace (using atomic operations to avoid interfering with one another), and only resort to relatively expensive system calls to request operations on the wait queue (for example to wake up waiting processes, or to put the current process on the wait queue). A properly programmed futex-based lock will not use system calls except when the lock is contended; since most operations do not require arbitration between processes, this will not happen in most cases.

The basic operations of futexes are based on only two central operations, WAIT and WAKE, though the implementation has more operations for more specialized cases.


 * WAIT (addr, val) checks if the value stored at the address addr is val, and if it is puts the current thread to sleep.
 * WAKE (addr, val) wakes up val number of threads waiting on the address addr.
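
👁 A hedged user-space sketch of these two operations; glibc provides no futex() wrapper, so the raw syscall is used:

    #define _GNU_SOURCE
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>

    static long futex(uint32_t *addr, int op, uint32_t val)
    {
        return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
    }

    static uint32_t flag;   /* must be a properly aligned 32-bit integer */

    void wait_for_flag(void)
    {
        /* sleep only while the value is still 0; recheck on each wakeup */
        while (__atomic_load_n(&flag, __ATOMIC_ACQUIRE) == 0)
            futex(&flag, FUTEX_WAIT, 0);          /* WAIT(addr, val) */
    }

    void set_flag(void)
    {
        __atomic_store_n(&flag, 1, __ATOMIC_RELEASE);
        futex(&flag, FUTEX_WAKE, 1);              /* WAKE(addr, 1) */
    }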

⚲ API

⚙️ Internals:

📖 References

File locking
⚲ API:

Semaphore
💾 History: Semaphore is part of System V IPC.

⚲ API

⚙️ Internals:

Kernel space synchronization
For kernel-mode synchronization Linux provides three categories of locking primitives: sleeping locks, per-CPU local locks, and spinning locks.

Read-Copy-Update
A common mechanism to solve the readers–writers problem is the read-copy-update (RCU) algorithm. Read-copy-update implements a kind of mutual exclusion that is wait-free (non-blocking) for readers, allowing extremely low overhead. However, RCU updates can be expensive, as they must leave the old versions of the data structure in place to accommodate pre-existing readers.

💾 ''History: RCU was added to Linux in October 2002. Since then, there are thousands of uses of the RCU API within the kernel, including the networking protocol stacks and the memory-management system. The implementation of RCU in version 2.6 of the Linux kernel is among the better-known RCU implementations.''

⚲ The core API in linux/rcupdate.h is quite small:


 * rcu_read_lock – marks an RCU-protected data structure so that it won't be reclaimed for the full duration of that critical section.


 * rcu_read_unlock – is used by a reader to inform the reclaimer that the reader is exiting an RCU read-side critical section. Note that RCU read-side critical sections may be nested and/or overlapping.


 * synchronize_rcu – blocks until all pre-existing RCU read-side critical sections on all CPUs have completed. Note that synchronize_rcu will not necessarily wait for any subsequent RCU read-side critical sections to complete.

👁 For example, consider the following sequence of events:

             CPU 0                  CPU 1                  CPU 2
        ----------------       ----------------       ----------------
    1.  rcu_read_lock
    2.                         enters synchronize_rcu
    3.                                                 rcu_read_lock
    4.  rcu_read_unlock
    5.                         exits synchronize_rcu
    6.                                                 rcu_read_unlock

 * Since synchronize_rcu is the API that must figure out when readers are done, its implementation is key to RCU. For RCU to be useful in all but the most read-intensive situations, synchronize_rcu's overhead must also be quite small.


 * Alternatively, instead of blocking, synchronize_rcu may register a callback to be invoked after all ongoing RCU read-side critical sections have completed. This callback variant is called call_rcu in the Linux kernel.


 * rcu_assign_pointer – the updater uses this function to assign a new value to an RCU-protected pointer, in order to safely communicate the change in value from the updater to the reader. This function returns the new value, and also executes any memory-barrier instructions required for a given CPU architecture. Perhaps more importantly, it serves to document which pointers are protected by RCU.


 * rcu_dereference – the reader uses this function to fetch an RCU-protected pointer, which returns a value that may then be safely dereferenced. It also executes any directives required by the compiler or the CPU, for example, a volatile cast for gcc, a memory_order_consume load for C/C++11, or the memory-barrier instruction required by the old DEC Alpha CPU. The value returned by rcu_dereference is valid only within the enclosing RCU read-side critical section. As with rcu_assign_pointer, an important function of rcu_dereference is to document which pointers are protected by RCU.

The RCU infrastructure observes the time sequence of rcu_read_lock, rcu_read_unlock, synchronize_rcu, and call_rcu invocations in order to determine when (1) synchronize_rcu invocations may return to their callers and (2) call_rcu callbacks may be invoked. Efficient implementations of the RCU infrastructure make heavy use of batching in order to amortize their overhead over many uses of the corresponding APIs.
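
👁 A hedged kernel sketch tying the API together: a pointer read locklessly by readers and replaced by an updater (struct config and cur_config are placeholders, assumed initialized before the first read):

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct config { int value; };
    static struct config __rcu *cur_config;

    /* reader: runs concurrently with updates, never blocks */
    static int read_value(void)
    {
        int v;

        rcu_read_lock();
        v = rcu_dereference(cur_config)->value;
        rcu_read_unlock();
        return v;
    }

    /* updater: publish a new version, then reclaim the old one */
    static void update_value(int value)
    {
        struct config *newc = kmalloc(sizeof(*newc), GFP_KERNEL);
        struct config *oldc;

        newc->value = value;
        oldc = rcu_dereference_protected(cur_config, 1);
        rcu_assign_pointer(cur_config, newc);   /* readers now see newc */
        synchronize_rcu();                      /* wait for pre-existing readers */
        kfree(oldc);
    }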

⚙️ Internals

📖 References
 * RCU initialization

Mutexes
⚲ API
 * mutex – has an owner and usage constraints; easier to debug than a semaphore
 * rt_mutex – blocking mutual exclusion locks with priority inheritance (PI) support
 * ww_mutex – Wound/Wait mutexes: blocking mutual exclusion locks with deadlock avoidance
 * rw_semaphore – readers–writer semaphores
 * completion – use a completion to synchronize a task with an ISR, or two tasks
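
👁 A hedged kernel sketch of a completion used for such synchronization:

    #include <linux/completion.h>

    static DECLARE_COMPLETION(done);

    /* waiting task */
    static void waiter(void)
    {
        wait_for_completion(&done);   /* sleeps until complete() is called */
    }

    /* ISR or second task */
    static void finisher(void)
    {
        complete(&done);              /* wakes exactly one waiter */
    }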

💾 Historical
 * semaphore – use a mutex instead of a semaphore if possible

📖 References
 * LWN: completion events

per CPU local lock

⚲ API
 * local_lock, local_lock_irqsave, etc.

📖 References
 * Local locks in the kernel

💾 ''History: Prior to kernel version 2.6, Linux disabled interrupts to implement short critical sections. Since version 2.6, Linux is fully preemptive.''

Spinlocks
A spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (that which holds the lock) blocks, or "goes to sleep".

Spinlocks are commonly used inside kernels because they are efficient if threads are likely to be blocked for only short periods. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. 👁 For example, a wait queue head uses a spinlock to protect access to its linked list.

Enabling and disabling of kernel preemption replaced spinlocks on uniprocessor systems (CONFIG_SMP disabled). Most spinning locks become sleeping locks in PREEMPT_RT kernels.
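
👁 A hedged kernel sketch of a spinlock protecting a linked list, the pattern mentioned above; the _irqsave variant also masks local interrupts:

    #include <linux/spinlock.h>
    #include <linux/list.h>

    static LIST_HEAD(items);
    static DEFINE_SPINLOCK(items_lock);

    struct item { struct list_head node; int payload; };

    static void add_item(struct item *it)
    {
        unsigned long flags;

        spin_lock_irqsave(&items_lock, flags);   /* enter critical section */
        list_add_tail(&it->node, &items);
        spin_unlock_irqrestore(&items_lock, flags);
    }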

📖 References
 * Introduction to spinlocks
 * Queued spinlocks

Seqlocks
A seqlock (short for "sequential lock") is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines. It is a special solution to the readers–writers problem when the number of writers is small.

It is a reader–writer consistent mechanism which avoids the problem of writer starvation. A seqlock consists of storage for saving a sequence counter (seqcount_t/seqcount_spinlock_t) in addition to a lock. The lock is to support synchronization between two writers and the counter is for indicating consistency in readers. In addition to updating the shared data, the writer increments the sequence counter, both after acquiring the lock and before releasing the lock. Readers read the sequence counter before and after reading the shared data. If the sequence counter is odd on either occasion, a writer had taken the lock while the data was being read and it may have changed. If the sequence counters are different, a writer has changed the data while it was being read. In either case readers simply retry (using a loop) until they read the same even sequence counter before and after.

💾 ''History: The semantics stabilized as of version 2.5.59, and they are present in the 2.6.x stable kernel series. The seqlocks were developed by Stephen Hemminger and originally called frlocks, based on earlier work by Andrea Arcangeli. The first implementation was in the x86-64 time code where it was needed to synchronize with user space where it was not possible to use a real lock.''

⚲ API



👁 Example: jiffies_lock, defined in kernel/time/jiffies.c
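
👁 A hedged kernel sketch of the writer and the reader retry loop described above:

    #include <linux/seqlock.h>

    static DEFINE_SEQLOCK(ctr_lock);
    static u64 counter;

    /* writer: take the lock, bump the sequence around the update */
    static void writer(void)
    {
        write_seqlock(&ctr_lock);
        counter++;
        write_sequnlock(&ctr_lock);
    }

    /* reader: retry until a stable (even, unchanged) sequence is observed */
    static u64 reader(void)
    {
        unsigned int seq;
        u64 val;

        do {
            seq = read_seqbegin(&ctr_lock);
            val = counter;
        } while (read_seqretry(&ctr_lock, seq));
        return val;
    }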

📖 References
 * SeqLock

Spinning or sleeping locks

    Lock type    | on server                 | on preempt RT
    spinlock_t   | raw_spinlock_t, spinning  | rt_mutex_base, rt_spin_lock, sleeping
    rwlock_t     | spinning                  | sleeping
    local_lock   | preempt_disable           | migrate_disable, rt_spin_lock, sleeping

Time
⚲ UAPI
 * timespec – nanosecond resolution
 * timeval – microsecond resolution
 * __kernel_timespec – nanosecond resolution, used in syscalls

⚲ API
 * ktime_t – nanosecond scalar representation for kernel time values

⚙️ Internals

📖 References

⚙️ Locking internals
 * atomic operations
 * locktorture – module-based torture test facility for locking

📚 Locking references
 * on preempt RT: local_lock, spinlock_t, rwlock_t
 * raw_spinlock_t, bit spinlocks
 * on non preempt RT: spinlock_t, rwlock_t
 * Synchronization primitives
 * Tickless (Full dynticks)

Interrupts
An interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities.

There are two types of interrupts: hardware interrupts and software interrupts. Hardware interrupts are used by devices to communicate that they require attention from the operating system. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Unlike the software type, hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).

A software interrupt is caused either by an exceptional condition in the processor itself, or a special instruction in the instruction set which causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, if the processor's arithmetic logic unit is commanded to divide a number by zero, this impossible demand will cause a divide-by-zero exception, perhaps causing the computer to abandon the calculation or display an error message. Software interrupt instructions function similarly to subroutine calls and are used for a variety of purposes, such as to request services from low-level system software such as device drivers. For example, computers often use software interrupt instructions to communicate with the disk controller to request data be read or written to the disk.

Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request (IRQ) lines to the processor, but there may be hundreds of different software interrupts.

⚲ API


 * /proc/interrupts
 * irqtop – utility to display kernel interrupt information
 * irqbalance – distribute hardware interrupts across processors on a multiprocessor system
 * There are many ways to request an ISR; two of them:
 * devm_request_threaded_irq – preferable function to allocate an interrupt line for a managed device with a threaded ISR
 * request_irq, free_irq – old and common functions to add and remove a handler for an interrupt line

⚙️ Internals



📖 References
 * Interrupts

IRQ affinity
⚲ API
 * /proc/irq/default_smp_affinity
 * /proc/irq/*/smp_affinity and /proc/irq/*/smp_affinity_list

Common functions:

🛠️ Utilities
 * irqbalance – distributes hardware interrupts across CPUs

📖 References
 * IRQ affinity, LF
 * managed_irq kernel parameter, @LKML
 * irqaffinity kernel parameter, @LKML

Threaded IRQ
⚲ API

request_threaded_irq, devm_request_threaded_irq

The ISR should return IRQ_WAKE_THREAD to run the thread function.
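
👁 A hedged kernel sketch of a threaded IRQ; the IRQ number, device pointer and names are placeholders. The hard handler only acknowledges the event and defers the heavy work to the thread function:

    #include <linux/interrupt.h>
    #include <linux/device.h>

    static irqreturn_t demo_hardirq(int irq, void *dev)
    {
        /* runs in hard interrupt context: keep it short */
        return IRQ_WAKE_THREAD;          /* schedule demo_thread_fn() */
    }

    static irqreturn_t demo_thread_fn(int irq, void *dev)
    {
        /* runs in process context: may sleep, take mutexes, etc. */
        return IRQ_HANDLED;
    }

    static int demo_probe_irq(struct device *dev, int irq)
    {
        return devm_request_threaded_irq(dev, irq, demo_hardirq, demo_thread_fn,
                                         IRQF_ONESHOT, "demo", NULL);
    }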

⚙️ Internals



📖 References



Work
A work item is a workqueue wrapper: it defers a function to be executed later in process context by a kernel worker thread.

⚲ API

👁 Example usage
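
👁 A hedged kernel sketch: initialize a work item once, queue it (e.g. from an ISR), and the handler later runs in process context:

    #include <linux/workqueue.h>

    static void demo_work_fn(struct work_struct *work)
    {
        /* runs later in a kernel worker thread; may sleep */
    }

    static DECLARE_WORK(demo_work, demo_work_fn);

    static void trigger(void)
    {
        schedule_work(&demo_work);   /* enqueue on the system workqueue */
    }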

⚙️ Internals:

Workqueue
⚲ API

⚙️ Internals

📖 References

Interrupt context

 * irq_work – framework for enqueueing and running callbacks from hardirq context

softirq timer
This timer is a softirq for periodic tasks with jiffies resolution.

⚲ API
 * mod_timer – sets expiration time in jiffies.
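
👁 A hedged kernel sketch of a periodic jiffies-resolution timer: timer_setup() binds the callback, mod_timer() (re)arms it:

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list demo_timer;

    static void demo_timer_fn(struct timer_list *t)
    {
        /* periodic task: re-arm one second from now */
        mod_timer(&demo_timer, jiffies + HZ);
    }

    static void demo_start(void)
    {
        timer_setup(&demo_timer, demo_timer_fn, 0);
        mod_timer(&demo_timer, jiffies + HZ);
    }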

⚙️ Internals

👁 Examples
 * and

High-resolution timer
⚲ API
 * hrtimer.function – the callback
 * hrtimer_start – starts a timer with nanosecond resolution

👁 Example

⚙️ Internals

📚 HR timers references

📚 Timers references
 * Better CPU selection for timer expiration

Tasklet
A tasklet is a softirq for time-critical operations.

⚲ API is deprecated in favor of threaded IRQs:

⚙️ Internals: HI_SOFTIRQ, TASKLET_SOFTIRQ

Softirq
Softirq is an internal system facility and should not be used directly. Use tasklets or threaded IRQs.

⚲ API
 * cat /proc/softirqs
 * open_softirq – registers a softirq handler

⚙️ Internals


📖 References
 * Introduction to deferred interrupts (Softirq, Tasklets and Workqueues)
 * Softirq, Tasklets and Workqueues
 * Timers and time management
 * Deferred work, linux-kernel-labs
 * Chapter 7. Time, Delays, and Deferred Work

CPU specific
🖱️ GUI
 * tuna – program for tuning running processes

⚲ API
 * cat /proc/cpuinfo
 * /sys/devices/system/cpu/
 * /sys/fs/cgroup/cpu/
 * grep -i cpu /proc/self/status
 * rdmsr – tool for reading CPU machine specific registers (MSR)
 * lscpu – display information about the CPU architecture


 * arch_topology – arch specific cpu topology information
 * cpu.h – generic cpu definition
 * cpuhotplug.h – CPU hotplug states
 * cpuidle – a generic framework for CPU idle power management
 * sched/cpufreq.h – interface between cpufreq drivers and the scheduler
 * sched/cputime.h – cputime accounting APIs

⚙️ Internals



Cache


⚙️ Internals



📖 References
 * https://www.thinkwiki.org/wiki/How_to_use_cpufrequtils

This chapter is about multiprocessing and symmetric multiprocessing (SMP) aspects of the Linux kernel.

Key concepts and features of Linux SMP include:
 * Symmetry: In an SMP system, all processors are considered the same, without hardware hierarchy, in contrast to the use of coprocessors.
 * Load balancing: The Linux kernel employs load balancing mechanisms to distribute tasks evenly among available CPU cores. This prevents any one core from becoming overwhelmed while others remain underutilized.
 * Parallelism: SMP enables parallel processing, where multiple threads or processes can execute simultaneously on different CPU cores. This can significantly improve the execution speed of applications that are designed to take advantage of multiple threads.
 * Thread scheduling: The Linux kernel scheduler is responsible for determining which threads or processes run on which CPU cores and for how long. It aims to optimize performance by minimizing contention and maximizing CPU utilization.
 * Shared memory: In an SMP system, all CPU cores typically share the same physical memory space. This allows processes and threads running on different cores to communicate and share data more efficiently.
 * NUMA (non-uniform memory access): In larger SMP systems, memory access times might not be uniform due to the physical arrangement of memory banks and processors. Linux has mechanisms to handle NUMA architectures efficiently, allowing processes to be scheduled on CPUs closer to their associated memory.
 * Cache coherency: SMP systems require mechanisms to ensure that all CPU cores have consistent views of memory. Cache coherency protocols ensure that changes made to shared memory locations are correctly propagated to all cores.
 * Scalability: SMP systems can be scaled up to include more CPU cores, enhancing the overall computing power of the system. However, as the number of cores increases, challenges related to memory access, contention, and communication between cores may arise.
 * Kernel and user space: Linux applications running in user space can take advantage of SMP without needing to be aware of the underlying hardware details. The kernel handles the management of CPU cores and resource allocation.

🗝️ Key terms
 * Affinity refers to assigning a process or thread to specific CPU cores. This helps control which CPUs execute tasks, potentially improving performance by reducing data movement between cores. It can be managed using system calls or commands. Affinity can be represented as a CPU bitmask (e.g. 0x3) or a CPU affinity list (e.g. 0-1). A minimal sketch follows.
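
👁 A minimal user-space sketch pinning the calling process to CPU 0 via sched_setaffinity and reading the mask back:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);                            /* bitmask with CPU 0 only */
        if (sched_setaffinity(0, sizeof(set), &set)) {
            perror("sched_setaffinity");
            return 1;
        }
        sched_getaffinity(0, sizeof(set), &set);
        printf("running on %d allowed CPU(s)\n", CPU_COUNT(&set));
        return 0;
    }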

⚲ API
 * taskset – set or retrieve a process's CPU affinity
 * getcpu – determine CPU and NUMA node on which the calling thread is running
 * cpuset – confine processes to processor and memory node subsets
 * chcpu – configure CPUs
 * CPU_SET – macros for manipulating CPU sets
 * grep Cpus_allowed /proc/self/status
 * sched_setaffinity – set and get a thread's CPU affinity mask
 * set_cpus_allowed_ptr – common kernel function to change a task's affinity mask
 * cpuset – cpuset interface
 * cpu_rmap – CPU affinity reverse-map support
 * cpumask.h – cpumasks provide a bitmap suitable for representing the set of CPUs in a system, one bit position per CPU number
 * percpu.h – basic definitions for percpu areas
 * /sys/bus/cpu

⚙️ Internals
 * task_struct::cpus_ptr – affinity of a task
 * IPI – inter-processor interrupt
 * drivers/base/cpu.c – CPU driver model subsystem support

🛠️ Utilities
 * irqbalance – distributes hardware interrupts across CPUs
 * numactl – controls NUMA policy for processes or shared memory

📖 References

📚 Further reading
 * CPU Partitioning
 * sched_domain – the scheduler balances CPUs (scheduling groups) within a sched domain
 * CPU Isolation
 * isolcpus @LKML
 * Functionalities of the scheduler TuneD plugin

Memory barriers
Memory barriers (MB) are synchronization mechanisms used to ensure proper ordering of memory operations in an SMP environment. They play a crucial role in maintaining the consistency and correctness of data shared among different CPU cores or processors. MBs prevent unexpected and potentially harmful reordering of memory access instructions by the compiler or CPU, which can lead to data corruption and race conditions in a concurrent software system.
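
👁 A hedged kernel-style sketch of the canonical publish/consume pairing; the variable names are placeholders. smp_store_release() orders the payload write before the flag write, and smp_load_acquire() orders the flag read before the payload read:

    #include <asm/barrier.h>

    static int payload;
    static int ready;

    /* CPU 0: publish */
    static void producer(void)
    {
        payload = 42;                    /* plain store of the data */
        smp_store_release(&ready, 1);    /* release: data visible before flag */
    }

    /* CPU 1: consume */
    static int consumer(void)
    {
        if (smp_load_acquire(&ready))    /* acquire: flag read before data read */
            return payload;              /* guaranteed to observe 42 */
        return -1;
    }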

⚲ API

⚙️ Internals



📖 References

Architectures
Linux CPU architectures refer to the different types of central processing units (CPUs) that are compatible with the Linux operating system. Linux is designed to run on a wide range of CPU architectures, which allows it to be utilized on various devices, from smartphones to servers and supercomputers. Each architecture has its own unique features, advantages, and design considerations.

Architectures are classified by family (e.g. x86, ARM) or by word size (e.g. 32-bit and 64-bit).

Some functions with different implementations for different CPU architectures:

⚲ API

⚙️ Arch internals
 * x86
 * https://lwn.net/Kernel/Index/#Architectures-x86
 * ARM
 * https://lwn.net/Kernel/Index/#Architectures-ARM
 * architecture-specific initialization

📖 References

📚 Further reading about multitasking, scheduling and CPU
 * bcc/ebpf CPU and scheduler tools