The Linux Kernel/System

This article describes the infrastructure used to support and manage other kernel functionality. It is collectively referred to as the system, after system calls and sysfs.

User space communication
User space communication refers to the exchange of data and messages between user space applications and the kernel. User space applications are programs that run in the user space of the operating system, which is a protected area of memory that provides a safe and isolated environment for applications to run in.

There are several mechanisms available in Linux for user space communication with the kernel. One of the most common mechanisms is through system calls, which are functions that allow user space applications to request services from the kernel, such as opening files, creating processes, and accessing system resources.

Another mechanism for user space communication is through device files, which are special files that represent physical or virtual devices, such as storage devices, network interfaces, and various peripheral devices. User space applications can communicate with these devices by reading from and writing to their corresponding device files.
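
For illustration, a minimal user-space sketch that reads a few bytes from /dev/urandom, a classic character device (any readable device file would do):

/* Read a few bytes from a character device file. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/urandom", O_RDONLY);     /* device file, char 1:9 */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf));      /* read() is routed to the device driver */
    close(fd);
    if (n < 0) {
        perror("read");
        return 1;
    }
    for (ssize_t i = 0; i < n; i++)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}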

In summary, the Linux kernel provides several mechanisms for user space communication, including system calls, device files, procfs, sysfs, and devtmpfs. These mechanisms enable user space applications to communicate with the kernel and access system resources in a safe and controlled manner.

⚲ APIs:
 * kernel space API for user space
 * System calls
 * Device files
 * user space API for kernel space

📖 References
 * ULK3 Chapter 11. Signals

System calls
System calls are the fundamental interface between user space applications and the Linux kernel. They provide a way for programs to request services from the operating system, such as opening a file, allocating memory, or creating a new process. In the Linux kernel, system calls are implemented as functions that can be invoked by user space programs using a software interrupt mechanism.

The Linux kernel provides hundreds of system calls, each with its own unique functionality. These system calls are organized into categories such as process management, file management, network communication, and memory management. User space applications can use these system calls to interact with the kernel and access the underlying system resources.
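
For illustration, a minimal user-space sketch invoking the same kernel service twice: once through the glibc wrapper and once through the raw syscall(2) interface:

/* getpid via the library wrapper and via an explicit system call. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    pid_t a = getpid();               /* library wrapper around the system call */
    long  b = syscall(SYS_getpid);    /* explicit system call by number */

    printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)a, b);
    return 0;
}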

⚲ API


 * Table of syscalls

⚙️ Internals


 * installs
 * ↯ call hierarchy:

📖 References
 * Directory of system calls, man section 2
 * Anatomy of a system call, part 1 and part 2

💾 Historical
 * ULK3 Chapter 10. System Calls

Device files
Classic UNIX devices are character (char) devices, used as byte streams.

⚲ API

 * ls /dev
 * cat /proc/devices
 * cat /proc/misc

Examples:


 * - actually byte stream devices

📖 References
 * ULK3 Chapter 13. I/O Architecture and Device Drivers

hiddev
⚠️ Warning: potential confusion: hiddev isn't a real human interface device! It reuses the USBHID infrastructure. hiddev is used, for example, for monitor controls and uninterruptible power supplies (UPS). This module supports these devices separately, using a separate event interface on /dev/usb/hiddevX (char 180:96 to 180:111).

⚲ API

⚙️ Internals
 * CONFIG_USB_HIDDEV

📖 References

Administration
🔧 TODO

📖 References

procfs
The proc filesystem (procfs) is a special filesystem that presents information about processes and other system information in a hierarchical file-like structure, providing a more convenient and standardized method for dynamically accessing process data held in the kernel than traditional tracing methods or direct access to kernel memory. Typically, it is mapped to a mount point named /proc at boot time. The proc file system acts as an interface to internal data structures in the kernel. It can be used to obtain information about the system and to change certain kernel parameters at runtime.

/proc includes a directory for each running process (including kernel threads) named /proc/<pid>, where <pid> is the process number. Each directory contains information about one process, including: the command line that originally started the process (cmdline), the names and values of its environment variables (environ), a symlink to its current working directory (cwd), a symlink to the original executable file, if it still exists (exe), a couple of directories with symlinks to each open file descriptor and the status (position, flags, ...) of each of them (fd and fdinfo), information about mapped files and blocks like heap and stack (maps), a binary image representing the process's virtual memory (mem), a symlink to the root path as seen by the process (root), a directory containing hard links to any child process or thread (task), basic information about the process including its run state and memory usage (status), and much more.
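
For illustration, a minimal user-space sketch that dumps the status file of the calling process from procfs:

/* Print /proc/self/status: run state, memory usage, and more. */
#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);          /* Name, State, Pid, VmSize, ... */
    fclose(f);
    return 0;
}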

📖 References

sysfs
sysfs is a pseudo-file system that exports information about various kernel subsystems, hardware devices, and associated device drivers from the kernel's device model to user space through virtual files. In addition to providing information about various devices and kernel subsystems, the exported virtual files are also used to configure them. Sysfs is designed to export the information present in the device model tree, which would then no longer clutter up procfs.

Sysfs is mounted under the /sys mount point.
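
For illustration, a minimal user-space sketch that reads one sysfs attribute; the loopback interface's MTU is used only because the path exists on virtually every system:

/* Read a single sysfs attribute (one value per file, text format). */
#include <stdio.h>

int main(void)
{
    char buf[64];
    FILE *f = fopen("/sys/class/net/lo/mtu", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("lo mtu: %s", buf);
    fclose(f);
    return 0;
}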

⚲ API



📖 References

devtmpfs
devtmpfs is a hybrid kernel/userspace approach to a device filesystem: the kernel provides device nodes before udev runs for the first time.

📖 References

Containerization
Containerization is a powerful technology that has revolutionized the way software applications are developed, deployed, and run. At its core, containerization provides an isolated environment for running applications, where the application has all the necessary dependencies and can easily be moved from one environment to another without worrying about compatibility issues.

Containerization technology has its roots in the chroot command, which was introduced in the Unix operating system in 1979. chroot provided a way to change the root directory of a process, effectively creating a new isolated environment with its own file system hierarchy. However, this early implementation of containerization had limited functionality, and it was difficult to manage and control the various processes running within the container.

In the early 2000s, the Linux kernel introduced namespaces and control groups (cgroups) to provide a more robust and scalable containerization solution. Namespaces allow processes to have their own isolated view of the system, including the file system, network, and process ID space, while control groups provide fine-grained control over the resources allocated to each container, such as CPU, memory, and I/O.

Building on these kernel features, containerization platforms have emerged as popular solutions for building and deploying containerized applications at scale. Containerization has become an essential tool for modern software development, allowing developers to easily package applications and deploy them in a consistent and predictable manner across different environments.

Resources usage and limits
⚲ API
 * chroot – change root directory
 * sysinfo – return system information
 * getrusage – get resource usage
 * getrlimit, setrlimit, prlimit – get/set resource limits (see the sketch below)
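
A minimal user-space sketch of the resource-limit calls listed above, querying and then lowering the open-file-descriptor limit of the current process:

/* Query and lower RLIMIT_NOFILE for the calling process. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    rl.rlim_cur = 256;                /* lower the soft limit for this process */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}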

📖 References

Namespaces
Namespaces provide a way to isolate and virtualize different aspects of the operating system. Namespaces allow multiple instances of an application to run in isolation from each other, without interfering with the host system or other instances.
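
For illustration, a minimal sketch (requires CAP_SYS_ADMIN) that moves the calling process into a new UTS namespace and gives it a private hostname, leaving the host's hostname untouched:

/* Enter a new UTS namespace and set a private hostname. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    const char *name = "sandbox";     /* hypothetical hostname */

    if (unshare(CLONE_NEWUTS) != 0) { /* new UTS (hostname) namespace */
        perror("unshare");
        return 1;
    }
    if (sethostname(name, strlen(name)) != 0) {
        perror("sethostname");
        return 1;
    }
    gethostname(buf, sizeof(buf));
    printf("hostname inside the namespace: %s\n", buf);
    return 0;
}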

🔧 TODO

⚲ API
 * /proc/self/ns
 * namespaces definition
 * - struct net

⚙️ Internals
 * - struct of namespaces

📖 References

Control groups
Control groups (cgroups) are used to limit and control the resource usage of groups of processes. They allow administrators to set limits on CPU usage, memory usage, disk I/O, network bandwidth, and other resources, which can be useful for managing system performance and preventing resource contention.

There are two versions of cgroups. Unlike v1, cgroup v2 has only a single process hierarchy and discriminates between processes, not threads.

A key practical difference is that cgroups v2 is not backward compatible with cgroups v1, which means that migrating from v1 to v2 can be challenging and requires careful planning.
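
For illustration, a minimal sketch of driving cgroup v2 from user space: create a group, cap its memory, and move the current process into it. It assumes root privileges, cgroup2 mounted at /sys/fs/cgroup, and uses an arbitrary group name "demo":

/* Create a cgroup v2 group, limit its memory, add the current process. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");

    if (!f)
        return -1;
    int rc = (fputs(val, f) < 0) ? -1 : 0;
    fclose(f);
    return rc;
}

int main(void)
{
    char pid[32];

    if (mkdir("/sys/fs/cgroup/demo", 0755) != 0)
        perror("mkdir");                       /* may already exist */
    if (write_file("/sys/fs/cgroup/demo/memory.max", "104857600\n") != 0)
        perror("memory.max");                  /* cap the group at 100 MiB */
    snprintf(pid, sizeof(pid), "%d\n", getpid());
    if (write_file("/sys/fs/cgroup/demo/cgroup.procs", pid) != 0)
        perror("cgroup.procs");                /* move this process into the group */
    return 0;
}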

🔧 TODO

⚲ API
 * – holds set of reference-counted pointers to objects
 * – list of cgroup subsystems

⚙️ Internals
 * – list of in task_struct

📖 References
 * – slice unit configuration

📚 Further reading
 * https://github.com/containers

Driver Model
The Linux driver model (or Device Model, or just DM) is a framework that provides a consistent and standardized way for device drivers to interface with the kernel. It defines a set of rules, interfaces, and data structures that enable device drivers to communicate with the kernel and perform various operations, such as managing resources, device lifecycle, and more.

DM core structure consists of DM classes, DM buses, DM drivers and DM devices.

kobject
In the Linux kernel, a kobject is a fundamental data structure used to represent kernel objects and provide a standardized interface for interacting with them. A kobject is a generic object that can represent any type of kernel object, including devices, files, modules, and more.

The kobject data structure contains several fields that describe the object, such as its name, type, parent, and operations. Each kobject has a unique name within its parent object, and the parent-child relationships form a hierarchy of kobjects.

Kobjects are managed by the kernel's sysfs file system, which provides a virtual file system that exposes kernel objects as files and directories in the user space. Each kobject is associated with a sysfs directory, which contains files and attributes that can be read or written to interact with the kernel object.
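
For illustration, a minimal in-kernel sketch of this kobject/sysfs interaction: a kobject with the hypothetical name "demo" is created under /sys/kernel/ with a single read-only attribute:

/* Create /sys/kernel/demo/value backed by a kobject. */
#include <linux/module.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>

static struct kobject *demo_kobj;

static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr,
                          char *buf)
{
    return sprintf(buf, "42\n");               /* value exposed to user space */
}

static struct kobj_attribute value_attr = __ATTR_RO(value);

static int __init demo_init(void)
{
    int err;

    demo_kobj = kobject_create_and_add("demo", kernel_kobj);  /* /sys/kernel/demo/ */
    if (!demo_kobj)
        return -ENOMEM;
    err = sysfs_create_file(demo_kobj, &value_attr.attr);     /* .../demo/value */
    if (err)
        kobject_put(demo_kobj);
    return err;
}

static void __exit demo_exit(void)
{
    kobject_put(demo_kobj);                    /* drops the reference, removes the sysfs entry */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");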

⚲ Infrastructure API
 * 🔧 TODO

Classes
A class is a higher-level view of a device that abstracts out low-level implementation details. Drivers may see an NVMe disk or a SATA disk, but, at the class level, they are all simply storage devices. Classes allow user space to work with devices based on what they do, rather than how they are connected or how they work. In the driver model, a class is represented by struct class.

⚲ API
 * ls /sys/class/
 * class_register – registers a class (struct class)

👁 Examples:

Buses
A bus is a channel between the processor and one or more peripheral devices. A DM bus represents such a peripheral bus. For the purposes of the device model, all devices are connected via a bus, even if it is an internal, virtual, "platform" bus. Buses can plug into each other: a USB controller is usually a PCI device, for example. The device model represents the actual connections between buses and the devices they control. A bus is represented by struct bus_type. It contains the name, the default attributes, the bus's methods, PM operations, and the driver core's private data.

⚲ API
 * ls /sys/bus/
 * bus_register – registers a bus (struct bus_type)

👁 Examples:


 * Peripheral buses

Drivers
⚲ API
 * ls /sys/bus/*/drivers/
 * – simple common driver initializer
 * driver_register – registers struct device_driver, the basic device driver structure; one per driver, shared by all device instances

👁 Examples:

Platform drivers
 * platform_driver_register – registers struct platform_driver (platform wrapper of struct device_driver); see the sketch below
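
For illustration, a minimal platform driver sketch; "demo" is a hypothetical device name, and a real driver would also provide remove/PM callbacks:

/* Skeleton platform driver bound by name matching. */
#include <linux/module.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
    dev_info(&pdev->dev, "demo device bound\n");
    return 0;                                  /* resources would be requested here */
}

static struct platform_driver demo_driver = {
    .probe  = demo_probe,
    .driver = {
        .name = "demo",                        /* matched against platform device names */
    },
};

module_platform_driver(demo_driver);           /* generates init/exit calling platform_driver_register() */
MODULE_LICENSE("GPL");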

👁 Examples:

Devices
⚲ API
 * ls /sys/devices/
 * device_register – registers struct device, the basic device structure; one per device instance

👁 Examples: mousedev_create

Platform devices
 * struct platform_device – platform wrapper of struct device; contains the resources associated with the device
 * it can be created automatically (for example from the firmware description) or registered explicitly with platform_device_register
 * – releases the device and associated resources

👁 Examples:

⚲ API 🔧 TODO
 * platform_device_info platform_device_id platform_device_register_full platform_device_add
 * platform_device_add_data platform_device_register_data platform_device_add_resources
 * attribute_group dev_pm_ops

⚙️ Internals

📖 References
 * Linux Device Model, by linux-kernel-labs

Modules

 * Article about modules

⚲ API
 * lsmod
 * cat /proc/modules
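
For illustration, a minimal loadable module sketch; once built and loaded with insmod, it shows up in lsmod and /proc/modules:

/* Minimal "hello" module. */
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    pr_info("hello: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");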

⚙️ Internals

📖 References
 * LDD3: Building and Running Modules
 * http://www.xml.com/ldd/chapter/book/ch02.html
 * http://www.tldp.org/LDP/tlk/modules/modules.html
 * http://www.tldp.org/LDP/lkmpg/2.6/html/ The Linux Kernel Module Programming Guide

Peripheral buses
Peripheral buses are the communication channels used to connect various peripheral devices to a computer system. These buses are used to transfer data between the peripheral devices and the system's processor or memory. In the Linux kernel, peripheral buses are implemented as drivers that enable communication between the operating system and the hardware.

Peripheral buses in the Linux kernel include USB, PCI, SPI, I2C, and more. Each of these buses has its own unique characteristics, and the Linux kernel provides support for a wide range of peripheral devices.

The PCI (Peripheral Component Interconnect) bus is used to connect internal hardware devices in a computer system. It is commonly used to connect graphics cards, network cards, and other expansion cards. The Linux kernel provides a PCI bus driver that enables communication between the operating system and the devices connected to the bus.
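
For illustration, a minimal PCI driver skeleton sketch; the vendor and device IDs below are placeholders:

/* Skeleton PCI driver: match by ID table, enable the device in probe. */
#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id demo_ids[] = {
    { PCI_DEVICE(0x1234, 0xabcd) },            /* hypothetical vendor:device */
    { }
};
MODULE_DEVICE_TABLE(pci, demo_ids);

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    int err = pci_enable_device(pdev);

    if (err)
        return err;
    dev_info(&pdev->dev, "PCI device enabled\n");
    return 0;
}

static void demo_remove(struct pci_dev *pdev)
{
    pci_disable_device(pdev);
}

static struct pci_driver demo_pci_driver = {
    .name     = "demo-pci",
    .id_table = demo_ids,
    .probe    = demo_probe,
    .remove   = demo_remove,
};

module_pci_driver(demo_pci_driver);
MODULE_LICENSE("GPL");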

The USB (Universal Serial Bus) is one of the most commonly used peripheral buses in modern computer systems. It allows devices to be hot-swapped and supports high-speed data transfer rates.

🔧 TODO: device enumeration

⚲ API
 * Shell interface: ls /proc/bus/ /sys/bus/

See also Buses of Driver Model

See Input: keyboard, mouse etc

PCI

⚲ Shell API
 * lspci -vv
 * column -t /proc/bus/pci/devices

Main article: PCI

USB

⚲ Shell API
 * lsusb -v
 * ls /sys/bus/usb/
 * cat /proc/bus/usb/devices

⚙️ Internals

📖 References
 * LDD3:USB Drivers

Other buses

Buses for 🤖 embedded devices:
 * https://i2c.wiki.kernel.org

SPI

⚲ API

⚙️ Internals

📖 References

Hardware interfaces
Hardware interfaces are a basic part of any operating system, enabling communication between the processor and the other hardware components of a computer system: memory, peripheral devices and buses, and various controllers.

Interrupts

I/O ports and registers
I/O ports and registers are electronic components in computer systems that enable communication between CPU and other electronic controllers and devices.

⚲ API

regmap – register map access API

– generic I/O port emulation.








 * The {in,out}[bwl] macros are for emulating x86-style PCI/ISA IO space:



– definitions of routines for detecting, reserving and allocating system resources.



Functions for memory mapped registers:

...
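
For illustration, an in-kernel sketch of memory-mapped register access; the physical base address and the register offsets are placeholders:

/* Map a device's register window and access two registers. */
#include <linux/io.h>
#include <linux/types.h>
#include <linux/printk.h>
#include <linux/errno.h>

#define DEMO_REG_BASE  0xfe000000UL            /* hypothetical MMIO base */
#define DEMO_REG_SIZE  0x1000
#define DEMO_REG_CTRL  0x00
#define DEMO_REG_STAT  0x04

static int demo_setup(void)
{
    void __iomem *regs = ioremap(DEMO_REG_BASE, DEMO_REG_SIZE);
    u32 status;

    if (!regs)
        return -ENOMEM;

    writel(0x1, regs + DEMO_REG_CTRL);         /* memory-mapped register write */
    status = readl(regs + DEMO_REG_STAT);      /* memory-mapped register read  */
    pr_info("demo status: %#x\n", status);

    iounmap(regs);
    return 0;
}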

Hardware Device Drivers
Keywords: firmware, hotplug, clock, mux, pin

⚙️ Internals

📖 References
 * https://hwmon.wiki.kernel.org/
 * LDD3:The Linux Device Model
 * http://www.tldp.org/LDP/tlk/dd/drivers.html
 * http://www.xml.com/ldd/chapter/book/
 * http://examples.oreilly.com/linuxdrive2/

Kernel booting
The kernel is loaded in two stages. In the first stage the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions such as essential hardware and basic memory management (memory paging) are set up. Control is then switched one final time to the main kernel startup process by calling start_kernel(), which then performs the majority of system setup (interrupts, the rest of memory management, device and driver initialization, etc.) before spawning, separately, the idle process and scheduler, and the init process (which is executed in user space).

Kernel loading stage

The kernel as loaded is typically an image file, compressed into either zImage or bzImage formats with zlib. A routine at the head of it does a minimal amount of hardware setup, decompresses the image fully into high memory, and takes note of any RAM disk if configured. It then executes kernel startup via startup_64 (for x86_64 architecture).


 * – linker script, defines the entry point
 * – assembly part of the extractor
 * – extractor in C
 * prints:

Decompressing Linux... done. Booting the kernel.

Kernel startup stage

The startup function for the kernel (also called the swapper or process 0) establishes memory management (paging tables and memory paging), detects the type of CPU and any additional functionality such as floating point capabilities, and then switches to non-architecture-specific Linux kernel functionality via a call to start_kernel().

↯ Startup call hierarchy:
 * – linker script
 * – assembly of uncompressed startup code
 * – platform-dependent startup:
 * – main initialization code
 * 200 SLOC
 * - deferred kernel thread #1
 * This and the following functions are defined with the __init attribute
 * obviously runs the first process
 * – deferred kernel thread #2

start_kernel() executes a wide range of initialization functions. It sets up interrupt handling (IRQs), further configures memory, starts the init process (the first user-space process), and then starts the idle task. Notably, the kernel startup process also mounts the initial RAM disk (initrd) that was loaded previously as the temporary root file system during the boot phase. The initrd allows driver modules to be loaded directly from memory, without reliance upon other devices (e.g. a hard disk) and the drivers that are needed to access them (e.g. a SATA driver). This split of some drivers statically compiled into the kernel and other drivers loaded from initrd allows for a smaller kernel. The root file system is later switched: the temporary root file system is unmounted and replaced with the real one, once the latter is accessible. The memory used by the temporary root file system is then reclaimed.

⚙️ Internals

📖 References
 * Article about booting of the kernel
 * Linux (U)EFI boot process
 * Kernel booting process
 * Kernel initialization process

💾 Historical
 * http://tldp.org/HOWTO/Linux-i386-Boot-Code-HOWTO/
 * http://www.tldp.org/LDP/lki/lki-1.html
 * http://www.tldp.org/HOWTO/KernelAnalysis-HOWTO-4.html

Halting or rebooting
🔧 TODO

⚲ API
 * calls
 * or

⚙️ Internals
 * ../Softdog Driver/

Power management
Keywords: suspend, alarm, hibernation.

⚲ API
 * /sys/power/
 * /sys/kernel/debug/wakeup_sources
 * ⌨️ hands-on:
 * sudo awk '{gsub("^ ","?")} NR>1 {if ($6) {print $1}}' /sys/kernel/debug/wakeup_sources
 * suspends the system
 * Suspend and wakeup depend on
 * and with clock ids  or  will wake the system if it is suspended.
 * with flag blocks suspend
 * See also ,

⚙️ Internals

📖 References
 * https://lwn.net/Kernel/Index/#Power_management
 * cpupower
 * tlp – apply laptop power management settings
 * ACPI – Advanced Configuration and Power Interface

Runtime PM
Keywords: runtime power management, device power management, opportunistic suspend, autosuspend, autosleep.

⚲ API
 * /sys/devices/.../power/:
 * async autosuspend_delay_ms  control  runtime_active_kids  runtime_active_time  runtime_enabled  runtime_status runtime_suspended_time  runtime_usage
 * – asynchronous get
 * – preferable synchronous get
 * – just decrement usage counter
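
For illustration, a sketch of how a driver might bracket hardware access with the runtime PM calls above; demo_transfer and the commented-out I/O are placeholders, and the device is assumed to have runtime PM (and autosuspend) enabled:

/* Resume the device around an I/O operation, then allow autosuspend. */
#include <linux/device.h>
#include <linux/pm_runtime.h>

static int demo_transfer(struct device *dev)
{
    int ret = pm_runtime_get_sync(dev);        /* synchronous get: device is resumed */

    if (ret < 0) {
        pm_runtime_put_noidle(dev);            /* balance the usage counter on error */
        return ret;
    }

    /* ... access the now-powered hardware here ... */

    pm_runtime_mark_last_busy(dev);
    pm_runtime_put_autosuspend(dev);           /* drop usage count, autosuspend after the delay */
    return 0;
}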

👁 Example:

⚙️ Internals

📖 References
 * CPU idle power saving methods for real-time workloads
 * Sysfs devices PM API
 * Power Management for USB
 * Opportunistic suspend

Building and Updating

 * ../Updating/