Aros/Developer/Docs/Resources/Kernel

Purpose
kernel.resource contains the AROS microkernel. It is the lowest-level component, responsible for handling the CPU and motherboard hardware. For hosted ports kernel.resource contains a virtual machine instead.

API
kernel.resource currently offers the following functions:


 * Task scheduling
   * KrnSetScheduler
   * KrnGetScheduler
   * KrnCause
   * KrnDispatch
   * KrnSwitch
   * KrnSchedule

 * Interrupt management
   * KrnAddIRQHandler
   * KrnRemIRQHandler
   * KrnCli
   * KrnSti

 * CPU management
   * KrnIsSuper
   * KrnAddExceptionHandler
   * KrnRemExceptionHandler
   * KrnCreateContext
   * KrnDeleteContext

 * MMU management
   * KrnMapGlobal
   * KrnUnmapGlobal
   * KrnVirtualToPhysical
   * KrnAllocPages
   * KrnFreePages

 * Debugging
   * KrnBug
   * KrnPutChar
   * KrnMayGetChar
   * KrnRegisterModule
   * KrnUnregisterModule
   * KrnDecodeLocation

 * System information and control
   * KrnGetBootInfo
   * KrnGetSystemAttr
   * KrnSetSystemAttr

Most of these functions are intended for the system's own purposes, not for user software. They are called by various other AROS components and provide a hardware abstraction layer for them. All the functions are described in the autodocs. A more detailed explanation is given below, where implementation details are discussed.

Scheduler basics
Currently kernel.resource implements the original AmigaOS(tm)-compatible prioritized round-robin scheduler. The scheduler consists of three functions which run at supervisor privilege level:


 * core_Schedule - check the list of ready tasks and inform if the current task needs to be switched
 * core_Switch - switch away from the current task
 * core_Dispatch - pick up a new task for execution

Normally they are executed in the sequence given above, after processing CPU interrupts. The scheduler can also be called explicitly by means of three corresponding API entry points: KrnSchedule, KrnSwitch and KrnDispatch.

KrnSchedule is the main manual entry point for the scheduler. When an AROS task decides that it should give up CPU time to other tasks, it calls this function. The function causes core_Schedule to run at the supervisor level. If core_Schedule sees that the current task may continue to run, it returns FALSE. Otherwise it places the task into the list of ready tasks (SysBase->TaskReady) and returns TRUE. This means that core_Switch and core_Dispatch should come into play next.

KrnSwitch and KrnDispatch are middle entry points into the scheduler. KrnSwitch is called when the current task has already been placed into one of exec.library's lists and needs to be stopped instantly. This is currently done inside exec.library/Wait, which places the task into the SysBase->TaskWait list and sets the task to the waiting state. KrnDispatch is called by exec.library/RemTask in order to finish the execution of the current task when it is removed.

core_Switch and core_Dispatch actually perform the task switching. core_Switch saves the current task's state, core_Dispatch selects a new task and restores its state. These routines are CPU-dependent, so they are never used directly. Instead, CPU-specific wrappers called cpu_Switch and cpu_Dispatch have to be implemented in CPU-specific code.

cpu_Switch is simple. It saves the task's context (CPU registers) into the internal ETask structure and then jumps to core_Switch. core_Switch is just the common part, which includes saving SysBase->IDNestCnt and calling the task's tc_Switch notification vector if requested.

cpu_Dispatch is a tricky place. In addition to restoring the CPU context, it is expected to handle the CPU idle state and task exceptions. The idle state should be entered when core_Dispatch returns NULL. This means that there are no ready-to-run tasks left: either all existing tasks are waiting for some signals, or there are no tasks left at all. The only thing to do in the idle state is handling interrupts.

Here is an example of cpu_Dispatch with typical idle loop implementation (for x86-64 CPU, adapted from old code):

void cpu_Dispatch(regs_t *regs)
{
    struct Task *task;
    struct AROSCPUContext *ctx;
    IPTR sse_ctx;

    while (!(task = core_Dispatch()))
    {
        /*
         * We enter here every time there are no ready tasks.
         * All accounting (ExecBase state maintenance and IdleCount
         * increment) is already done by core_Dispatch.
         */

        /* Explicitly enable interrupts and halt the CPU */
        __asm__ __volatile__("sti; hlt; cli");

        /*
         * At this point the interrupt(s) have already been processed, but
         * the scheduler was not called, since the interrupted context was
         * a supervisor one (see the description of core_ExitInterrupt
         * below). Interrupt handlers could possibly call
         * exec.library/Cause in order to queue some software interrupts.
         * In AmigaOS(tm) (and in AROS) software interrupts have the
         * lowest priority and are processed after all hardware ones. So
         * we need to pick up queued software interrupts here. Software
         * interrupts are described in detail below.
         */
        if (SysBase->SysFlags & SFF_SoftInt)
            core_Cause(INTB_SOFTINT);

        /* Repeat over and over again until some interrupt wakes up a task */
    }

    /* We've got a new task, restore its CPU context now */
    ctx = GetIntETask(task)->iet_Context;

    /*
     * TODO: This example omits task exception handling.
     * See below for more about task exceptions.
     */

    /* Restore CPU registers */
    bcopy(ctx, regs, sizeof(regs_t));

    /* Restore FPU and SSE (the save area is aligned to 16 bytes) */
    sse_ctx = ((IPTR)ctx + sizeof(regs_t) + 15) & ~15;
    asm volatile("fxrstor (%0)"::"D"(sse_ctx));

    /*
     * Jump to the new context. core_LeaveInterrupt is an assembler
     * function which sets the stack pointer to the specified context,
     * pops all registers and returns from the interrupt.
     */
    core_LeaveInterrupt(regs);
    /* core_LeaveInterrupt does not return here */
}

Note that we use two types for the register frame: regs_t and struct AROSCPUContext. What's the difference? regs_t is what was saved on the stack upon interrupt entry. In this example regs_t contains only CPU registers; in interrupts we usually don't play with SSE or MMX, so there's no reason to deal with them there. struct AROSCPUContext, however, is more complete: it includes the SSE buffer, because different AROS tasks may use floating point or vector math.

The same applies to hosted ports. There regs_t will be the exception frame offered by the host OS, and struct AROSCPUContext is what we store in AROS.

In the future struct AROSCPUContext will be unified per CPU family. This will enable manipulating the CPU context from external software (like debuggers). Currently it is still a private definition.

The three described API entry points (KrnSchedule, KrnSwitch and KrnDispatch) are internal. The only one that is more or less safe to call is KrnSchedule, and even it is wrapped by exec.library/Reschedule into a more exec-friendly form. KrnSwitch and KrnDispatch are purely internal and it is not a good idea at all to call them from within applications. If you call one of them, the current task will simply be lost and never get control back again.

Also note that exec.library/Reschedule may not be portable across different systems of the Amiga(tm) family. It was originally present in AmigaOS(tm) as a private function, and there is no guarantee that it will stay in place in MorphOS and/or AmigaOS(tm) v4.

Long story short: these functions are internal, stay away from them. The OS has a good task scheduler of its own and you don't need to teach it what to do with your task. From an AROS-specific application you can call Reschedule(FindTask(NULL)) if you really want to give up CPU time, but nothing more.

Running a scheduler
The described scheduling sequence can be run:

 * 1) Explicitly from within a software interrupt (syscall)
 * 2) At the end of interrupt processing.

Case (1) is straightforward. Here is a typical implementation of a syscall handler:

switch (num)
{
case SC_SCHEDULE:
    /* Call the scheduler. Simply return if it doesn't want to change anything */
    if (!core_Schedule())
        break;

    /* core_Schedule returned TRUE, fall through */
case SC_SWITCH:
    /* The task is already in some list. Switch to another one no matter what */
    cpu_Switch(regs);

    /* Fall through again; SysBase->ThisTask is invalid here */
case SC_DISPATCH:
    /* Select the new task to run */
    cpu_Dispatch(regs);

    /* Done, so break here */
    break;

case SC_CAUSE:
    /*
     * A software interrupt is requested by exec via KrnCause.
     * Call the scheduler no matter what. Explained below.
     */
    if (regs->ds != KERNEL_DS)
        core_ExitInterrupt(regs);
    break;

/* Other syscalls can be handled here if needed */
}

Case (2) is a little more complex. First, interrupts can be (and usually are) nested (prioritized), and you should call the scheduler only once. Second, you need to be sure that task switching is actually permitted by exec.library.

For this purpose there's a utility function core_ExitInterrupt in kernel_intr.c. It performs almost all needed checks and runs scheduling sequence only if it is actually permitted. Additionally it processes pending exec software interrupts (if any).

This function omits only the CPU-dependent interrupt nesting check. You should use whatever means the CPU offers to verify that this interrupt will actually return to user mode. For example, the x86-64 kernel can check the DS register of the exception frame in order to figure out which mode it will return to:

if (regs->ds != KERNEL_DS)
    core_ExitInterrupt(regs);

core_LeaveInterrupt(regs);

Handling exec.library software interrupts
Software interrupt processing in exec.library is done by the INTB_SOFTINT vector in SysBase, and kernel.resource is responsible for calling it. Software interrupts may change the state of different tasks, so rescheduling is required after processing them. This is why we need to perform the complete scheduling sequence while processing SC_CAUSE.

exec software interrupts are different from CPU software interrupts (syscalls). Please keep this distinction clear and don't be confused by the similar names. To avoid confusion, from now on let's call these interrupts SoftInts. Queuing a SoftInt means just adding a node to an internal list. Then the SFF_SoftInt bit is set in SysBase->SysFlags and KrnCause is called.

Calling KrnCause can be deferred by exec.library if interrupts are disabled at the moment the SoftInt is queued. KrnCause will then be called only when interrupts are enabled again.

KrnCause in its turn invokes a software interrupt on the CPU. This interrupt should be processed in the same way as real hardware interrupts. core_ExitInterrupt needs to be called if appropriate.

The actual SoftInt processing is done inside the core_ExitInterrupt function, because exec software interrupts need to run after all real hardware interrupts have been processed (as said above, core_ExitInterrupt is called only once, upon exit from all interrupts). It looks at the SFF_SoftInt flag and calls exec's SoftInt vector if it is set. The rest (including resetting the flag) is expected to be done by exec.library.

Note that an explicit check for the SFF_SoftInt flag needs to be done inside the idle loop in cpu_Dispatch, after an interrupt arrives. This is needed because the idle loop runs in supervisor mode, so core_ExitInterrupt will not be called after interrupts are processed (according to the requirement above).

Handling exec.library task exceptions
Task exceptions in exec.library provide a way to react asynchronously to a signal. You don't need to explicitly wait for an exception: when the signal arrives, your task is interrupted and jumps to an exception handler. After the handler returns, the task resumes its normal execution, as if nothing had happened.

Again, please don't confuse task exceptions with CPU exceptions. They are totally different things: CPU exceptions in exec are called traps, while task exceptions run in user mode, with scheduling and interrupts enabled, like normal code.

Task exceptions need support on kernel.resource side in cpu_Dispatch function.

When exec throws an exception at a task, it raises the TF_EXCEPT bit in its tc_Flags and causes rescheduling. The scheduler should notice this and direct the task to a special routine. The current scheduler implementation sets the task to the READY state regardless of its remaining time quantum and does the rest of the processing in cpu_Dispatch.

cpu_Dispatch should check TF_EXCEPT bit, and if it is set, do the following:


 * 1) Save task's context somewhere.
 * 2) Adjust task's context to point to exception handler routine.
 * 3) Jump to the adjusted context.

After this, the exception handler comes into play. Generally it needs to call exec.library/Exception, which does all the processing itself. After Exception returns, the handler should pick up the original task context saved in step (1) and jump to it.

Currently only the Windows-hosted port implements task exceptions correctly. Its code is a bit complex and uses some tricks (because it is hosted), so it cannot serve as a good example.

Scheduler support functions
The scheduler in kernel.resource was designed to be future-proof and expandable. Two functions are provided for this purpose: KrnGetScheduler and KrnSetScheduler. Their purpose is to provide support for different task scheduling algorithms. These functions are safe to call from within user applications. However, remember that they affect the whole system, so a web browser (for example) is not a good place to use them. They fit much better into a system maintenance utility (like Executive for AmigaOS(tm)).

KrnSetScheduler changes the current scheduling algorithm, and KrnGetScheduler reports which algorithm is currently selected. At the moment these functions are reserved: KrnSetScheduler actually does nothing and KrnGetScheduler always returns SCHED_RR, which is the only available scheduler.

Implementing a new scheduling algorithm
In order to implement a new scheduling algorithm the following parts should be modified:
 * core_Schedule - this is 99% of the scheduler code. This function actually decides how long a task will run and when it will run again. It can use whatever it wants to make its decisions, including the following information generally available from exec.library:
   * SFF_QuantumOver - this flag is raised in SysBase->SysFlags when the task's time slice (quantum) is over. Quantum counting is done by exec.library as part of VBlank interrupt processing.
   * TF_EXCEPT - this flag is raised in the task's tc_Flags by exec when the task has a pending exception. It is general practice to set the task to the READY state in response to this flag and leave the actual exception processing to cpu_Dispatch. However, in theory it can be done in any other way.
   * The task's priority in tc_Node.ln_Pri.


 * core_Dispatch - this routine is responsible for picking a new task from the TaskReady list and assigning a time slice to it. A time slice is assigned by setting the value of SysBase->Elapsed. This value is the number of VBlank ticks determining how long the task will run if no external event disturbs it. The original round-robin scheduler puts a fixed default value there, specified in SysBase->Quantum. In fact this value is a user preference.

Overview
exec.library was not designed to run on systems other than m68k-based Amiga(tm) machines. It is very problematic to adapt it to different hardware while keeping a uniform way of handling it. In order to overcome these problems kernel.resource provides its own API for handling and controlling the machine's hardware interrupts. Note that "interrupts" here refers to real interrupts which the CPU receives from the actual hardware. Don't confuse them with the exec SoftInts described in the previous chapter; to keep the difference clear we will use the widely adopted abbreviation IRQ (Interrupt Request) from now on. Also make a clear distinction between IRQs and CPU exceptions. These are not the same thing in kernel.resource, even though IRQs are really implemented on top of CPU exceptions and are closely related: in fact IRQs are mapped to one (or more) CPU exceptions by means of an interrupt controller.

Hosted AROS ports also use IRQs to represent various external events. For example in UNIX-hosted AROS IRQs represent UNIX signals, and in Windows-hosted AROS IRQs are simulated interrupts used for communication with host I/O threads.

Using interrupts
IRQs are designated by numbers from 0 to 255. Their assignment is machine-specific: you can't tell what a particular IRQ means unless you know exactly what hardware generates it. Because of this, the API described here is generally useful only for hardware drivers.

In order to make use of an IRQ the driver needs two kernel.resource functions:


 * KrnAddIRQHandler
 * This function installs an IRQ handler. It takes an IRQ number, a pointer to the handler function, and two arbitrary values which will be passed to the handler. It returns an opaque value which is actually a pointer to an internal structure describing the handler. Don't try to poke at it! In some implementations of kernel.resource internal data may reside in protected memory, so you'll get only trash or a crash.


 * KrnRemIRQHandler
 * This function removes a previously installed handler. It takes the pointer returned earlier by KrnAddIRQHandler.

Note that every IRQ may have an arbitrary number of handlers installed. Installing a new handler doesn't remove or disable a previous one. Handlers do not have any influence on each other; they are called in turn when the IRQ arrives, and they are not prioritized.

Controlling interrupts
Under certain circumstances software needs control over interrupts. Two exec.library functions exist for this purpose: Disable and Enable. In AROS these functions have low-level siblings in kernel.resource: KrnCli and KrnSti. They do the same thing, except that the kernel functions operate directly on the CPU: they act instantly and do not modify exec.library state in any way. Consequently, they have no nesting count.

These two functions exist to provide an abstraction layer for exec.library. Do not call them from within user applications; they give no advantage over the exec.library functions, and since they do no state tracking they can easily upset the system.

Interrupt handling implementation
Interrupt handling is both CPU- and hardware-dependent and varies across different machines. The base kernel.resource code provides only one function, krnRunIRQHandlers. It takes a single argument, the IRQ number, and executes all installed handlers. The rest of the code is machine-dependent and is written separately for every supported machine.

Depending on whether a particular IRQ is used or not, you may want to enable or disable an individual IRQ, if the hardware provides such a possibility. The base code expects you to implement two functions for this:


 * ictl_enable_irq(n)
 * Enable IRQ number n


 * ictl_disable_irq(n)
 * Disable IRQ number n

In order to implement these two functions you need to declare their prototypes in your kernel_arch.h (see Porting for details). By default they are #define'd to empty code, so they are effectively omitted.

The base code does not expect any return values from these functions.

System timer handling and VBlank emulation
AROS requires at least one periodic timer interrupt to run. It is used to invoke the task scheduler at regular intervals and to measure system time. There are no actual requirements on the timer's frequency from the AROS side. However, many Amiga(tm) applications expect exec.library to serve the VBlank (display vertical blank) interrupt with a frequency of 50 or 60 Hz; they use it for delays. So it is absolutely necessary to implement the exec VBlank interrupt (unlike the others, which are specific to classic Amiga(tm) hardware). exec.library also uses VBlank internally for measuring task time slices (SysBase->Elapsed is decremented in the VBlank handler).

AROS runs on a wide variety of hardware, not all of which (actually none, as of 30.09.2010) is capable of generating a display vertical blank interrupt at the given frequency. Because of this, the exec VBlank interrupt in AROS is emulated. Since VBlank may actually come from various sources, it is expected to be driven externally by calling exec's INTB_VBLANK vector. kernel.resource may do this using the core_Cause(INTB_VBLANK) call.

There is a second AROS component which also needs a hardware timer: timer.device. Certain machines have only one hardware timer, which then needs to be shared between these two components. This means that only one of them can control the timer. Two scenarios are possible:


 * 1) The timer is controlled by kernel.resource. In this case kernel.resource sets the hardware timer to a fixed frequency and calls the exec VBlank vector on every timer interrupt. The timer frequency is specified by exec.library in SysBase->VBlankFrequency; it defaults to 50 Hz but can be overridden by the kernel command line argument "vblank=XX". The two common values expected by Amiga(tm) software are 50 Hz (PAL Amiga) and 60 Hz (NTSC Amiga), but an experimenting user may try other values here. timer.device can then hook itself into the VBlank interrupt and use it for time accounting. This scenario is usually implemented during development of a new port, when timer.device has not been ported to the hardware yet.
 * 2) The timer is controlled by timer.device. In this case it is likely reprogrammed, and kernel.resource no longer has a correct idea of its frequency. In this scenario timer.device is responsible for driving the VBlank interrupt itself. In order to tell kernel.resource that the timer has been taken over, it should use the KrnSetSystemAttr(KATTR_VBlankEnable, FALSE) call. This shuts off the kernel's built-in VBlank emulation. timer.device should also take care of setting the correct value in SysBase->ex_EClockFrequency; this will likely reflect the hardware's master clock frequency.

On some hardware (like the classic Amiga(tm) or clones like MiniMig) a real video blanking interrupt can be used. In this case kernel.resource may not implement VBlank emulation at all and just initialize the appropriate hardware instead. Of course it should still take care of calling the exec VBlank interrupt vector when the corresponding IRQ arrives. Remember that exec still runs on top of kernel.resource, and we still need to drive all its interrupt vectors!

High frequency timer
timer.device running on top of the VBlank interrupt may suffer from low precision. Sometimes interrupts are disabled, and if they are disabled long enough, timer interrupts get lost. This causes degradation and slowdown of the system time. In order to mitigate this effect, kernel.resource may provide a high frequency periodic timer. This timer is expected to run at a frequency which is an integer multiple of the VBlank frequency. This has a positive effect on system time granularity, but may have varying effects on time precision (it heavily depends on the actual system).

In order to provide a high frequency timer the kernel must provide the KATTR_TimerIRQ attribute with the correct IRQ number. kernel.resource is also expected to put the actual timer frequency into SysBase->ex_EClockFrequency. If timer.device does not work with the actual hardware, it will use this frequency as the master clock.

The base kernel.resource code contains support for the high frequency timer in the core_TimerTick function. You should call it on every timer IRQ. It takes care of counting ticks and calling the exec VBlank vector at the right period itself.

The user can then specify the wanted EClock frequency by means of the "eclock=XX" command line argument. This argument is processed inside exec.library/PrepareExecBase and fine-tuned in the kernel's init code.

Note that the high frequency timer is a purely internal thing! It is intended only to support the generic timer.device implementation. Do not rely on it in any third-party components and/or applications. There is no way to know the timer's frequency after it has been taken over and reprogrammed by timer.device, and there is also no way to know for sure whether timer.device uses this or some other timer. Nobody promises that SysBase->ex_EClockFrequency will contain the frequency of this timer, since timer.device may change this field. You may query the status of the kernel's own VBlank emulation via the KATTR_VBlankEnable attribute, but that's all. So it is better to stay away from it.

Overview
CPU support in kernel.resource can be considered incomplete and experimental.

Privilege level manipulation
Currently kernel.resource offers only one function here: KrnIsSuper. It tells whether the caller is running in supervisor mode or not. The returned value should be considered boolean and tested against zero; a nonzero value means you are in supervisor mode at the moment.

This function is widely used by exec.library for deferring task scheduler calls.

Some kernel.resource implementations may allocate internal data in protected memory. Currently this includes IRQ handler nodes, exception handler nodes and CPU context data. In order for this to work, the following internal functions need to be implemented:
 * goSuper - Set the current CPU privilege level to supervisor and return something that describes the original privilege level. The original level is stored in a variable of cpumode_t type.
 * goUser - Set the current CPU privilege level to user
 * goBack(mode) - Set the current CPU privilege level back to what was returned by goSuper

Also (although not directly related) krnAllocMem and krnFreeMem need to be redefined to allocate memory from the protected area. By default they are macros expanding to exec's AllocMem and FreeMem (see kernel_memory.h). This is also discussed in the Porting chapter.

Exceptions
exec.library's CPU exception handling is oriented solely towards the m68k CPU family. While AROS still supports exec traps, there can be a need to install a separate handler for a particular CPU exception. On the API side, CPU exception support is represented by two functions:
 * KrnAddExceptionHandler
 * KrnRemExceptionHandler

These functions work in a similar way to KrnAddIRQHandler and KrnRemIRQHandler, but they operate on raw CPU exceptions. Exception handlers get a pointer to the saved CPU context as a third argument, and are expected to return a value. If some handler returns a nonzero value, kernel.resource will not call the exec trap handler (which would halt the task and bring up a software failure requester); the task's execution will continue instead.

These functions can be very useful for implementing a debugger for AROS. However, this part of kernel.resource should be considered incomplete and under development; the specification of these functions may change at any moment. Also, the CPU context structure should be unified per CPU family and made public in order to make these functions really useful.

On the implementation side, a single krnRunExceptionHandlers function is offered by the base code. Similar to krnRunIRQHandlers, it runs the installed handlers for the specified exception, but returns a value: it is zero only if all handlers returned zero.

Exec traps are in fact emulated. The CPU-specific code is responsible for translating the native CPU exception number into an exec trap number and calling SysBase->ThisTask->tc_TrapCode. The same rule applies to hosted AROS, except that there the CPU exceptions themselves are emulated too: they are reconstructed from the information provided by the host OS. One side effect of this is that not all CPU exceptions can be caught and correctly processed. Due to the incompleteness of exception handling in kernel.resource, the exception handling code in hosted AROS ports is one of the most experimental and unfinished parts.

Context manipulation
The actual format of the CPU context varies depending on the CPU used, even within the same family. For example, 32-bit x86 CPUs may use different context formats depending on the available extended register sets (FPU, SSE, etc.). Hosted ports may also store host-specific information as part of the context (like the host OS errno value).

In order to help exec.library to deal with these varieties kernel.resource offers two API functions:
 * KrnCreateContext
 * Allocate and initialize a CPU context storage area


 * KrnDeleteContext
 * Free CPU context storage area

These two functions are used by exec.library while creating and deleting tasks. There's generally no need to call them from user applications.

If memory protection is in use, these functions will allocate memory in a protected area. Access to this area is permitted only with supervisor privileges. Consider the context storage area private, especially taking into account that the context structure is currently private too. When this issue is resolved, the documentation will be updated.

Missing functionality
To sum things up, CPU management API in kernel.resource can be considered incomplete. Here is a brief list of features that can be considered missing:
 * CPU context modification
 * Possible solution: unify CPU context format across various CPU families. Possibly introduce functions like KrnGetTaskContext and KrnSetTaskContext for accessing CPU context from within user applications (like debuggers).


 * CPU exception handling
 * Review exception handlers mechanism. Possibly handlers should be made prioritized. Possibly they should somehow know about each other.


 * CPU privilege level switching
 * exec.library provides three calls for this purpose: Supervisor, SuperState and UserState. Perhaps they should be made to work on top of the API exposed by kernel.resource. The current kernel implementation is based on the original PPC native code and may not fit other CPUs well.


 * CPU cache manipulation
 * The way of flushing the CPU cache depends on the CPU model used. The actions needed before and/or after DMA depend on both the CPU and the motherboard hardware. Perhaps kernel.resource should have some cache control API on top of which the exec.library calls would work.

This list does not represent any final decisions, and even final opinions, in any way. These topics are open to discussion.

MMU management
This part of kernel.resource is under active development. Documentation will follow.

Debugging support
This part represents various support functions for low-level system debugging. The related functions can be divided into two subgroups:

Debug I/O
KrnPutChar and KrnMayGetChar do the same things as their counterparts from exec.library: RawPutChar and RawMayGetChar. They are designed just to provide a hardware abstraction for exec.library. The third function, KrnBug, was added mainly to serve the kernel's own needs. The format string it accepts is C-compliant (not RawDoFmt-compliant). KrnBug has one speciality: it must be able to work with a NULL KernelBase, because it can be called from within kernel startup code, when no KernelBase has been set up yet.

The default implementation of debug output ends up in krnPutC, which is empty in the base code. In your kernel implementation you are expected to replace it with some low-level function which outputs the given character to some reliable place, for example a serial port. This function should be as robust as possible: it must not rely on any libraries, it must work inside interrupts, and, as noted before, it must work even without KernelBase. This is one of the few places in AROS code where it is permitted to use busy loops for waiting, because you have no other functioning mechanism. This function is called even in a system crash state, to output alert information.

Similar requirements apply to KrnMayGetChar, except that it does not have to handle a NULL KernelBase.

Hosted AROS may choose to reroute KrnBug to host's vprintf for simplicity.

Debug information registry
This is the most interesting and innovative part of AROS. This API allows applications to obtain information about program layout in memory.


 * KrnRegisterModule
 * This function has to be called when a new executable file was loaded into memory. Normally it is called by dos.library/InternalLoadSeg, so you should not bother about it. However if you are writing some compiler which stores code directly into memory, you may want to call this function for the module you've just constructed.
 * Currently only ELF files are fully supported by AROS, so only DEBUG_ELF type information is supported. However, it is expected that at some point support for the debug information found in AmigaDOS hunk files will be implemented.


 * KrnUnregisterModule
 * This function has to be called before an executable module is unloaded from memory. Again, normally this is done by dos.library/UnLoadSeg.


 * KrnDecodeLocation
 * This is the main function of interest. It allows the caller to look up an address in the list of registered modules. Currently the following information is provided:
 * Module name
 * Segment number, name, start and end addresses
 * Symbol (function) name, start and end addresses
 * This list is not final. In the future more attributes may be supported (for example, it might become possible to query the source file name and line number).
 * KrnDecodeLocation is currently used by built-in exec.library crash handler for displaying crash location data.

System information and control
This is the last group of public kernel functions, currently consisting of three entries:
 * KrnGetBootInfo
 * This function returns a pointer to a taglist supplied to the kickstart by the bootstrap. Tags for this list are defined in aros/kernel.h:
 * KRN_KernelBase
 * To be documented
 * KRN_KernelLowest
 * Starting address of the loaded kickstart image in memory
 * KRN_KernelHighest
 * End address of the loaded kickstart image in memory
 * KRN_KernelBss
 * A pointer to an array of BSS section descriptors (struct KernelBSS). The array is terminated by an entry with a NULL addr field. All recorded BSS sections need to be cleared when the kickstart performs a warm reboot; the __clear_bss utility function can be used for this.
 * KRN_GDT
 * To be documented
 * KRN_IDT
 * To be documented
 * KRN_PL4
 * To be documented
 * KRN_VBEModeInfo
 * A pointer to VESA video mode information structure (struct vbe_mode described in aros/multiboot.h) describing current video mode. Needed by VESA display driver.
 * KRN_VBEControllerInfo
 * A pointer to VESA video display controller information structure (struct vbe_controller). This tag comes in pair with KRN_VBEModeInfo.
 * KRN_MMAPAddress
 * A pointer to machine's memory map in Multiboot format (struct mb_mmap).
 * KRN_MMAPLength
 * Length of the supplied multiboot memory map.
 * KRN_CmdLine
 * A pointer to a zero-terminated string of kernel command line arguments.
 * KRN_ProtAreaStart
 * To be documented
 * KRN_ProtAreaEnd
 * To be documented
 * KRN_VBEMode
 * Specifies real VESA video mode number if the bootstrap performed mode switch. If this tag is specified, its value overrides vbe_mode field in struct vbe_controller supplied with KRN_VBEControllerInfo.
 * This tag is likely redundant and is subject to removal. If you are writing your own bootstrap with VESA support, please consider setting vbe_mode to a valid value instead.
 * KRN_VBEPaletteWidth
 * Width of VESA palette in bits (6 or 8). May be absent for direct-color video modes (24 and 32 bits).
 * KRN_MEMLower
 * To be documented
 * KRN_MEMUpper
 * To be documented
 * KRN_OpenFirmwareTree
 * A pointer to a flattened OpenFirmware device tree. Data format documentation needs to be added.
 * KRN_HostInterface
 * A pointer to the host OS interface structure for hosted AROS. This structure specifies callbacks necessary for AROS to communicate with the host OS (for example, to link to its shared libraries). The structure itself is considered private and port-specific; it is shared only between the bootstrap, kernel and hostlib.resource.
 * KRN_DebugInfo
 * A pointer to a singly linked list of kickstart debug data structures (struct debug_seginfo or dbg_seg_t). These structures describe the layout of kickstart modules in memory and are used by the KrnDecodeLocation function to provide its data. They are considered persistent and read-only, and are kept across warm reboots.
 * Kickstart debug information is optional; omitting it saves memory, but then KrnDecodeLocation will be unable to provide detailed information about crashes that happen inside the kickstart.
 * KRN_BootLoader
 * A pointer to an ASCII string describing the primary bootstrap itself. Serves for informational purposes.
 * You may read and examine the contents of this taglist; however, some of its information can be accessed through bootloader.resource in a more abstract and convenient form. It is advisable to use bootloader.resource when possible.
 * Note also that on some architectures (like classic Amiga(tm)) the kernel may be booted up directly, without the help of an external bootstrap. In this case KrnGetBootInfo may return NULL.


 * KrnGetSystemAttr
 * This function provides information about the system AROS is running on. Most of this information is internal and was discussed earlier. However, the KATTR_Architecture attribute can be handy in various places. Its value is an ASCII string of the form "system-cpu", like "pc-x86_64" or "mingw32-i386". It can be useful in system information utilities, and also in hardware drivers which rely on a particular architecture (especially hosted drivers, which heavily depend on the host they were compiled for, even though they are disk-based). For example, tap.device compiled for Linux is not binary compatible with tap.device compiled for FreeBSD (because the underlying systems are not binary compatible), and it is advised to check the system's architecture in such components in order to avoid confusing crashes.


 * KrnSetSystemAttr
 * This function provides the reverse of KrnGetSystemAttr, allowing the values of some attributes to be changed. Currently only KATTR_VBlankEnable is settable; it was discussed in detail in the System timer handling and VBlank emulation chapter.

Internal utility functions
These functions are not part of the kernel.resource API. They are internal functions provided by the base code in order to promote code reuse and aid in porting AROS to other architectures.

Memory management
kernel.resource code provides two functions for allocating and freeing memory which can be used only by kernel.resource itself. These functions are described in the replaceable kernel_memory.h header file. By default they are macros which wrap exec's AllocMem and FreeMem, but if you are implementing memory protection you may reimplement them to allocate memory from a protected area. Remember that CPU contexts created by the KrnCreateContext function will also be stored there.
 * krnAllocMem(length)
 * krnFreeMem(address)

System startup support
These functions provide useful aid during system early startup.
 * krnRomTagScanner (declared in kernel_romtags.h)
 * Scans the given address ranges for valid ROMTags and builds a list of the discovered ROMTags. The returned value needs to be placed into SysBase->ResModules. Note that this function itself relies on working AllocMem and FreeMem, which means it needs to be called after PrepareExecBase.


 * __clear_bss (declared in kernel_base.h)
 * Clears the BSS segments listed in the supplied arrays of descriptors. This should actually be the first thing you do when your kernel starts up. Remember that global variables are likely stored in a BSS section themselves, so assign their values only after you have made this call! Typically you will call this function during initial boot taglist parsing.


 * PrepareExecBase
 * In fact this is an exec.library function, but it is so closely related to kernel.resource that it fits here well. This function takes the address of the very first MemHeader and constructs SysBase inside it. However, it is your job to set up the MemHeader itself; it is entirely up to you to discover the available memory.
 * This function is the only one which is statically linked from exec.library at the moment. One day this may change, allowing complete separation of kernel.resource and exec.library, in which case this specification will be updated.
 * This function needs to be declared manually:

extern struct ExecBase *PrepareExecBase(struct MemHeader *, char *, struct HostInterface *);
 * The second and third parameters are a pointer to the kernel command line (passed via the KRN_CmdLine tag) and a pointer to the HostInterface structure. The third parameter is optional; whether it is needed is determined by architecture-specific code in exec.library. On some ports (like Windows-hosted), access to the host OS is needed by exec.library at this early stage.