Chapter 3. The system initialization

Table of Contents

3.1. An overview of the boot strap process
3.1.1. Stage 1: the UEFI
3.1.2. Stage 2: the boot loader
3.1.3. Stage 3: the mini-Debian system
3.1.4. Stage 4: the normal Debian system
3.2. Systemd
3.2.1. Systemd init
3.2.2. Systemd login
3.3. The kernel message
3.4. The system message
3.5. System management
3.6. Other system monitors
3.7. System configuration
3.7.1. The hostname
3.7.2. The filesystem
3.7.3. Network interface initialization
3.7.4. Cloud system initialization
3.7.5. Customization example to tweak sshd service
3.8. The udev system
3.9. The kernel module initialization

It is wise for you as the system administrator to know roughly how the Debian system is started and configured. Although the exact details are in the source files of the installed packages and in their documentation, digging through them is a bit overwhelming for most of us.

Here is a rough overview of the key points of the Debian system initialization. Since the Debian system is a moving target, you should refer to the latest documentation.

The computer system undergoes several phases of the boot strap process from the power-on event until it offers a fully functional operating system (OS) to the user.

For simplicity, I limit discussion to the typical PC platform with the default installation.

The typical boot strap process is like a four-stage rocket. Each stage hands control of the system over to the next one.

Of course, these stages can be configured differently. For example, if you compiled your own kernel, you may be skipping the stage with the mini-Debian system. So please do not assume this is the case for your system until you check it yourself.

The Unified Extensible Firmware Interface (UEFI) defines a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager is the 1st stage of the boot process: it checks the boot configuration and, based on its settings, executes the specified OS boot loader or operating system kernel (usually the boot loader). The boot configuration is defined by variables stored in NVRAM, including variables that indicate the file system paths to OS loaders or OS kernels.
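
These NVRAM boot variables can be inspected and edited from a running Linux system with efibootmgr(8), assuming the efibootmgr package is installed:

$ sudo efibootmgr -v

The -v option also prints the file system paths of the registered OS loaders.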

An EFI system partition (ESP) is a data storage device partition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating system boot loaders. (On a legacy PC system, the BIOS may boot the system from the boot code stored in the MBR instead.)

The boot loader is the 2nd stage of the boot process, started by the UEFI. It loads the system kernel image and the initrd image into memory and hands control over to them. The initrd image is a root filesystem image; support for it depends on the boot loader used.

The Debian system normally uses the Linux kernel as the default system kernel. The initrd image for the current 5.x Linux kernel is technically the initramfs (initial RAM filesystem) image.

There are many boot loaders and configuration options available.


[Warning] Warning

Do not play with boot loaders without having bootable rescue media (USB memory stick, CD or floppy) created from images in the grub-rescue-pc package. Such media let you boot your system even without a functioning boot loader on the hard disk.

For a UEFI system, GRUB2 first reads the ESP partition and uses the UUID specified for search.fs_uuid in "/boot/efi/EFI/debian/grub.cfg" to determine the partition holding the GRUB2 menu configuration file "/boot/grub/grub.cfg".

The key part of the GRUB2 menu configuration file looks like:

menuentry 'Debian GNU/Linux' ... {
        load_video
        insmod gzio
        insmod part_gpt
        insmod ext2
        search --no-floppy --fs-uuid --set=root fe3e1db5-6454-46d6-a14c-071208ebe4b1
        echo    'Loading Linux 5.10.0-6-amd64 ...'
        linux   /boot/vmlinuz-5.10.0-6-amd64 root=UUID=fe3e1db5-6454-46d6-a14c-071208ebe4b1 ro quiet
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initrd.img-5.10.0-6-amd64
}

For this part of "/boot/grub/grub.cfg", the menu entry means the following.

  • load_video: load the video support modules

  • insmod gzio: load the module to decompress compressed kernel and initrd images

  • insmod part_gpt: load the GPT partition table support module

  • insmod ext2: load the filesystem module for ext2/ext3/ext4

  • search --no-floppy --fs-uuid --set=root …: set GRUB's root device to the partition carrying the filesystem with the given UUID

  • linux …: load the specified kernel with its boot parameters (the root filesystem given by UUID, mounted read-only, with quiet console output)

  • initrd …: load the specified initramfs image


[Tip] Tip

You can see the kernel boot log messages by removing quiet in "/boot/grub/grub.cfg". For a persistent change, edit the GRUB_CMDLINE_LINUX_DEFAULT="quiet" line in "/etc/default/grub".
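
For example, a sketch of the persistent change; update-grub(8) regenerates "/boot/grub/grub.cfg" from "/etc/default/grub":

$ sudo editor /etc/default/grub    # remove quiet from GRUB_CMDLINE_LINUX_DEFAULT
$ sudo update-grub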

[Tip] Tip

You can customize the GRUB splash image by setting the GRUB_BACKGROUND variable in "/etc/default/grub" to point to the image file, or by placing the image file itself in "/boot/grub/".

See "info grub" and grub-install(8).

The mini-Debian system is the 3rd stage of the boot process, started by the boot loader. It runs the system kernel with its root filesystem in memory. This is an optional preparatory stage of the boot process.

[Note] Note

The term "the mini-Debian system" is coined by the author to describe this 3rd-stage boot process for this document. This system is commonly referred to as the initrd or initramfs system. A similar in-memory system is used by the Debian Installer.

The "/init" program is executed as the first program in this root filesystem on the memory. It is a program which initializes the kernel in user space and hands control over to the next stage. This mini-Debian system offers flexibility to the boot process such as adding kernel modules before the main boot process or mounting the root filesystem as an encrypted one.

  • The "/init" program is a shell script program if initramfs was created by initramfs-tools.

    • You can interrupt this part of the boot process to gain a root shell by providing "break=init" etc. as a kernel boot parameter. See the "/init" script for more break conditions. This shell environment is sophisticated enough to make a good inspection of your machine's hardware.

    • Commands available in this mini-Debian system are stripped-down ones, provided mainly by busybox(1).

  • The "/init" program is a binary systemd program if initramfs was created by dracut.

    • Commands available in this mini-Debian system form a stripped-down systemd(1) environment.
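
If the initramfs was created by initramfs-tools, you can list its contents without unpacking it using lsinitramfs(8):

$ lsinitramfs /boot/initrd.img-$(uname -r) | less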

[Caution] Caution

You need to use the "-n" option for the mount command while you are on the read-only root filesystem.

The normal Debian system is the 4th stage of the boot process, started by the mini-Debian system. The system kernel of the mini-Debian system continues to run in this environment. The root filesystem is switched from the one in memory to the one on the real hard disk.

The init program is executed as the first program with PID=1 to perform the main boot process of starting many other programs. The default file path for the init program is "/usr/sbin/init" but it can be changed by a kernel boot parameter such as "init=/path/to/init_program".

"/usr/sbin/init" is symlinked to "/lib/systemd/systemd" after Debian 8 Jessie (released in 2015).

[Tip] Tip

The actual init command on your system can be verified by the "ps --pid 1 -f" command.


[Tip] Tip

See Debian wiki: BootProcessSpeedup for the latest tips to speed up the boot process.

When the Debian system starts, "/usr/sbin/init" symlinked to "/lib/systemd/systemd" is started as the init system process (PID=1) owned by root (UID=0). See systemd(1).

The systemd init process spawns processes in parallel based on the unit configuration files (see systemd.unit(5)) which are written in declarative style instead of SysV-like procedural style.

The spawned processes are placed in individual Linux control groups named after the unit which they belong to in the private systemd hierarchy (see cgroups and Section 4.7.5, “Linux security features”).

Units for the system mode are loaded from the "System Unit Search Path" described in systemd.unit(5). The main ones are as follows in the order of priority:

  • "/etc/systemd/system/*": System units created by the administrator

  • "/run/systemd/system/*": Runtime units

  • "/lib/systemd/system/*": System units installed by the distribution package manager

The inter-dependencies between units are specified by the directives "Wants=", "Requires=", "Before=", "After=", … (see "MAPPING OF UNIT PROPERTIES TO THEIR INVERSES" in systemd.unit(5)). Resource controls can also be defined (see systemd.resource-control(5)).

The suffix of the unit configuration file encodes its type:

  • *.service describes the process controlled and supervised by systemd. See systemd.service(5).

  • *.device describes the device exposed in the sysfs(5) as udev(7) device tree. See systemd.device(5).

  • *.mount describes the file system mount point controlled and supervised by systemd. See systemd.mount(5).

  • *.automount describes the file system auto mount point controlled and supervised by systemd. See systemd.automount(5).

  • *.swap describes the swap device or file controlled and supervised by systemd. See systemd.swap(5).

  • *.path describes the path monitored by systemd for path-based activation. See systemd.path(5).

  • *.socket describes the socket controlled and supervised by systemd for socket-based activation. See systemd.socket(5).

  • *.timer describes the timer controlled and supervised by systemd for timer-based activation. See systemd.timer(5).

  • *.slice manages resources via cgroups(7). See systemd.slice(5).

  • *.scope is created programmatically using the bus interfaces of systemd to manage a set of system processes. See systemd.scope(5).

  • *.target groups other unit configuration files to create the synchronization point during start-up. See systemd.target(5).
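
As a minimal sketch of how such unit files fit together, here is a hypothetical timer/service pair for a periodic job (the names, the daily schedule and the script path are assumptions for illustration):

backup.timer for specifying when to run the job

[Unit]
Description=Daily backup timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

backup.service as the matching service file of backup.timer

[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/backup.sh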

Upon system start-up (i.e., init), the systemd process tries to start "/lib/systemd/system/default.target" (normally symlinked to "graphical.target"). First, some special target units (see systemd.special(7)) such as "local-fs.target", "swap.target" and "cryptsetup.target" are pulled in to mount the filesystems. Then, other target units are pulled in by the target unit dependencies. For details, read bootup(7).

systemd offers backward compatibility features. SysV-style boot scripts in "/etc/init.d/" (activated via the "/etc/rc[0123456S].d/[KS]name" symlinks) are still parsed, and telinit(8) is translated into systemd unit activation requests.

[Caution] Caution

Emulated runlevels 2 to 4 are all symlinked to the same "multi-user.target".

The kernel error messages displayed on the console can be configured by setting the threshold level.

# dmesg -n3

Under systemd, both kernel and system messages are logged by the journal service systemd-journald.service (a.k.a. journald), either into persistent binary data below "/var/log/journal" or into volatile binary data below "/run/log/journal/". These binary log data are accessed by the journalctl(1) command. For example, you can display the log from the latest boot as:

$ journalctl -b
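
A few more common journalctl(1) invocations (the unit name is just an example):

$ journalctl -b -p err       # messages of priority "err" and worse from this boot
$ journalctl -u ssh.service  # messages of a particular unit
$ journalctl -f              # follow new messages as they are appended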

Under systemd, the system logging utility rsyslogd(8) may be uninstalled. If it is installed, it changes its behavior to read the volatile binary log data (instead of the pre-systemd default "/dev/log") and to create traditional permanent ASCII system log data. This can be customized by "/etc/default/rsyslog" and "/etc/rsyslog.conf" for both the log file and the on-screen display. See rsyslogd(8) and rsyslog.conf(5). See also Section 9.3.2, “Log analyzer”.

Systemd offers not only the init system but also generic system management operations via the systemctl(1) command.

Table 3.6. List of typical systemctl command snippets

Operation Command snippets
List all available unit types "systemctl list-units --type=help"
List all target units in memory "systemctl list-units --type=target"
List all service units in memory "systemctl list-units --type=service"
List all device units in memory "systemctl list-units --type=device"
List all mount units in memory "systemctl list-units --type=mount"
List all socket units in memory "systemctl list-sockets"
List all timer units in memory "systemctl list-timers"
Start "$unit" "systemctl start $unit"
Stop "$unit" "systemctl stop $unit"
Reload service-specific configuration "systemctl reload $unit"
Stop and start all "$unit" "systemctl restart $unit"
Start "$unit" and stop all others "systemctl isolate $unit"
Switch to "graphical" (GUI system) "systemctl isolate graphical"
Switch to "multi-user" (CLI system) "systemctl isolate multi-user"
Switch to "rescue" (single user CLI system) "systemctl isolate rescue"
Send kill signal to "$unit" "systemctl kill $unit"
Check if "$unit" service is active "systemctl is-active $unit"
Check if "$unit" service is failed "systemctl is-failed $unit"
Check status of "$unit|$PID|$device" "systemctl status $unit|$PID|$device"
Show properties of "$unit|$job" "systemctl show $unit|$job"
Reset failed "$unit" "systemctl reset-failed $unit"
List dependency of all unit services "systemctl list-dependencies --all"
List unit files installed on the system "systemctl list-unit-files"
Enable "$unit" (add symlink) "systemctl enable $unit"
Disable "$unit" (remove symlink) "systemctl disable $unit"
Unmask "$unit" (remove symlink to "/dev/null") "systemctl unmask $unit"
Mask "$unit" (add symlink to "/dev/null") "systemctl mask $unit"
Get default-target setting "systemctl get-default"
Set default-target to "graphical" (GUI system) "systemctl set-default graphical"
Set default-target to "multi-user" (CLI system) "systemctl set-default multi-user"
Show job environment "systemctl show-environment"
Set job environment "variable" to "value" "systemctl set-environment variable=value"
Unset job environment "variable" "systemctl unset-environment variable"
Reload all unit files and daemons "systemctl daemon-reload"
Shut down the system "systemctl poweroff"
Shut down and reboot the system "systemctl reboot"
Suspend the system "systemctl suspend"
Hibernate the system "systemctl hibernate"

Here, "$unit" in the above examples may be a single unit name (suffix such as .service and .target are optional) or, in many cases, multiple unit specifications (shell-style globs "*", "?", "[]" using fnmatch(3) which will be matched against the primary names of all units currently in memory).

System state changing commands in the above examples are typically preceded by sudo to attain the required administrative privilege.

The output of "systemctl status $unit|$PID|$device" uses the color of the dot ("●") to summarize the unit state at a glance.

  • White "●" indicates an "inactive" or "deactivating" state.

  • Red "●" indicates a "failed" or "error" state.

  • Green "●" indicates an "active", "reloading" or "activating" state.

Here is a list of other monitoring command snippets under systemd. Please read the pertinent manpages including cgroups(7).
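
Some typical examples from the systemd suite are the following:

$ systemd-analyze time            # time spent in the kernel, initrd and userspace at boot
$ systemd-analyze blame           # list units ordered by their initialization time
$ systemd-analyze critical-chain  # tree of the time-critical chain of units
$ systemd-cgls                    # show the control group hierarchy
$ systemd-cgtop                   # show top control groups by their resource usage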


The mount options of normal disk and network filesystems are set in "/etc/fstab". See fstab(5) and Section 9.6.7, “Optimization of filesystem by mount options”.
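
For example, a typical "/etc/fstab" entry for the root filesystem looks like the following (the UUID here is the illustrative one from the GRUB example above):

# <file system>                            <mount point> <type> <options>          <dump> <pass>
UUID=fe3e1db5-6454-46d6-a14c-071208ebe4b1  /             ext4   errors=remount-ro  0      1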

The configuration of the encrypted filesystem is set in "/etc/crypttab". See crypttab(5).

The configuration of software RAID with mdadm(8) is set in "/etc/mdadm/mdadm.conf". See mdadm.conf(5).

[Warning] Warning

After mounting all the filesystems, temporary files in "/tmp", "/var/lock", and "/var/run" are cleaned at each boot-up.

The cloud system instance may be launched as a clone of the "Debian Official Cloud Images" or similar images. For such a system instance, personalities such as hostname, filesystem, networking, locale, SSH keys, users and groups may be configured using functionalities provided by the cloud-init and netplan.io packages with multiple data sources such as files placed in the original system image and external data provided during its launch. These packages enable declarative system configuration using YAML data.
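
For example, a minimal cloud-init user-data sketch in YAML (the hostname and the SSH key are placeholders):

#cloud-config
hostname: debian-cloud-host
ssh_authorized_keys:
  - ssh-ed25519 AAAA... admin@example.org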

See more at "Cloud Computing with Debian and its descendants", "Cloud-init documentation" and Section 5.4, “The modern network configuration for cloud”.

With the default installation, many network services (see Chapter 6, Network applications) are started as daemon processes after network.target at boot time by systemd. The "sshd" is no exception. Let's change this to on-demand starting of "sshd" as a customization example.

First, stop and mask the service unit installed by the system.

 $ sudo systemctl stop sshd.service
 $ sudo systemctl mask sshd.service

On-demand socket activation of the classic Unix services used to be provided by the inetd (or xinetd) superserver. Under systemd, the equivalent can be enabled by adding *.socket and *.service unit configuration files.

sshd.socket for specifying a socket to listen on

[Unit]
Description=SSH Socket for Per-Connection Servers

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

sshd@.service as the matching service file of sshd.socket

[Unit]
Description=SSH Per-Connection Server

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket

Then, reload the systemd configuration and enable the new socket unit.

 $ sudo systemctl daemon-reload
 $ sudo systemctl enable --now sshd.socket
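
You can then verify that systemd is listening for connections on the SSH port:

 $ systemctl status sshd.socket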

The udev system provides a mechanism for automatic hardware discovery and initialization (see udev(7)) since Linux kernel 2.6. Upon the discovery of each device by the kernel, the udev system starts a user process which uses information from the sysfs filesystem (see Section 1.2.12, “procfs and sysfs”), loads the required kernel modules supporting it using the modprobe(8) program (see Section 3.9, “The kernel module initialization”), and creates the corresponding device nodes.
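
You can query the information which the udev system has gathered for a particular device, assuming a disk exists at "/dev/sda":

$ sudo udevadm info --query=all --name=/dev/sda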

[Tip] Tip

If "/lib/modules/kernel-version/modules.dep" was not generated properly by depmod(8) for some reason, modules may not be loaded as expected by the udev system. Execute "depmod -a" to fix it.

For mounting rules in "/etc/fstab", device nodes do not need to be static. You can use UUIDs to mount devices instead of device names such as "/dev/sda". See Section 9.6.3, “Accessing partition using UUID”.

Since the udev system is somewhat of a moving target, I leave the details to other documentation and describe only the minimum information here.

[Warning] Warning

Don't try to run long-running programs such as a backup script with RUN in udev rules as mentioned in udev(7). Please create a proper systemd.service(5) file and activate it instead. See Section 10.2.3.2, “Mount event triggered backup”.

The modprobe(8) program enables us to configure the running Linux kernel from a user process by adding and removing kernel modules. The udev system (see Section 3.8, “The udev system”) automates its invocation to help the kernel module initialization.

Some non-hardware modules and special hardware driver modules need to be pre-loaded by listing them in the "/etc/modules" file (see modules(5)).

The configuration files for the modprobe(8) program are located under the "/etc/modprobe.d/" directory as explained in modprobe.d(5). (If you want to prevent some kernel modules from being auto-loaded, consider blacklisting them in a file such as "/etc/modprobe.d/blacklist.conf".)
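
For example, a hypothetical blacklist snippet (any "*.conf" file name under "/etc/modprobe.d/" works):

# /etc/modprobe.d/blacklist-custom.conf
# prevent the PC speaker driver from being auto-loaded
blacklist pcspkr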

The "/lib/modules/version/modules.dep" file generated by the depmod(8) program describes module dependencies used by the modprobe(8) program.

[Note] Note

If you experience issues with boot-time module loading or with modprobe(8), "depmod -a" may resolve these issues by reconstructing "modules.dep".

The modinfo(8) program shows information about a Linux kernel module.

The lsmod(8) program nicely formats the contents of "/proc/modules", showing which kernel modules are currently loaded.
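
For example, to check whether the loop module is currently loaded and to inspect it (the module name is just an illustration):

$ lsmod | grep loop
$ modinfo loop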

[Tip] Tip

You can identify exact hardware on your system. See Section 9.5.3, “Hardware identification”.

You may configure hardware at boot time to activate expected hardware features. See Section 9.5.4, “Hardware configuration”.

You can probably add support for your special device by recompiling the kernel. See Section 9.10, “The kernel”.