
Linux Boot

     In this article, I am going to explain the Linux booting process in detail, and I can assure you that you will have a clear idea of it after reading this post. From power-up or reset to the login prompt, the Linux booting process can be divided into five main areas: the BIOS, the Stage 1 boot loader, the Stage 2 boot loader, the kernel, and init.
Let us start the Linux booting process with the BIOS.

Step 1. BIOS (Basic Input Output System)

When the machine is powered on or reset, the BIOS runs the POST (Power-On Self-Test), which detects, checks, and initializes the basic hardware. After a successful POST, the BIOS locates the boot device, loads its first sector (the MBR) into memory, and hands control over to it.

Step 2. Stage 1 boot loader (MBR)

The Master Boot Record is where the boot loader begins its work. The MBR is a 512-byte sector located in the very first sector of the hard disk (on a mechanical spinning disk, sector 1 of cylinder 0, head 0). It contains both program code used in the booting process and the partition table. Its layout is as follows:

  1. The first 446 bytes hold the primary boot loader, which contains both executable code and error message text.
  2. The next 64 bytes contain the partition table, with one 16-byte record for each of four primary partitions (4 x 16 bytes = 64 bytes).
  3. The last 2 bytes hold the magic number (0xAA55), which is used to validate the MBR.
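The 446/64/2 layout above can be sketched in code. The following is a minimal, illustrative Python parser for a raw 512-byte MBR image (reading the real sector from /dev/sda with dd requires root); the function name and returned fields are my own, not any standard API:

```python
import struct

MBR_SIZE = 512
TABLE_OFFSET = 446   # partition table starts right after the boot code
MAGIC_OFFSET = 510   # last two bytes of the sector

def parse_mbr(mbr: bytes):
    """Split a 512-byte MBR into boot code, partition table, and magic number."""
    assert len(mbr) == MBR_SIZE
    boot_code = mbr[:TABLE_OFFSET]                 # primary boot loader code
    magic = struct.unpack_from("<H", mbr, MAGIC_OFFSET)[0]
    if magic != 0xAA55:                            # stored on disk as 0x55 0xAA
        raise ValueError("invalid MBR: bad magic number")
    partitions = []
    for i in range(4):                             # 4 entries x 16 bytes = 64 bytes
        entry = mbr[TABLE_OFFSET + i * 16 : TABLE_OFFSET + (i + 1) * 16]
        boot_flag, ptype = entry[0], entry[4]      # 0x80 marks the bootable entry
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        partitions.append({
            "bootable": boot_flag == 0x80,
            "type": ptype,
            "lba_start": lba_start,
            "sectors": num_sectors,
        })
    return boot_code, partitions, magic
```

On a live system you could feed this the output of `dd if=/dev/sda bs=512 count=1` (run as root), but any 512-byte image with the right magic number works.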
When booting from a hard disk, the BIOS starts by loading and executing this boot loader code. However, a complete boot loader is larger than the space available in the MBR, so booting has to be done in stages, and how those stages are arranged differs from one boot loader to another. It's time to move on to the Stage 2 boot loader.

Step 3. Stage 2 boot loader.

This stage is often called the kernel loader, because its main task is to load the Linux kernel. The two common boot loaders are:
  1. LILO (Linux Loader)
  2. GRUB (Grand Unified Boot Loader)
We normally use GRUB, as LILO has some disadvantages. One great thing about GRUB is that it understands the Linux file system: it can load a kernel directly from an ext2 or ext3 file system.
As mentioned, the boot loader code executes in stages because of the MBR size limit. These stages differ between GRUB versions; in GRUB legacy they are:
  1. Stage 1 and Stage 2 : the two essential stages
  2. Stage 1.5 : an optional stage
Stage 1 is the essential image used to begin booting the machine. It is usually embedded in the MBR or in the boot sector of a partition, and because of the MBR size limit its maximum size is 512 bytes. Stage 1 does not understand file systems; its only job is to load Stage 1.5 or Stage 2 from a fixed location on the local disk.
Stage 2 is the core image of GRUB, and you can usually find it in a file system (though this is not strictly necessary). Stage 1.5 acts as a bridge between Stage 1 and Stage 2: it does understand the Linux file system, so it can locate and load Stage 2 by file name. Stage 1.5 is installed into the area right after the MBR.
Since the boot loader proper runs from Stage 2, you should know some basic details about the common Linux boot loaders, LILO and GRUB. Nowadays almost all Linux distributions use GRUB; the latest version is GRUB v2.
Both LILO and GRUB can be configured as a primary boot loader (in the MBR) or as a secondary boot loader (in the boot sector of a bootable partition). Both can directly load supported operating systems such as Linux, FreeBSD, and NetBSD, and can also chain-load unsupported operating systems such as Microsoft Windows. That's a great thing!
The configuration file for LILO is /etc/lilo.conf; typical directives include boot=, default=, image=, label=, and root=.
The commonly used boot loader is GRUB, and its configuration also lives under /etc. GRUB has two versions, GRUB v1 and the latest GRUB v2, and there are a lot of differences between the two.

GRUB v1 – /etc/grub.conf

This file is actually a symbolic link to /boot/grub/grub.conf. All kernel details are included in this configuration file. Sample entries are pasted below:
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/sda3
# initrd /initrd-[generic-]version.img
title CloudLinux Server (2.6.32-673.26.1.lve1.4.27.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-673.26.1.lve1.4.27.el6.x86_64 ro root=UUID=15f2bf27-2e16-4b6f-bc86-fa74314aa8d5 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet nohz=off
initrd /initramfs-2.6.32-673.26.1.lve1.4.27.el6.x86_64.img
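The long kernel line above is just the kernel image path followed by a list of boot parameters that GRUB hands to the kernel. As a rough illustration (the helper name is made up for this example), here is one way to split such a line into the image path and a parameter dictionary:

```python
def parse_kernel_params(kernel_line: str):
    """Split a grub.conf 'kernel' line into the image path and its parameters.

    Flag-style parameters (e.g. 'ro', 'quiet') map to True;
    'key=value' parameters keep their value.
    """
    tokens = kernel_line.split()
    assert tokens[0] == "kernel"
    image, params = tokens[1], {}
    for tok in tokens[2:]:
        key, sep, value = tok.partition("=")
        params[key] = value if sep else True
    return image, params
```

The same token format is used at runtime in /proc/cmdline, so you can also feed that file's contents (minus the leading "kernel" word) through similar logic.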

GRUB v2 – /etc/grub2.cfg

This is actually a symbolic link to /boot/grub2/grub.cfg. The main difference from GRUB v1 is that you should not edit this configuration file directly to change the kernel or other settings, because it is generated automatically. Instead, you use the grub2-* commands (for example, grub2-set-default to pick the default kernel, and grub2-mkconfig -o /boot/grub2/grub.cfg to regenerate the file).
A sample configuration is pasted below:
terminal_output console
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (3.10.0-123.4.2.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-123.el7.x86_64-advanced-fe0109f2-6f34-48ae-b51e-1f5fa78305b5' {
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
Each line starting with “menuentry” defines a bootable kernel entry.
Don’t forget, we are still at Stage 2 of the Linux booting process. In Stage 2, GRUB is loaded from a known location in the boot file system (/boot/grub).
This stage loads the required drivers and kernel modules before reading the GRUB configuration file and displaying the boot menu. The familiar GRUB menu is now shown on the monitor, and you can select a kernel from there.
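As a small illustration of what GRUB reads at this point, the quoted titles shown in the boot menu can be pulled out of a grub.cfg with a few lines of Python (the helper below is only a sketch; on a real system you could simply run `grep "^menuentry" /boot/grub2/grub.cfg`):

```python
import re

def list_menu_entries(grub_cfg_text: str):
    """Collect the quoted titles of all menuentry lines in a grub.cfg."""
    return re.findall(r"^menuentry\s+'([^']+)'", grub_cfg_text, flags=re.MULTILINE)
```

Each title returned corresponds to one selectable line in the GRUB boot menu.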

Step 4.  Kernel stage

Here the kernel stage begins. The kernel is stored in a compressed format. We can select a kernel from the GRUB menu; if none is selected, GRUB automatically loads the default one from its configuration, and we can change which kernel is the default by adjusting that configuration.
The kernel you selected is now loaded into memory, along with an image file containing a basic root file system and the required kernel modules. This image file lives under /boot and is known as the initramfs.
Initramfs, short for “initial RAM file system”, is the successor of initrd (“initial ramdisk”). GRUB starts the kernel and tells it the memory address of this image file, and the kernel then mounts the image as a temporary, memory-based root file system.
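Because both the kernel image and the initramfs are stored compressed, the compression format can be recognized from the first few bytes of the file. The magic numbers below are the standard ones for each format; the function itself is just a hypothetical sketch you could run against the header of a /boot/initramfs-*.img file:

```python
# Well-known magic numbers for compression formats used for kernel/initramfs images.
MAGIC_NUMBERS = [
    (b"\x1f\x8b", "gzip"),            # the most common choice
    (b"\xfd7zXZ\x00", "xz"),
    (b"\x28\xb5\x2f\xfd", "zstd"),
    (b"BZh", "bzip2"),
]

def detect_compression(image_header: bytes) -> str:
    """Classify an image by its leading magic bytes."""
    for magic, name in MAGIC_NUMBERS:
        if image_header.startswith(magic):
            return name
    return "unknown (possibly an uncompressed cpio archive)"
```

Under the compression, an initramfs is a cpio archive, which is why tools can unpack it after decompressing.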
The kernel then starts detecting the system's hardware, and the real root file system on disk takes over from the temporary one in memory. The boot process then starts init (or systemd) and the software daemons according to the system administrator's settings; this happens in the next stage.

Step 5. INIT

Once the kernel is loaded in step 4, it finds init at /sbin/init and executes it. (In RHEL/CentOS 7, /sbin/init is a link to ../lib/systemd/systemd.) When init starts, it becomes the first process, and the parent of all other processes, on your Linux machine or server.
On SysV-style systems, the first thing init does is read its initialization file, /etc/inittab. This instructs init to run an initial configuration script for the environment, which sets the path, starts swapping, checks the file systems, and so on. From /etc/inittab the system also finds the selected run level and starts services by looking in the appropriate rc directory for that run level.
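The run level comes from the id:N:initdefault: line in /etc/inittab, whose fields are colon-separated (id:runlevels:action:process). As a small sketch (the helper name is my own), this is how that line can be parsed:

```python
def default_runlevel(inittab_text: str):
    """Return the default runlevel from the id:N:initdefault: line, or None."""
    for line in inittab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        fields = line.split(":")               # id : runlevels : action : process
        if len(fields) >= 3 and fields[2] == "initdefault":
            return int(fields[1])
    return None
```

For example, `id:3:initdefault:` selects run level 3 (multi-user, no graphical login). On systemd systems this file is ignored and the equivalent setting is the default target.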

