
Re: Applying the kernel-patch-2.2.10-raid 2.2.10-3 to 2.2.15



On 2000-05-16 at 19:38 +0800, Sanjeev "Ghane" Gupta wrote:

> From: Mike Bilow <mikebw@colossus.bilow.com>

> When do I use the patches?  At install time, I use the boot-floppies from
> the archive, which presumably don't have the patch applied.  Or do I boot
> with my custom-image?

You need to completely replace the kernel image with a patched kernel
image.  This means, in effect, that you need to have a Debian system,
install the kernel-source-2.2.x and kernel-package packages, apply Ingo's
patch to the source tree, and then use "make-kpkg" per the instructions in
the kernel-package package.  Your patched kernel image file must also be
copied manually and renamed to replace the "LINUX" kernel image on the
rescue floppy, so that you are booting on your custom kernel.  You will
also have to install the custom .deb file that "make-kpkg" produces
manually (using dpkg), since otherwise the installation procedure may not
be aware that you switched the boot kernel behind its back (unless you
completely rebuild custom boot floppies).
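
As a rough sketch of those steps (the package names, patch filename, and
revision string below are illustrative; substitute your own versions):

```shell
# Install the kernel source and Debian's kernel packaging tool
apt-get install kernel-source-2.2.14 kernel-package

# Unpack the source tree (the archive name may differ on your system)
cd /usr/src
tar xzf kernel-source-2.2.14.tar.gz
cd kernel-source-2.2.14

# Apply Ingo's RAID patch (this filename is illustrative)
patch -p1 < ../raid-patch-for-2.2.14

# Configure, making sure RAID and autodetection are built in, not modules
make menuconfig

# Build a .deb containing the patched kernel image
make-kpkg --revision=custom.1 kernel_image

# Install the resulting package by hand
dpkg -i ../kernel-image-2.2.14_custom.1_i386.deb
```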

You are welcome to the kernel ("kernel-image-2.2.14_guardian.1_i386.deb")
I built for our machine (using "make-kpkg --revision=guardian.1") but, of
course, this may not be useful on your particular hardware.  You can run
"dpkg --extract" on the .deb file and copy the kernel (vmlinuz-2.2.14)
over the rescue floppy (as "LINUX").
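
For example (the scratch directory and mount point are arbitrary; adjust
the floppy device to suit your machine):

```shell
# Unpack the kernel image package into a scratch directory
dpkg --extract kernel-image-2.2.14_guardian.1_i386.deb /tmp/kimg

# Mount the rescue floppy and replace its kernel image
mount /dev/fd0 /mnt
cp /tmp/kimg/boot/vmlinuz-2.2.14 /mnt/linux    # the "LINUX" image
umount /mnt
```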

> > Fifth, you probably want to apply the patch I recommend in critical bug
> > 61227, since kernel panic and filesystem corruption can result otherwise.
> 
> OK, but I won't be swapping to the RAID.

This is something of a debate among RAID users.  I think swapping to RAID
is actually a good idea, since otherwise Linux will certainly crash if the
swap volume fails.  If you want your RAID system to survive the failure of
any arbitrary disk, then you effectively must swap to RAID.

Obviously, the whole installation procedure is also easier if you have a
substantial non-RAID partition running.  The complexity I have been
discussing only really arises when the entire system is RAID, including
the boot volume, but your system is not robust against disk failure unless
everything important -- including boot and swap -- is done from RAID.

> > Can this be done?  Definitely -- we did it.  However, this is not a
> > project for anyone unless they are up for a challenge.
> 
> Challenges are fine, but only if success stories exist.

Yes, it does work.

> As I understand it, I boot normally from the netinstcd, switch to VT2,
> partition, mkraid, mount, and proceed with install.  I make some changes to
> lilo.conf, making device=/dev/md0
> 
> Now what?  When I reboot, will the kernel (2.2.14) boot from md0 ?

Well, I've certainly never tried this with netinstcd!  I elected to do a
standard install using the boot floppies, except that the kernel image on
the rescue floppy was replaced manually.  You will need to set the
partition type codes to 0xFD, which is the signal to the kernel that the
RAID should be autodetected at boot time.  When the "persistent-superblock"
option is specified, and it always should be, the mkraid utility writes a
"RAID superblock" (not to be confused with a filesystem superblock) onto
the end of each partition.  This superblock contains enough information
for the kernel, once triggered to inspect the RAID superblocks by the
0xFD partition type code, to reassemble the RAID sets.  You
will need to have a "raidtab" file somewhere when you run mkraid, and this
file should be installed as /etc/raidtab so it is available in case
something goes wrong later.  Here is an example:

# cat /etc/raidtab
raiddev /dev/md0
        raid-level 1
        nr-raid-disks 2
        nr-spare-disks 0
        chunk-size 4
        persistent-superblock 1
        device /dev/hda1
        raid-disk 0
        device /dev/hdc1
        raid-disk 1
raiddev /dev/md1 
        raid-level 1
        nr-raid-disks 2 
        nr-spare-disks 0
        chunk-size 4
        persistent-superblock 1
        device /dev/hda2
        raid-disk 0
        device /dev/hdc2
        raid-disk 1
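
To put the pieces together (the device names match the raidtab above and
will differ on your hardware), the sequence at the shell is roughly:

```shell
# Mark each member partition as type 0xFD ("Linux raid autodetect");
# in fdisk, use the "t" command and enter "fd" as the type
fdisk /dev/hda
fdisk /dev/hdc

# Build the RAID sets described in /etc/raidtab
mkraid /dev/md0
mkraid /dev/md1

# Watch the initial mirror resync
cat /proc/mdstat
```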

Lilo needs a fairly unusual syntax in /etc/lilo.conf, but it understands
enough about RAID to make all of the RAID volumes independently bootable
in case the one normally used for booting fails.  Here is an example from
our working system:

# egrep -v '^#|^$' /etc/lilo.conf
boot=/dev/md0
root=/dev/md0
install=/boot/boot.b
map=/boot/map
delay=20
vga=normal
append="panic=120"
lba32
default=Linux
image=/boot/vmlinuz
      label=Linux
      read-only 
image=/boot/vmlinuz-backup
      label=LinuxBackup
      read-only

Note that /boot on this system is just an ordinary subdirectory of / on
/dev/md0, and there are no tricks using initrd/linuxrc or anything of that
kind.  Since /dev/md0 is actually a 30 GB RAID-1 set (30 GB + 30 GB), we
need "lba32" to boot because the boot partition is larger than 8.4 GB.  
The 'append="panic=120"' is simply our standard practice for an unattended
server machine, and has nothing to do with RAID.
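
Remember that editing /etc/lilo.conf changes nothing by itself; you must
rerun the map installer afterward:

```shell
# Rewrite the boot map; -v shows which devices Lilo writes to
lilo -v
```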

On the system from which I am supplying configuration examples, /dev/md0
is mounted as filesystem root (/) and /dev/md1 is swap space.
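
Setting that up is the same as swap on a plain partition, just pointed at
the md device (the fstab lines below match this layout; yours may differ):

```shell
# Initialize and enable swap on the second RAID set
mkswap /dev/md1
swapon /dev/md1

# Matching /etc/fstab entries for this layout:
#   /dev/md0   /      ext2   defaults   0 1
#   /dev/md1   none   swap   sw         0 0
```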

Assuming you get this right (and your BIOS ROM likes you), then your
patched kernel should indeed boot from /boot on /dev/md0.

-- Mike
