Version 0.88 (Last Update 2004-02-28) Author: Lucas Albers
Contact information: admin At cs DOT montana dot edu
Thanks to: Alvin Olga, Era Eriksson, Yazz D. Atlas and http://alioth.debian.org
These are quick instructions for how to convert your Debian system to use software RAID, using mdadm.
You do not need a rescue disk or to boot off anything except the hard drive.
You can do this operation completely remotely.
You will not lose any data in this conversion.
I'm not going to spend any time formatting this document; I'm an engineer, not an editor. Contact me if you find errors or make improvements.
Mdadm is easier than raidtools or raidtools2.
Directions are for lilo, because I could not get grub to work.
I’m not going to tell you how to compile your kernel, or edit your files. You can figure that out.
My system is installed on /dev/hda1 and I will be putting in a new disk, then I will generate a new raid volume.
By convention, /dev/hda1 is my root partition; /dev/hdb1 will be the new raid volume. See Disk Performance
Debian is installed on a single partition hda1.
We use sfdisk and cfdisk as our partition editors.
ALWAYS reboot after any partition edit when you use fdisk, cfdisk, or sfdisk.
These directions are equally applicable with a system that has multiple partitions. You just need to create multiple raid volumes. (Explained in Notes.)
Output and commands are bold and indented.
Backup your data before attempting this.
Backup your data before attempting this.
Create a floppy boot disk.
Install mdadm: apt-get install mdadm;
Install kernel with raid support in kernel, not a module.
Or follow directions below for configuring kernel with raid support as initrd modules.
Need raid1 as minimum, but I usually also include raid5 support.
Whenever you change your partitions, you need to reboot.
I assume you will mess up a step so wherever possible, we include verification.
Go read how to do this, and then do it.
<Insert Kernel directions here>
Look at the readme's in kernel-package (/usr/share/doc/kernel-package/).
Boot kernel with raid support either compiled or in modules.
Verify you have raid support:
cat /proc/mdstat
If raid is compiled into the kernel, this lists the raid personalities the kernel can see, e.g.:
Personalities : [raid1] [raid5]
Now that you know your kernel supports raid we can continue.
This will not work if your kernel supports raid only via modules.
I could not get raid to work with modules loaded from an initrd (though see the initrd section below, contributed by Yazz D. Atlas).
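The check above can be scripted. A minimal sketch: the file path is a parameter so the check can be pointed at a sample file, and it defaults to the real /proc/mdstat.

```shell
# Check for raid1 support by parsing the Personalities line of an
# mdstat-format file (defaults to the real /proc/mdstat).
has_raid() {
    mdstat="${1:-/proc/mdstat}"
    grep -q '\[raid1\]' "$mdstat" 2>/dev/null
}
```

Usage: `has_raid && echo "raid1 support present"`.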
Create Partitions (pick 1a or 1b)
1a.) Create partitions on new disk.
Warning: ALWAYS give cfdisk the device name when editing (e.g. cfdisk /dev/hdb).
By default cfdisk selects the first disk in the system; I once wiped the wrong disk's partitions that way.
1b) Copy existing partitions to the new disk with sfdisk:
sfdisk -d /dev/hda | sfdisk /dev/hdb
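The type change of step 1.5 can be folded into the copy, assuming the old sfdisk dump format where Linux partitions appear as Id=83. A sketch; the destructive line is commented out so only the sed step runs here:

```shell
# Copy hda's partition table to hdb, switching Linux partitions
# (Id=83) to Linux raid autodetect (Id=fd) on the way. The real
# command is commented out because it rewrites hdb's partition table:
#   sfdisk -d /dev/hda | sed 's/Id=83/Id=fd/' | sfdisk /dev/hdb
# The sed step alone, shown on a sample dump line:
echo '/dev/hda1 : start= 63, size= 1000, Id=83, bootable' | sed 's/Id=83/Id=fd/'
```

Inspect the dump by eye before piping it into sfdisk.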
1.5) Set the correct partition type on the new partitions.
The partition type must be set to software raid (fd, Linux raid autodetect) for the kernel to detect it on boot.
In cfdisk, select "Type", hit enter, then type FD for each software raid partition; cfdisk will then list "software raid" as the partition type.
Then select "Write", then select "Quit".
2.) Then REBOOT.
Create Raid Volume
3.) Create raid volume that has two members and one of the members does not exist yet.
md0 is the raid partition we are creating, /dev/hdb1 is the initial partition. We will be adding /dev/hda1 back into the /dev/md0
raid set after we boot into /dev/md0.
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/hdb1
If this gives errors then you need to zero the super block, see useful mdadm commands.
4.) Format raid volume
You can use reiserfs or ext3 for this, both work, I use reiserfs for larger volumes.
Go with what you trust.
mkfs.ext3 /dev/md0 or mkfs.reiserfs /dev/md0
5.) Create mount point.
6.) Mount raid volume.
mount /dev/md0 /mnt/md0
7.) Copy data to new mounted raid volume.
cp -ax / /mnt/md0
See Copying Data for more information
You don't need the -u switch; it just tells cp not to copy files again if they already exist at the destination and are up to date.
If you run the command a second time, it will run faster with the -u switch.
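What cp -ax does, demonstrated on throwaway directories instead of / and /mnt/md0: -a preserves permissions and ownership, -x stays on one filesystem so other mounts are not dragged along.

```shell
# Build a tiny fake root and copy it the way step 7 copies /.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "hello" > "$src/etc/motd"
chmod 600 "$src/etc/motd"
cp -ax "$src/." "$dst/"    # copy the contents of src into dst
ls -l "$dst/etc/motd"      # permissions survive the copy
```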
8.) Edit /etc/fstab to have it automatically mount your new raid partition on boot.
Mounting it at boot also verifies that the partition signatures and the volume itself are correct.
Sample Line in /etc/fstab:
/dev/md0 /mnt/md0 ext3 defaults 0 0
9.) NOW REBOOT, and see if the raid partition comes up. Success?
10.) Now we configure lilo to boot to the new partition.
Later we will configure lilo to write the bootsector to the raid boot volume, so we can still boot even if either disk fails.
Edit lilo.conf on /dev/hda1.
We add a new entry in /etc/lilo.conf to boot /dev/md0, the raid volume: keep boot= pointing at the same boot drive as before, and point the new entry's root= at /dev/md0, our new root partition.
This makes an entry specific to the raid volume, so you can still boot to /dev/hda if /dev/md0 does not work.
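The actual lilo.conf lines did not survive in this copy; a minimal sketch of what the two entries could look like (the image path and label names are illustrative assumptions):

```
boot=/dev/hda            # the same boot drive as before
image=/vmlinuz           # assumed kernel image path
        label=Linux
        root=/dev/hda1   # the old root, kept as a fallback
        read-only
image=/vmlinuz
        label=RAID
        root=/dev/md0    # our new root partition
        read-only
```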
11.) Then test our lilo entry.
With a RAID installation, always run:
lilo -t
first, just to let LILO tell you what it is about to do. Add the -v flag for more verbose output.
12.) Write our lilo entry.
Configure a one-time lilo boot so a failed boot falls back automatically.
The -R switch tells lilo to use the specified label only for the next boot; after that reboot it reverts to your old default.
RAID is the label we defined, so this tells lilo to boot the RAID entry once:
lilo -v -R RAID
13.) Update fstab on new raid volume /mnt/md0/etc/fstab
So change the root entry in /mnt/md0/etc/fstab, which was previously:
/dev/hda1 / reiserfs defaults 0 0
and is now:
/dev/md0 / reiserfs defaults 0 0
(Use whatever filesystem type you formatted /dev/md0 with in step 4.)
We commented out the original root and set the raid volume as the new root: originally the root was /dev/hda1, now it is /dev/md0.
Now it will refer to the correct root partition when you boot with /dev/md0 as your root partition.
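The fstab edit is a one-line substitution. Shown here against a sample line; against the real file the same edit would be sed -i 's|^/dev/hda1 |/dev/md0 |' /mnt/md0/etc/fstab (GNU sed's in-place flag):

```shell
# Flip the root device in a copied fstab line from /dev/hda1 to /dev/md0.
echo '/dev/hda1 / ext3 defaults 0 0' | sed 's|^/dev/hda1 |/dev/md0 |'
```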
Reboot and see if it comes up with the new raid volume.
If it does not work, just reboot, and you will come up with your previous boot partition.
14.) Verify you are booted into the correct partition.
If you type mount it should show:
/dev/md0 on / type reiserfs (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
Now we are booted into the new raid volume. Our raid 1 volume only has 1 disk in it.
15.) Change the partition type on /dev/hda1 to software raid.
Run cfdisk /dev/hda and select the partition.
Then select "Type", hit "enter", and type "FD".
This sets the partition type to "Software Raid"; it may already be set.
Then select "Boot" if not set, so you can boot off the device.
Then select "Write".
Note all the boot partitions that are members of your booting raid partition should have the bootable flag set.
So in this case /dev/hda1 and /dev/hdb1 should have bootable option set when you run cfdisk.
16.)Add the first disk to our existing raid volume.
mdadm --add /dev/md0 /dev/hda1
Note: We are adding /dev/hda1 into our existing raid volume, we are currently booted off of /dev/hdb1 as /dev/md0.
Now see if it is syncing:
cat /proc/mdstat
will show the rebuild in progress; to watch it sync:
watch cat /proc/mdstat
17.) Write the new /etc/lilo.conf settings; these are for when we are booted on raid.
YOU NEED raid-extra-boot to have lilo write the boot loader to all the disks.
YOU ARE OVERWRITING THE BOOT LOADER ON BOTH /dev/hda AND /dev/hdb.
You can keep your old boot entry for /dev/hda so you can boot either RAID or /dev/hda.
But remember: you don't want to boot a raid member disk outside the raid, because if you make changes on one disk and not the other it will hurt the synchronization.
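A sketch of what the raid-booted lilo.conf could look like; the image path and label names are my assumptions, raid-extra-boot is the option discussed above:

```
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdb   # write the boot sector to both disks
image=/vmlinuz
        label=RAID
        root=/dev/md0
        read-only
image=/vmlinuz
        label=Linux
        root=/dev/hda1              # fallback: boot the plain disk
        read-only
```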
18.) Write the new lilo settings (we are currently booted into raid).
Now we write the lilo settings temporarily:
lilo -R RAID
The -R switch tells lilo to use that label ONLY for the next reboot.
19.) NOW REBOOT.
You need to boot back in and run lilo for real (plain lilo, with no -R) to make the configuration permanent.
YOU ARE DONE.
I have found it useful when setting software raid on multiple partitions to set the raid device to the same name as the disk partition.
If I have 3 partitions on /dev/hda and I want to make /dev/hdb into a software raid, I boot off /dev/hdb and add /dev/hda back into the volume: exactly what I did earlier, but with 3 partitions, which are:
hda1=/boot, hda2=/, hda3=/var
sfdisk -d /dev/hda | sfdisk /dev/hdb
mdadm --zero-superblock /dev/hda1
mdadm --zero-superblock /dev/hda2
mdadm --zero-superblock /dev/hda3
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/hdb1
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/hdb2
mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/hdb3
mkfs.reiserfs /dev/md1; mkfs.reiserfs /dev/md2; mkfs.reiserfs /dev/md3
mkdir /mnt/md1 /mnt/md2 /mnt/md3
cp -ax /boot/* /mnt/md1; cp -ax / /mnt/md2; cp -ax /var/* /mnt/md3
add entry in current fstab for all 3.
Sync the data again, copying only changed files:
cp -axu /boot/* /mnt/md1; cp -axu / /mnt/md2; cp -axu /var/* /mnt/md3
Edit the lilo.conf entry, in this case pointing root= at /dev/md2.
Edit /mnt/md2/etc/fstab to have / set to /dev/md2.
REBOOT into raid.
Add volumes in:
mdadm --add /dev/md1 /dev/hda1
mdadm --add /dev/md2 /dev/hda2
mdadm --add /dev/md3 /dev/hda3
Wait for the sync, write lilo permanently, and reboot into your setup.
It is not much harder to include more volumes in a software raid setup.
Lilo: You need special entries to use lilo as your boot loader. I couldn't get grub to work, but nothing prevents you from using grub if you have directions for it. Standard lilo entries WILL NOT WORK FOR RAID.
raid-extra-boot: This option only has meaning for RAID1 installations. The <option> may be specified as none, auto, mbr-only, or a comma-separated list of devices, e.g. "/dev/hda,/dev/hdc6".
Kernel panic: add a panic= setting to the kernel's append line in lilo (e.g. append="panic=10") so the machine reboots after a kernel panic; combined with lilo -R, that will automatically boot back to the old install if something goes wrong with the new kernel.
Disk Performance: /dev/hda and /dev/hdb are master and slave on the same IDE channel, so for maximum performance you want your disks on separate controllers. For this raid setup we actually want our disks on /dev/hda and /dev/hdc (the two master disks).
Note on hard disk configuration in linux
- Last drive on cable plugged into IDE 1 is hda -- 1st Master.
- Middle drive on cable plugged into IDE 1 is hdb -- 1st Slave.
- Last drive on cable plugged into IDE 2 is hdc -- 2nd Master.
- Middle drive on cable plugged into IDE 2 is hdd -- 2nd Slave.
Copying Data: use "cp -axu" to copy only updated items. If you are copying a partition that is not root, you need to copy the contents of the mount point, not the mount point itself, otherwise it will just copy the directory over. To copy /boot, which is a separately mounted partition, to /mnt/md1, our new software raid partition, copy it as: "cp -axu /boot/* /mnt/md1". NOTE THE DIFFERENCE when copying mount points and not just /. If you just do "cp -axu /boot /mnt/md1", it will copy boot as a subdirectory of /mnt/md1.
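The mount-point pitfall above, demonstrated on throwaway directories instead of /boot and /mnt/md1:

```shell
# Make a stand-in for /boot and for the new raid mount point.
boot=$(mktemp -d); md1=$(mktemp -d)
touch "$boot/vmlinuz"
cp -ax "$boot" "$md1"      # wrong for a mount point: nests the directory
cp -ax "$boot"/* "$md1"    # right: copies the directory's contents
ls "$md1"                  # shows vmlinuz plus the nested directory
```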
Rebooting: You should always reboot after changing your partitions, otherwise the kernel will not see the new partitions correctly.
I have changed partitions without rebooting, and it caused problems. I would rather have the simpler, longer, less troublesome approach: just because it appears to work does not mean it does work. You also need to reboot whenever you are testing a changed lilo configuration. Don't email me if you hose yourself because you did not feel the urge to reboot. Trust me.
initrd: Use raid as intrd modules: (Via Yazz D. Atlas)
The kernel that is installed when you first build a system does not use an initrd.img. However, if you use apt-get to install a new kernel from the Internet, those Debian kernel packages do use an initrd.img.
The new kernel by default won't contain the right modules, but they can be added.
Before upgrading to a new kernel, you should modify the system's /etc/mkinitrd/modules to include the following...
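The module list itself did not survive in this copy. Based on the raid1/raid5 requirement stated earlier, a plausible addition to /etc/mkinitrd/modules would be (the exact module names are my assumption, check yours with lsmod on a working raid system):

```
# raid modules for the initrd (assumed names)
md
raid1
raid5
```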
With those modules you should be able to install the new kernel-image package. The install will add those modules to the initrd.img that it builds.
Now you can, for example (I actually only tested with kernel-image-2.4.24-1-686-smp on a machine with testing and unstable listed in /etc/apt/sources.list):
apt-get install kernel-image-2.4.24-1-686-smp
You will need to modify /etc/lilo.conf to include the right entries, otherwise the postinstall scripts for the package will likely fail.
If you already have a new Debian kernel installed that has an initrd.img
already you can rebuild the initrd.img.
mkinitrd -o /boot/initrd.img-2.4.24-1-686-smp /lib/modules/2.4.24-1-686-smp
(The above is all one line)
run lilo and reboot.
You should now have the modules loaded, and cat /proc/mdstat should show the raid personalities.
Alternative way to copy files to raid partition:
find . -xdev -print | cpio -dvpm /mnt/md0
Quick Reference Mdadm (print this section)
Verify Raid Kernel, check /proc/mdstat.
copy partitions hda to hdc: sfdisk -d /dev/hda | sfdisk /dev/hdc
create array: mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/hdc1
copy data: cp -ax / /mnt/md0
create lilo entry for the 1-disk raid volume (root= set to /dev/md0, our new root partition).
add second disk to array: mdadm --add /dev/md0 /dev/hdc1
create final lilo entry (with raid-extra-boot, so the boot signatures are written to both disks).
References:
RAID 1 ROOT HOWTO PA-RISC
Lilo Raid Configuration:
Grub Raid Howto
Building a Software RAID System in Slackware 8.0
Software RAID HOWTO
HOWTO - Install Debian Onto a Remote Linux System
Kernel Compilation Information and good getting started info for Debian