Migrating a Linux boot partition to RAID-1
Alexander Hajnal
4 Setting up the RAID-1 array
- Shut down your computer.
- Move your current root drive (which we will refer to as the old disk)
from master on the primary IDE bus (/dev/hda) to master on the secondary
IDE bus (/dev/hdc).
- Install the new disk that you will be using as the first drive of the
array as the master on the primary IDE bus (/dev/hda).
- Boot the system. As the computer starts up, hold down the Control key
until you get the lilo boot prompt. At the prompt, type
linux boot=/dev/hdc root=/dev/hdc1
where /dev/hdc reflects the location of the old drive. Once the computer
has booted, log in as root.
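To double-check that the kernel booted with the options you typed, you can
print the kernel command line (assuming /proc is mounted, as it is on any
normal boot):
cat /proc/cmdline
The parameters you entered at the lilo prompt should appear in the output.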
You may get mount errors during boot if you have additional partitions
listed in /etc/fstab that have moved to different locations or that have
been removed. If you do, edit /etc/fstab to reflect their new locations or
simply comment them out, and then reboot your machine. (If you do so, be
sure to specify the boot=/dev/hdc and root=/dev/hdc1 options while
booting.)
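For example, a data partition that moved along with the old drive might be
updated like this (a sketch; the /home mount point and ext3 type are only
illustrative):
# /dev/hda2   /home   ext3   defaults   0   2   <- old location, commented out
/dev/hdc2     /home   ext3   defaults   0   2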
- Create an /etc/raidtab with the array initially in degraded mode:
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        failed-disk             1
/dev/hda1 and /dev/hdc1 should reflect the physical locations of your new
and old drives, respectively.
The persistent-superblock option tells the kernel to write
meta-information about the array to each partition that is part of the
array. This provides protection against drive mix-ups and also allows the
array to boot even if one of the member drives has failed. By marking
/dev/hdc1 as failed, we are telling the kernel not to use it as part of
the array. This allows us to continue using our old, non-RAID boot disk
while we set up the array.
- Create a copy of the existing lilo.conf by running
cp /etc/lilo.conf /etc/lilo.conf.raid
Modify lilo.conf.raid to allow booting off of the array. The differences
from a non-RAID lilo.conf are:
boot=/dev/md0
raid-extra-boot="/dev/hda,/dev/hdc"
root=/dev/md0

image=/vmlinuz
        label=Linux
        read-only
        restricted
        append="md=0,/dev/hda1,/dev/hdc1"

image=/vmlinuz
        label=LinuxNoRaid
        read-only
        restricted
        append="root=/dev/hdc1 boot=/dev/hdc"
Note that RAID device names rather than physical device names are given
for the boot and root options. In addition, both lilo and the kernel need
to be told the geometry of the array. This is done with the
raid-extra-boot option (which tells lilo where to write the boot sectors)
and the append option (which tells the kernel about the array). Note that
if you already have an append line specified, you should add
md=0,/dev/hda1,/dev/hdc1 to the existing line.
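You can have lilo check the new file without writing anything to disk by
running it in test mode; -t performs a dry run and -C selects an alternate
configuration file:
lilo -t -C /etc/lilo.conf.raid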
- You may wish to back up the old drive's master boot record and partition
table. You can write these to a floppy disk by running
dd if=/dev/hdc of=/dev/fd0 bs=512 count=1
If anything goes wrong, you can restore the old MBR and partition table by
running
dd if=/dev/fd0 of=/dev/hdc bs=512 count=1
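To verify the backup, you can compare the first sector of the disk with
the copy on the floppy (a sketch using bash process substitution; cmp
prints nothing and exits 0 when the two match):
cmp <(dd if=/dev/hdc bs=512 count=1 2>/dev/null) \
    <(dd if=/dev/fd0 bs=512 count=1 2>/dev/null)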
- Determine the size of the old drive:
  - Run
fdisk /dev/hdc
  - Type u Enter to change the units to sectors.
  - Type p Enter to print the drive's partition table.
  - Write down the start and end sectors of the root partition (/dev/hdc1
in this example). For example, if the output of the p command was:
Disk /dev/hdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1   *           1   234436544   117218272   83  Linux
then you should write down 1 and 234436544.
  - Type q Enter to exit the program.
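Alternatively, assuming sfdisk (part of util-linux) is installed, the same
numbers can be read non-interactively; -l lists the partition table and
-uS reports it in sectors:
sfdisk -uS -l /dev/hdc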
- Clear the partition table from the new drive by running
dd if=/dev/zero of=/dev/hda bs=512 count=1
- Partition the new drive:
  - Reboot the system. As the computer starts up, hold down the Control
key until you get the lilo boot prompt. At the prompt, type
linux boot=/dev/hdc root=/dev/hdc1
where /dev/hdc reflects the location of the old drive. Once the computer
has booted, log in as root.
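  - Recreate the root partition on the new drive, reusing the start and
end sectors recorded earlier. The following is a sketch of the interactive
fdisk session (the sector values are those from the example above, the
prompts vary somewhat between fdisk versions, and the fd partition type is
assumed so that the kernel can autodetect the array at boot):
fdisk /dev/hda
u            # change the units to sectors
n            # create a new partition
p            # make it a primary partition
1            # partition number 1
1            # first sector (the recorded start)
234436544    # last sector (the recorded end)
t            # set the partition type
fd           # Linux raid autodetect
a            # toggle the bootable flag on partition 1
w            # write the partition table and exit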
- Build the array by running
mkraid /dev/md0
You should see something similar to the following (the size and superblock
information will vary):
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hda1, 4210888kB, raid superblock at 4210816kB
disk 1: /dev/hdc1, failed
It is normal for disk 1 to be shown as failed since we have explicitly
marked it as such. This is so that we can continue to use the /dev/hdc1
partition as the root partition while we finish setting up the array.
- Check that the array is set up correctly by running
cat /proc/mdstat
You should see something similar to this (again, the sizes will differ):
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[0]
      4210816 blocks [2/1] [U_]

unused devices: <none>
This shows that the array has been built but is degraded and that it
currently comprises a single partition, /dev/hda1.
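Later, once the second disk has been added and the array is
resynchronizing, the same file can be polled to watch the rebuild progress
(assuming the watch utility is installed):
watch -n 5 cat /proc/mdstat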
- Create a filesystem on the array by running
mke2fs -j /dev/md0
This will create an ext3 (ext2 with journal) filesystem. If you want, you
can use a different filesystem (ReiserFS, etc.) on the array instead of
ext3.
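For instance, to format the array with ReiserFS instead (assuming the
reiserfsprogs tools are installed; mkreiserfs asks for confirmation before
writing):
mkreiserfs /dev/md0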
- Test that you can mount the array by running
mkdir /raid
and then
mount /dev/md0 /raid
The filesystem should mount cleanly. When you have verified that it is
mountable, run
umount /raid
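While the array is still mounted, you can also sanity-check its size and
free space; df's -h flag prints sizes in human-readable units:
df -h /raid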