in branch offices i tend to install two identical pcs running linux and working in an active / hot-spare setup. things evolved over time – one location has both routers running under vmware esxi on two different hosts, another has hardware raid, yet another – desktop-class pcs with single hard drives. hardware raid is good as long as there are plenty of similar devices around and swapping the raid controller in case it dies is an option. mdadm [software] raid1 might be a reasonable solution for me during the next round of hardware upgrades. below are some notes from testing it under vmware and on a physical machine.
i’m running the setup using a release candidate of debian wheezy with two separate hard drives. in the installer’s boot menu i select Advanced options > Expert install; after the choice of language, keyboard layout, network parameters and login credentials and the disk detection phase i reach the interesting part – Partition disks > Manual.
there i repeat all the steps for the first and the second drive – sda and sdb.
- create gpt partition table
- create partitions:
- 1MB partition marked Use as: Reserved BIOS boot area/do not use
- 2018 addition: 100MB EFI System Partition
- as many partitions as needed for the OS, all of them marked Use as: physical volume for RAID
final result of partitioning the physical drives:
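for reference – roughly the same layout can also be scripted with sgdisk instead of clicking through the installer. a sketch only, the sizes are examples [the 100MB EFI System Partition from the 2018 note would shift the numbering by one]:

# run against an empty disk; repeat for /dev/sdb
sgdisk -o /dev/sda                       # fresh gpt partition table
sgdisk -n 1:0:+1M -t 1:ef02 /dev/sda     # 1MB bios boot partition
sgdisk -n 2:0:+3G -t 2:fd00 /dev/sda     # raid member for root
sgdisk -n 3:0:+1G -t 3:fd00 /dev/sda     # raid member for swap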
Using Configure software RAID i’m creating raid 1 devices on pairs of partitions earlier marked as physical volume for RAID.
after repeating that for all raid partitions:
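under the hood the installer uses mdadm for this step; done by hand it would look more or less like this [a sketch – partition numbers follow the layout above]:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3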
now file systems and mount points can be defined for newly created mdX devices spanning across both drives. in my case it’ll be just root and swap:
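done outside the installer, that step would boil down to roughly this [assuming ext4 for root]:

mkfs.ext4 /dev/md0   # root file system
mkswap /dev/md1      # swap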
after Finish partitioning and write changes to disk there’ll be a set of the usual questions about packages and mirrors. when asked where to place the GRUB boot loader i leave the default option – Install the GRUB boot loader to the master boot record: Yes:
after the reboot / and swap [in my case] are already mounted from the RAID1 spanning both physical drives, but i have to manually install grub on the 2nd drive so the system can boot even if the first disk is physically removed:
root@debian:~# grub-install /dev/sdb
Installation finished. No error reported.
and voilà – the system will boot up even if the first drive gets physically removed.
status of all raids can be checked with:
root@debian:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active (auto-read-only) raid1 sda3 sdb3
      1260480 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb2
      2927552 blocks super 1.2 [2/1] [_U]

unused devices: <none>
status of a single mdX device:
root@debian:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr 28 12:03:26 2013
     Raid Level : raid1
     Array Size : 2927552 (2.79 GiB 3.00 GB)
  Used Dev Size : 2927552 (2.79 GiB 3.00 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun Apr 28 12:34:09 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian:0  (local to host debian)
           UUID : 9ebb319c:b0463e21:4d852665:f6527d33
         Events : 75

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2
to re-add a disk that was removed for a while:
root@debian:~# mdadm -a /dev/md0 /dev/sda2
mdadm: added /dev/sda2
root@debian:~# mdadm -a /dev/md1 /dev/sda3
mdadm: added /dev/sda3
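if the member is still attached but marked as faulty, it has to be dropped from the array before the re-add – for example:

# mark the member failed, then remove it from md0
mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2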
to force a consistency check on a single mdX device:
echo check > /sys/block/md0/md/sync_action
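progress of the running check shows up in /proc/mdstat, and the result lands in sysfs next to sync_action:

watch -n5 cat /proc/mdstat          # live progress of the check
cat /sys/block/md0/md/mismatch_cnt  # non-zero means inconsistencies were found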
to clone the partition setup from one disk to another [for instance when a new blank drive was installed to replace a failed disk] i use gdisk:
# sgdisk is part of debian's gdisk package. it's available for squeeze via backports
# partition data from sda will be copied to sdb
sgdisk -R=/dev/sdb /dev/sda
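-R makes an exact copy, GUIDs included, so it’s worth giving the new disk fresh ones right after:

sgdisk -G /dev/sdb   # randomize the disk and partition GUIDs on the clone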
after that repeat for all mdX:
mdadm -a /dev/md0 /dev/sdb2
and after a while of syncing it’s done.
- RAID1 is not a replacement for scheduled and monitored backups – it’ll not save any information in case of accidental deletion
- BIOS of the computer has to be configured with both drives added to the ‘boot order’ so the OS can be loaded even if the first disk is gone
- disk swapping has to be done with the power off. i’ve tried to simulate hdd failure by disconnecting a drive while the system was running – all my attempts ended with an OS crash.
- grub-install /dev/sdb should be repeated whenever there’s an update of grub / the kernel [see the sketch after this list]
- to force a re-read of the partition table after modifications i used partprobe /dev/sdb; /sbin/blockdev --rereadpt /dev/sdb
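on debian the manual grub-install runs can be avoided by letting the grub-pc package target both disks – a sketch, assuming the stock packaging:

# pick both sda and sdb in the 'GRUB install devices' dialog;
# grub updates will then write the boot code to both drives
dpkg-reconfigure grub-pc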
2018 notes / debian stretch
- besides the 1MB Reserved BIOS boot area i had to add an EFI System Partition on both boot disks, 100MB each
- to make the system bootable from both drives i’ve cloned the content of that partition between the disks: dd if=/dev/sdX2 of=/dev/sdY2 bs=1M; it looks like this cloning does not have to be repeated too often [see the sketch below]
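in uefi mode the firmware may also need a boot entry pointing at the cloned ESP on the second drive – a sketch with efibootmgr; the partition number and loader path here are assumptions, adjust to the actual layout:

# add a nvram boot entry for the ESP clone on the second disk
# [\EFI\debian\grubx64.efi is debian's usual grub loader path]
efibootmgr -c -d /dev/sdY -p 2 -L 'debian (2nd disk)' -l '\EFI\debian\grubx64.efi'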