recently we needed to expand the storage space available on one of our servers. originally it used RAID10 on 4 4TB SSD drives handled by Dell's PERC H730P controller; we wanted to add 2 more 4TB drives and grow the array from 8 to 12TB. we decided to be brave and use the controller's RAID10 -> RAID10 array expansion. and – surprisingly – we did not lose any data!
first – make sure you take a backup of all data on the array. you should be backing up your data regularly anyway, and especially when you plan a risky operation like this one.
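the right backup method depends on your setup; as a minimal sketch, a plain tar archive to a destination on a different device works. the demo below uses temporary directories standing in for /mnt/data and the backup target, so it can run anywhere:

```shell
# demo on temporary directories; in practice SRC would be /mnt/data
# and DST a path on a *different* physical device or machine
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "important" > "$SRC/file.txt"

# -C: archive paths relative to SRC, -p: preserve permissions
tar -C "$SRC" -cpf "$DST/data-backup.tar" .

# sanity check: the archive lists cleanly and contains our file
tar -tf "$DST/data-backup.tar" | grep -q 'file.txt' && echo "backup archive readable"
```

verifying that the archive is readable before touching the array is the cheap insurance here.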
adding more disks to the RAID10 array is shown in this video – https://www.youtube.com/watch?v=Rh_ZqE7foy8 ; it took a few minutes of clicking in the BIOS, and most likely can also be done with the MegaCli command line tool, without a reboot.
the expansion was quite slow and took over 24h. after that the linux kernel happily reported:
sd 0:2:2:0: [sdb] 23437636224 512-byte logical blocks: (12.0 TB/10.9 TiB)
sdb: detected capacity change from 8000046497792 to 12000069746688
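the two numbers the kernel prints agree with each other: 23437636224 logical blocks of 512 bytes each is exactly the new byte size, and the TB/TiB pair is the same value in decimal vs binary units:

```shell
# 23437636224 blocks * 512 bytes = new device size in bytes
echo $((23437636224 * 512))                                    # 12000069746688
# decimal terabytes (1000^4) vs binary tebibytes (1024^4)
awk 'BEGIN { printf "%.1f TB\n",  12000069746688 / 1000^4 }'   # 12.0 TB
awk 'BEGIN { printf "%.1f TiB\n", 12000069746688 / 1024^4 }'   # 10.9 TiB
```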
once the raid expansion finished, the block device structure showed the new size for sdb, but not for the underlying partition, the luks container, or the file system mounted from it – output from lsblk:
sdb          8:16   0  10.9T  0  disk
└─sdb1       8:17   0   6.9T  0  part
  └─data   254:0    0   6.9T  0  crypt  /mnt/data
/mnt/data is a btrfs file system.
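lsblk rounds sizes to human-readable units, which hides small discrepancies; its -b flag prints exact byte counts, which is handy for confirming each layer of the stack after every resize step (the column list below is a choice, not the defaults):

```shell
# human-readable overview of the device stack
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT
# same, but with exact sizes in bytes (-b)
lsblk -b -o NAME,TYPE,SIZE
```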
we needed to expand sdb1 first, then the luks container sitting on top of it, and finally extend btrfs:
apt-get install cloud-guest-utils
growpart /dev/sdb 1
CHANGED: partition=xxx start=xxx old: size=xxx end=xxx new: size=xxxx,end=xxx
at this point lsblk will show:
sdb          8:16   0  10.9T  0  disk
└─sdb1       8:17   0  10.9T  0  part
  └─data   254:0    0  10.9T  0  crypt  /mnt/data
btrfs sitting on top of luks's /dev/mapper/data is still not aware of the additional 4TB of space; btrfs filesystem show /mnt/data/ still reports 6.9TB. (in our case the luks mapping picked up the new partition size by itself, as the lsblk output above shows; had it not, cryptsetup resize data would have grown it.) the last step needed is:
btrfs filesystem resize max /mnt/data/