About growing logical volumes, md devices, partitions and filesystems in Ubuntu 10.10

Recently I found myself in a situation where I was running out of space on my server. Luckily for me, the disks were not 100% utilized. On the positive side, that meant I had the possibility to extend my system without buying new hardware. On the negative side, it meant I had to fool around with my precious data. Normally I do this for a living, but at my job it is just customer data. In this case we are talking about pictures of the kids and wife….. Much more important stuff for sure :-)

So, my setup:

  • / mounted as an ext3 fs on top of a logical volume (LVM2) called root in vg000
  • /boot mounted as an ext3 fs directly on /dev/md0
  • vg000 has one physical volume called /dev/md1
  • /dev/md0 is a RAID1 device on /dev/sda1 and /dev/sdb1
  • /dev/md1 is a RAID1 device on /dev/sda2 and /dev/sdb2
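
For reference, you can map out a layout like this yourself with the standard md and LVM tools (a quick sketch, not from the original session):

# Show the md arrays, then the LVM physical volumes, volume
# groups and logical volumes, then the mounted filesystems
cat /proc/mdstat
pvs && vgs && lvs
df -h / /boot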

Here is the business case: neither /dev/sda2 nor /dev/sdb2 was fully utilized. /dev/sdb2 had 67GB unused and /dev/sda2 had around 250GB unused. Since a RAID1 array can only be as big as its smallest member, that meant I could grow / by 67GB up front, for free!
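
If you want to check whether you are in the same lucky spot, compare the total disk size fdisk reports with where the last partition ends; the difference is the free space you can grow into (adjust the device names to your own setup):

# Print both disks' partition tables and compare the disk size
# with the end of the last partition
fdisk -l /dev/sda /dev/sdb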

I started out with this situation:

root@edison:~# df -k /
Filesystem             1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg000-root 240149008 232617156   2652288  99% /

So how do you do this in an easy, controlled way, without taking time-consuming backups of hundreds of GB (I do make backups, so it is very easy for me to write this ;-) )? You can do it like I did below.

<DISCLAIMER>This is best-effort, free-of-charge information. If you end up breaking your system into several pieces, it is your responsibility. I cannot and will not be held liable. You get to keep the pieces yourself! That will hopefully teach you not to trust ‘expert advice’ from some random internet place run by a random Danish guy.</DISCLAIMER>

That said, let's do some storage work.

# First I fail /dev/sdb2
mdadm --fail /dev/md1 /dev/sdb2

# Then I remove /dev/sdb2 from the running config
mdadm --remove /dev/md1 /dev/sdb2
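
# (My suggestion, not part of the original run: sanity-check that
# the array is now degraded, with only /dev/sda2 active)
cat /proc/mdstat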
# Then we delete and recreate /dev/sdb2 with a larger size.
# Do a print beforehand to see the "before" values
fdisk /dev/sdb
Command (m for help): p

Disk /dev/sdb: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa286eb78

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          26      208813+  fd  Linux raid autodetect
/dev/sdb2              27       30400   243979155   fd  Linux raid autodetect
# delete it
Command (m for help): d
Partition number (1-4): 2
# recreate with a larger size and the same type; keeping the same
# starting cylinder is what leaves the existing data untouched
Command (m for help): n
Command action
 e   extended
 p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (27-38913, default 27):
Using default value 27
Last cylinder, +cylinders or +size{K,M,G} (27-38913, default 38913):
Using default value 38913
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)
# Write it
Command (m for help): w

If you are so lucky that you are not using the device for anything else (i.e. booting), the kernel can reread the modified partition table on the live system. If the system does use the device for something, you have to reboot it. Which I had to do. Which I then did… by issuing the command:

reboot
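
Had the disk not been in use by the running system, one of these would have asked the kernel to reread the partition table without a reboot (a sketch; blockdev comes with util-linux, partprobe with parted):

# Ask the kernel to reload /dev/sdb's partition table
blockdev --rereadpt /dev/sdb
# or, alternatively
partprobe /dev/sdb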

When the server came back up, I assembled the /dev/md1 device again and waited for a resync. I sped up the resync quite significantly with the echo commands below, which basically just tell md to work as hard as it possibly can.

# Assemble again
mdadm --add /dev/md1 /dev/sdb2
# Speed up the rebuild process
echo 900000 > /proc/sys/dev/raid/speed_limit_min
echo 900000 > /proc/sys/dev/raid/speed_limit_max
# watch status, eat pizza, drink a beer
watch cat /proc/mdstat
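
Once the resync is done, it is a good idea to put the throttles back. To my knowledge 1000 and 200000 are the stock kernel defaults, but note down your own values before the echos above if you want to be exact:

# Restore the md rebuild throttles to the usual kernel defaults
echo 1000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max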

When the resync was done, I did it all over again, this time for /dev/sda2. After that I had grown the underlying partitions on both of my drives, but neither md nor LVM had noticed anything yet.

I made sure everything worked fine and went on to the next step – growing an md device on the fly.

# See current md device stats
root@edison:~# mdadm -D /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Sat Jul 31 16:45:08 2010
     Raid Level : raid1
     Array Size : 243979072 (232.68 GiB 249.83 GB)
  Used Dev Size : 243979072 (232.68 GiB 249.83 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Nov 15 20:16:27 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f510bad0:59d990c1:19dfcc76:62eb3300
         Events : 0.204498

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

# Grow the device to the max - took about 1-2 seconds, but see below
root@edison:~#  mdadm --grow --size=max /dev/md1
# Recheck the device stat.
root@edison:~# mdadm -D /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Sat Jul 31 16:45:08 2010
     Raid Level : raid1
     Array Size : 312359744 (297.89 GiB 319.86 GB)
  Used Dev Size : 312359744 (297.89 GiB 319.86 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Nov 15 20:17:21 2010
          State : active, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 78% complete

           UUID : f510bad0:59d990c1:19dfcc76:62eb3300
         Events : 0.204503

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

Notice how the array went into a resyncing state. Looking at mdstat confirms that. You can use the device before the mirroring is done… or you can play it safe and wait. I decided to go ahead and use it before the sync was done. Living on the edge… this is not work, only something your wife will never forgive you for ;-)

root@edison:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      312359744 blocks [2/2] [UU]
      [===============>.....]  resync = 78.6% (245745600/312359744) finish=21.2min speed=52265K/sec

md0 : active raid1 sdb1[1] sda1[0]
      208704 blocks [2/2] [UU]

unused devices: <none>

Next step: making LVM aware of the change. Quite easy, actually:

# Before
root@edison:~# pvdisplay /dev/md1
 --- Physical volume ---
 PV Name               /dev/md1
 VG Name               vg000
 PV Size               232,68 GiB / not usable 832,00 KiB
 Allocatable           yes (but full)
 PE Size               4,00 MiB
 Total PE              59565
 Free PE               0
 Allocated PE          59565
 PV UUID               saiFOT-Dw2O-OnVv-rA4b-wZG3-OQHN-fFD1wI

You then issue a pvresize on the PV, like this:

root@edison:~# pvresize /dev/md1
 Physical volume "/dev/md1" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized

And you end up with

root@edison:~# pvdisplay /dev/md1
 --- Physical volume ---
 PV Name               /dev/md1
 VG Name               vg000
 PV Size               297,89 GiB / not usable 2,62 MiB
 Allocatable           yes
 PE Size               4,00 MiB
 Total PE              76259
 Free PE               16694
 Allocated PE          59565
 PV UUID               saiFOT-Dw2O-OnVv-rA4b-wZG3-OQHN-fFD1wI
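
Had I wanted to be extra careful here, most LVM commands accept a dry-run flag that previews a change without committing it (my general habit, not something I did in the run above):

# Test mode: show what pvresize would do without actually doing it
pvresize --test /dev/md1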

We are getting there! Now on to extending the volume. First, peek at the current sizes and free space:

root@edison:~# vgdisplay vg000
 --- Volume group ---
 VG Name               vg000
 System ID
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  4
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                1
 Open LV               1
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               297,89 GiB
 PE Size               4,00 MiB
 Total PE              76259
 Alloc PE / Size       59565 / 232,68 GiB
 Free  PE / Size       16694 / 65,21 GiB
 VG UUID               rL7lfw-H2FC-4ala-KVV3-bLIA-MbA9-co2n1O
root@edison:~# lvdisplay /dev/vg000/root
 --- Logical volume ---
 LV Name                /dev/vg000/root
 VG Name                vg000
 LV UUID                uDPn6b-AvyO-xH2n-zpXm-n0Xt-5g5n-aSdu4C
 LV Write Access        read/write
 LV Status              available
 # open                 1
 LV Size                232,68 GiB
 Current LE             59565
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           251:0

Then extend the volume

root@edison:~# lvextend -l 76259 /dev/vg000/root
Extending logical volume root to 297,89 GiB
Logical volume root successfully resized
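
As an aside: instead of looking up the exact PE count in vgdisplay, LVM2 can also be told to grab all free space directly (worth verifying against your LVM2 version):

# Equivalent to the lvextend above, without counting PEs by hand
lvextend -l +100%FREE /dev/vg000/root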

One step left – resize the live filesystem

root@edison:~# resize2fs /dev/vg000/root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg000/root is mounted on /; on-line resizing required
old desc_blocks = 15, new_desc_blocks = 19
Performing an on-line resize of /dev/vg000/root to 78089216 (4k) blocks.
The filesystem on /dev/vg000/root is now 78089216 blocks long.
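
Online growing works here because ext3 filesystems are normally created with the resize_inode feature, which reserves room in advance for exactly this kind of expansion. If you want to check yours before trying (a sketch):

# Look for resize_inode in the feature list
tune2fs -l /dev/vg000/root | grep features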

How much did I gain?

root@edison:~# df -k /
Filesystem              1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg000-root  307454080 232621080  68588172  78% /

Almost a walk in the park. I did actually run into a problem during this procedure. When I removed /dev/sdb2, GRUB would not boot; instead it gave me a grub rescue prompt. I overcame this by booting a rescue CD and starting the system from that. I then added /dev/sdb2 back into /dev/md1 from the rescue CD, waited for the mirroring and rebooted again. When I did the same for /dev/sda2, GRUB was happy all along?! I have not bothered to figure out why GRUB misbehaved; I just put GRUB down as being like a teenager: unreliable and causing problems for no obvious reason ;-)
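
If you hit the same thing, reinstalling the GRUB boot code on both disks' MBRs from the rescue environment is a reasonable first move (just a guess at the cause, not something I verified):

# Reinstall GRUB on both disks, so either one can boot the system
grub-install /dev/sda
grub-install /dev/sdb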
