How to convert linear LVM into RAID 1
Overview
Today I would like to present how I created a RAID 1 setup from one existing LVM logical volume and a new storage block device. I needed such an operation when I bought myself a new HDD with the intention of using it as a simple "copy-paste" RAID 1 style backup for my other HDD.
My configuration
As I already mentioned above, at the time of starting the configuration I had one HDD (explicitly: 1 PV (physical volume), 1 VG (volume group), 1 linear LV (logical volume)) and a second, newly bought HDD. At the end of the day I wanted to have 2 PVs and 1 VG, with the LV mirrored in RAID 1 across both PVs. For convenience, whenever I refer to a block device later in this article, I mean a storage block device.
For the sake of simplicity, let’s continue with experimental storage block devices so you can follow along and stay safe. Afterwards, you can try to apply it to your real setup.
For starters, let's create the playground. We have one block device, say /dev/sda1. It is a PV named /dev/sda1 which belongs to a VG by the name of vg_raid1. In this VG we have one LV (named lv0) which takes up the whole /dev/sda1 PV and is formatted as an ext4 fs.
The second block device should be a raw /dev/sda2. Although the name suggests that the partition resides on the same sda drive, it doesn't really matter; it could be any other block device as well. But there is one requirement - both of them have to be of the same size.
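If you don't have two spare partitions to experiment on, one hedged way to build such a playground is with loop devices (the device names below are just examples and will differ on your machine; the rest of the article keeps using /dev/sda1 and /dev/sda2):
# Create two ~1 GiB backing files and attach them as loop devices
$ truncate -s 1G disk0.img disk1.img
# losetup prints the assigned device name, e.g. /dev/loop10 and /dev/loop11
$ sudo losetup -f --show disk0.img
$ sudo losetup -f --show disk1.img
# Turn the first loop device into the starting PV/VG/LV described above
$ sudo pvcreate /dev/loop10
$ sudo vgcreate vg_raid1 /dev/loop10
$ sudo lvcreate -l 100%FREE -n lv0 vg_raid1
$ sudo mkfs.ext4 /dev/vg_raid1/lv0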
Space for EL
But what is an EL? Well, if you are asking yourself that question right now, maybe this will help: basically an EL (extent) is "the elementary block of LVM". We have to make sure that there is enough space on both PVs for ELs, so that you won't see the message
Insufficient free space: 1 extents needed, but only 0 available
But if you do, then shrink the existing lv0 (and the filesystem on it) to free up some space on /dev/sda1. For more information you can refer to the related Stack Exchange thread.
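A quick way to see how many extents a PV has in total and how many are free is pvdisplay, shown here on the example PV from this setup:
$ sudo pvdisplay /dev/sda1 | grep -E "Total PE|Free PE"
# A "Free PE" of 0 means there is currently no room for any extra extent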
There are at least two ways to perform the resize…
# Reduce the filesystem in lv_home to 50G
$ sudo resize2fs /dev/vg_muskoka/lv_home 50G
# Shrink the LV by 5G:
$ sudo lvreduce vg_muskoka/lv_home --size -5G
# or specify an absolute size of 50G
# Note: If you are experiencing problems mounting the reduced LV,
# it may be due to a size mismatch between the LVM layer and the
# filesystem layer. To fix this problem, remove the LV, recreate
# and format it, and then resize using only the lvreduce command
# with the -r option. This automatically reduces the filesystem
# and the LV. Refer to the example below:
$ sudo lvreduce -r vg_muskoka/lv_home --size 50G
(source)
…and I will perform the second one.
WARNING
Resizing your fs/LV might cause data loss or corruption. Before performing the resize it is a good idea to back up your files. Resize an fs/LV with valuable data on it at your own risk.
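As a minimal sketch, assuming the LV is not mounted anywhere else and /mnt is free, a simple file-level backup could look like this:
# Mount the LV, archive its contents, then unmount it again
$ sudo mount /dev/vg_raid1/lv0 /mnt
$ sudo tar -czf ~/lv0-backup.tar.gz -C /mnt .
$ sudo umount /mnt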
First things first: we have to get the exact Block size of the block device, and its exact Block count as well. Knowing these will let us specify the new size in an exact, absolute manner rather than a relative one.
This Stack Exchange thread describes how to do it; it goes like this
$ sudo dumpe2fs /dev/vg_raid1/lv0 | grep -E "Block\s(count|size)"
dumpe2fs 1.45.5 (07-Jan-2020)
Block count: 261120
Block size: 4096
What does it all mean? Let me explain. The Block size is the elementary block of the filesystem, defined when the drive is formatted. In the manual for mkfs.ext4 you can see the -b block-size parameter that sets it. Block size is expressed in B (bytes). Block count, in turn, represents the size of the block device in blocks. You can confirm it by multiplying Block count by Block size, and then dividing the result by 1024 once to get the value in KiB (don't confuse it with KB) or twice to get the value in MiB (again, don't confuse it with MB). To get the value in GiB, TiB and so on, keep dividing by further factors of 1024.
So in this case, the vg_raid1/lv0 LV is exactly 1069547520 bytes in size, which in plain English is about 1 GiB (gibibyte, don't confuse it with gigabyte, again ;) ).
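You can double-check that arithmetic straight from the shell, using the numbers from the dumpe2fs output above:
# Block count * Block size = size in bytes, then divide by 1024 twice for MiB
$ echo $((261120 * 4096))
1069547520
$ echo $((261120 * 4096 / 1024 / 1024))
1020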
An extent in LVM is usually 4 MiB (that is the default extent size). Coincidentally the number looks similar to the Block size above, but don't confuse the two: the Block size here is 4 KiB while an extent is 4 MiB. In the man page of vgcreate you can see that the extent size can be set to a specific value, as long as it is a power of 2 and at least 1 MiB (grep for -s, --physicalextentsize). You can check the extent size per VG like so
$ sudo vgs -o vg_name,vg_extent_size
VG Ext
vg_raid1 4,00m
vgubuntu 4,00m
Note that m here means MiB.
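For reference, the extent size is chosen when the VG is created; a hedged example (with a hypothetical VG name vg_example and device /dev/sdb1) of picking an 8 MiB extent size would be:
# Create a VG with a non-default physical extent size of 8 MiB
$ sudo vgcreate -s 8M vg_example /dev/sdb1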
Let's say we want to leave LVM about 32 MiB on each PV for its VG RAID 1 bookkeeping. In that case, if our current PV /dev/sda1 has about… 0 free space, it won't be enough by any means. To check the free space on a PV, use for example the pvs command
NOTE
Although the error message presented earlier in this guide shows that LVM requires space for just one extent, which in our case is 4 MiB, I chose 32 MiB for two reasons
- in the future, if one would like to extend the setup further, more space for extents might be required
- one may try to free just enough space for the exact number of needed extents, but at that level of fine-tuning, rounding the size to a boundary between physical extents might obstruct the undertaking (read further to find out what that means if it isn't clear yet)
$ sudo pvs -o pv_name,vg_name,pv_free
PV VG PFree
/dev/mapper/nvme0n1p3_crypt vgubuntu 0
/dev/sda1 vg_raid1 0
To change it, we will use the LVM built-in lvreduce command with the -r parameter, which the man page of the command describes like this
-r|--resizefs
Resize underlying filesystem together with the LV using fsadm(8).
As explained here, lvreduce -r takes the size parameter in an exact, absolute manner. Good that we already know the exact size. So now, let's reduce both the fs and the LV by roughly 32 MiB. We have an LV that is 1020.0 MiB in size, and we want to bring it down to somewhere around 988.0 MiB (1020 - 32 = 988); in the command below I requested 998M.
$ sudo lvreduce -r vg_raid1/lv0 --size 998M
Rounding size to boundary between physical extents: 1000,00 MiB.
fsck from util-linux 2.34
/dev/mapper/vg_raid1-lv0: clean, 11/65280 files, 8843/261120 blocks
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/mapper/vg_raid1-lv0 to 256000 (4k) blocks.
The filesystem on /dev/mapper/vg_raid1-lv0 is now 256000 (4k) blocks long.
Size of logical volume vg_raid1/lv0 changed from 1020,00 MiB (255 extents) to 1000,00 MiB (250 extents).
Logical volume vg_raid1/lv0 successfully resized.
lvreduce rounded the requested 998 MiB up to a boundary between physical extents (1000 MiB), so as a result we have 20 MiB of free PV space instead of 32 MiB, but that's fine in our case.
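For the curious, the rounding can be reproduced with a bit of shell arithmetic, assuming the 4 MiB extent size we saw earlier:
# Round the requested 998 MiB up to the next multiple of 4 MiB
$ echo $(( (998 + 3) / 4 * 4 ))
1000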
We can confirm that the vg_raid1/lv0 LV size was reduced by comparing the value of Block count
$ sudo dumpe2fs /dev/vg_raid1/lv0 | grep -E "Block\s(count|size)"
dumpe2fs 1.45.5 (07-Jan-2020)
Block count: 256000
Block size: 4096
We can check and confirm that our LV is now 1048576000 bytes, which is 1000.0 MiB, exactly 20 MiB less than before.
We can also see that the /dev/sda1 PV now has 20 MiB of free space on it.
$ sudo pvs -o pv_name,vg_name,pv_free
PV VG PFree
/dev/mapper/nvme0n1p3_crypt vgubuntu 0
/dev/sda1 vg_raid1 20,00m
New PV in the VG
First, we need to turn our new block device into an LVM PV. Our new PV will be /dev/sda2.
$ sudo pvcreate /dev/sda2
And we have to add it to a VG as well. Here, we want to add it to the same VG the original LV was added to - vg_raid1.
$ sudo vgextend vg_raid1 /dev/sda2
And as we can see, it’s in the VG now.
$ sudo pvs -o pv_name,vg_name
PV VG
/dev/mapper/nvme0n1p3_crypt vgubuntu
/dev/sda1 vg_raid1
/dev/sda2 vg_raid1
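A plain vgs call gives a quick summary as well; after the vgextend, the #PV column for vg_raid1 should read 2:
$ sudo vgs vg_raid1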
Convert LV to RAID1
Now, all that's left to do is to convert the original LV into a raid1 type. Here I'm using only one mirror image, but you can extend the number of images in the future. We are interested in converting the linear vg_raid1/lv0 LV into a raid1 mirrored LV, which will keep its one mirror on the /dev/sda2 PV.
$ sudo lvconvert --type raid1 vg_raid1/lv0 -m 1 /dev/sda2
Are you sure you want to convert linear LV vg_raid1/lv0 to raid1 with 2 images enhancing resilience? [y/n]: y
Logical volume vg_raid1/lv0 successfully converted.
However, if after this step you see this error message instead
$ sudo lvconvert --type raid1 vg_raid1/lv0 -m 1 /dev/sda2
Are you sure you want to convert linear LV vg_raid1/lv0 to raid1 with 2 images enhancing resilience? [y/n]: y
Insufficient free space: 1 extents needed, but only 0 available
That means you should revisit the Space for EL step of this guide. Basically there is not enough free space for the extents which the LVM VG needs for its RAID bookkeeping.
If everything went well, you can see that the free space on both PVs is now less than 20 MiB.
$ sudo pvs -o pv_name,vg_name,pv_free
PV VG PFree
/dev/mapper/nvme0n1p3_crypt vgubuntu 0
/dev/sda1 vg_raid1 16,00m
/dev/sda2 vg_raid1 16,00m
Exactly one extent less than before.
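Before relying on the mirror, it is also worth confirming that the initial synchronisation has finished; one way to watch it, using standard lvs report fields, is:
# Cpy%Sync should reach 100.00 once the mirror is fully synchronised
$ sudo lvs -a -o lv_name,copy_percent,devices vg_raid1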
Check the results
We can see that even though we didn't format the second PV, /dev/sda2, as an ext4 fs, it effectively became one. It even has an LVM identifier of sorts, and a shared UUID
$ sudo blkid
...
/dev/mapper/vg_raid1-lv0: UUID="987ba3d0-68f2-477a-a443-079283bf1c83" TYPE="ext4"
/dev/mapper/vg_raid1-lv0_rimage_0: UUID="987ba3d0-68f2-477a-a443-079283bf1c83" TYPE="ext4"
/dev/mapper/vg_raid1-lv0_rimage_1: UUID="987ba3d0-68f2-477a-a443-079283bf1c83" TYPE="ext4"
...
Another nifty way of confirming that RAID 1 is actually doing its thing is to create a file on the vg_raid1/lv0 LV, and then just run the strings command across both PVs to see if the file's name is visible on both of them. Make sure to mount the LV somewhere before using it.
$ sudo mount /dev/vg_raid1/lv0 /mnt
$ sudo touch /mnt/VERY_UNIQUE_FILE_NAME_INDEED
$ sudo strings /dev/sda1 | grep VERY_UNIQUE_FILE_NAME_INDEED
VERY_UNIQUE_FILE_NAME_INDEED
$ sudo strings /dev/sda2 | grep VERY_UNIQUE_FILE_NAME_INDEED
VERY_UNIQUE_FILE_NAME_INDEED
VERY_UNIQUE_FILE_NAME_INDEED
As you can see, on the mirror PV there are two instances of the file name. Why that is, I don't know yet, but I suspect it's because of how LVM RAID 1 works. It might have some kind of "global linking/names table". If you know the answer, you can send it to my email address webmaster@unexpectd.com and I will be really grateful if you do so! :) (I will post it here, with your name if you fancy it.)
Cheers!