LVM Snapshots
We will now start looking at some more advanced LVM2 topics, and first up is managing LVM Snapshots. In the next blog we will look at Thin Provisioning in LVM. LVM Snapshots are point-in-time copies of LVM Logical Volumes. They are space-efficient in that they start off storing no data, but as data is changed on the source LV the original data is written to the LVM Snapshot volume. The use cases for this include:
- Backups: An LVM snapshot itself is not an efficient backup, as it has to be stored within the same Volume Group. However, snapshots can be used to augment backups: create a snapshot of the target volume and then back up the snapshot volume to overcome any file-concurrency issues during the backup. The snapshot can be deleted at the end of the backup process; see the sketch after this list.
- Test and Destroy: LVM snapshots are a read/write copy of the original LV. You can create an LVM snapshot of an LV that contains complex scripts, change and test them as much as you want, and then destroy the data when you have finished. All without impacting the original scripts.
- Testing New Software Deployments: The installation of a new version of software may include many hundreds of files spread across many directories. If the target location for all of the software is on a single LVM Logical Volume, then we can create a snapshot before updating the software. If, after testing, the software is not functioning in the desired manner, it is a simple task to revert the original LV to the snapshot. The snapshot, of course, holds the point-in-time copy of the original source volume as it was when the snapshot was taken.
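As a quick illustration of the backup use case above, the outline below is a minimal sketch only: the snapshot size, mount point, backup path and the volume names vg1 and lv1 are assumptions to adapt to your own system.
#!/bin/bash
# Hypothetical backup-via-snapshot outline; adjust names, sizes and paths.
VG=vg1
LV=lv1
SNAP=${LV}_backup

# 1. Create a snapshot sized for the changes expected during the backup window.
lvcreate -L 100m -s -n "$SNAP" "/dev/$VG/$LV"

# 2. Mount the snapshot read-only and archive its contents.
mkdir -p /mnt/backup_snap
mount -o ro "/dev/$VG/$SNAP" /mnt/backup_snap
tar -czf "/backup/${LV}-$(date +%F).tar.gz" -C /mnt/backup_snap .

# 3. Remove the snapshot once the backup is complete.
umount /mnt/backup_snap
lvremove -y "/dev/$VG/$SNAP"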
Follow the link below to download the ‘Complete Guide to LVM2 in Linux’, costing just £1.99! Once it is downloaded the eBook can be printed out if required.
Preparing to Create an LVM Snapshot
An LVM snapshot has to be created in the same Volume Group as the source LV. Because snapshots use CoW (Copy-on-Write) technology, the underlying storage has to be the same, allowing unchanged data in the snapshot to be read from the original source LV. Our Volume Group is full, so we will delete the existing Logical Volume and recreate it at a smaller size. We will also use two mount points, /mnt/original and /mnt/snap.
# umount /mnt
# mkdir /mnt/{original,snap}
# lvremove /dev/vg1/lv1
# lvcreate -n lv1 -L 600m vg1
# mkfs.ext4 /dev/vg1/lv1
# mount /dev/vg1/lv1 /mnt/original
After all this work, we are now back to the situation where we have a newly formatted Logical Volume which we have called lv1. This is now mounted to /mnt/original for the purpose of the demonstration. We have created lv1 at 600 MiB, leaving 392 MiB of free space in the Volume Group vg1. Remember that we are only concerned with free space in the same Volume Group as our Logical Volumes. Snapshots have to be created in the same Volume Group as the source LV. If we check the output from the command vgs we can see the available free space:
# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   2   1   0 wz--n- 992.00m 392.00m
  vg2   2   0   0 wz--n- 192.00m 192.00m
This command, as we have seen before, is a great way to summarize Volume Groups. In order to be able to create a point in time snapshot of the Logical Volume, lv1, we will need to add some data. We will simply copy the /etc/services file to /mnt/original.
# cp /etc/services /mnt/original/
# wc -l /mnt/original/services
612 /mnt/original/services
We can also see that this file, on my Ubuntu Server, has 612 lines. We now have data so let’s crack on.
Creating LVM Snapshots in LVM2
LVM Snapshots are, in essence, simple Logical Volumes with some extra goodies bolted on. So, they are created using the lvcreate command and the -s option. We also need to specify the source volume when creating the snapshot.
# lvcreate -L 12m -s /dev/vg1/lv1 -n lv1_snap
  Logical volume "lv1_snap" created.
From the command options, we can see that we first specify the size to be 12 MiB. We only need enough space to store the changes that we make to the source. We could have filled the source LV with 600 MiB of data, but if only 12 MiB is likely to change then we only need a 12 MiB snapshot volume. We size the snapshot to match the amount of change that will occur. The option -s or --snapshot marks the new volume as a snapshot of the source volume that follows it. As before, the -n option sets the name of the LV we are creating. The snapshot is read/write so it can be mounted. Let's go ahead and mount it to the directory /mnt/snap:
# mount /dev/vg1/lv1_snap /mnt/snap/
We will be able to see the same content in both directories even though no changes have taken place. The LVM snapshot links back to the original data until it is changed.
# ls -l /mnt/original/ /mnt/snap
/mnt/original/:
total 36
drwx------ 2 root root 16384 Aug 21 18:52 lost+found
-rw-r--r-- 1 root root 19605 Aug 22 09:18 services

/mnt/snap:
total 36
drwx------ 2 root root 16384 Aug 21 18:52 lost+found
-rw-r--r-- 1 root root 19605 Aug 22 09:18 services
When we look at the detail of both the source and the snapshot volume, things will look a little different. Firstly, the lv1_snap volume, the snapshot LV:
# lvdisplay /dev/vg1/lv1_snap
  --- Logical volume ---
  LV Path                /dev/vg1/lv1_snap
  LV Name                lv1_snap
  VG Name                vg1
  LV UUID                s8gBiX-IV1z-jiZK-q4dN-paG5-8mvq-TCQkuk
  LV Write Access        read/write
  LV Creation host, time yogi, 2017-08-22 09:26:14 +0000
  LV snapshot status     active destination for lv1
  LV Status              available
  # open                 1
  LV Size                600.00 MiB
  Current LE             150
  COW-table size         12.00 MiB
  COW-table LE           3
  Allocated to snapshot  0.36%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3
We may recall, I certainly hope you do, that we created this with a size of 12 MiB. The LV size, though, shows as 600 MiB; the size we see is that of the original or source LV. The snapshot size shows in the COW-table size. The Copy-on-Write occurs when data is changed in the source and the original data is then copied to the snapshot volume. The snapshot volume will always present the original files as they were when the snapshot was taken, whether they have since changed or not. On creation of the snapshot there are no CoW changes to store, so the Allocated value is very low to start with and will increase as changes are made to the source data. When we look at the display details now for the lv1 LV, it too will be a little different:
# lvdisplay /dev/vg1/lv1
  --- Logical volume ---
  LV Path                /dev/vg1/lv1
  LV Name                lv1
  VG Name                vg1
  LV UUID                dmVaWm-kA9V-xouM-OZBR-b7Id-aMUh-EWymB0
  LV Write Access        read/write
  LV Creation host, time yogi, 2017-08-21 18:51:57 +0000
  LV snapshot status     source of lv1_snap [active]
  LV Status              available
  # open                 1
  LV Size                600.00 MiB
  Current LE             150
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
We can see that it shows as the source of the snapshot lv1_snap.
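A quicker way to check this relationship is to ask lvs for just the columns of interest. This is a small sketch using the volume names from this example; lv_name, origin, lv_size and data_percent are standard lvs report fields.
# lvs -o lv_name,origin,lv_size,data_percent vg1
The Origin column is populated for lv1_snap and Data% shows how much of its CoW space is currently in use.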
Changing LVM Snapshot Data
If we now make a change to the data in the source volume we will start to see a difference in the data in the snapshot volume. The snapshot volume will hold the original data whilst the source LV will hold the changes. First, we will add a new file to the source volume:
# cp /etc/hosts /mnt/original/
We can now see that the content differs between the source and the snapshot volumes:
# ls -l /mnt/original/ /mnt/snap
/mnt/original/:
total 40
-rw-r--r-- 1 root root   212 Aug 22 10:23 hosts
drwx------ 2 root root 16384 Aug 21 18:52 lost+found
-rw-r--r-- 1 root root 19605 Aug 22 09:18 services

/mnt/snap:
total 36
drwx------ 2 root root 16384 Aug 21 18:52 lost+found
-rw-r--r-- 1 root root 19605 Aug 22 09:18 services
We can also see that we start to consume more CoW space in the snapshot volume. Returning to the lvdisplay output for lv1_snap, we will see the Allocated to snapshot percentage increasing.
# lvdisplay /dev/vg1/lv1_snap
  --- Logical volume ---
  LV Path                /dev/vg1/lv1_snap
  LV Name                lv1_snap
  VG Name                vg1
  LV UUID                s8gBiX-IV1z-jiZK-q4dN-paG5-8mvq-TCQkuk
  LV Write Access        read/write
  LV Creation host, time yogi, 2017-08-22 09:26:14 +0000
  LV snapshot status     active destination for lv1
  LV Status              available
  # open                 1
  LV Size                600.00 MiB
  Current LE             150
  COW-table size         12.00 MiB
  COW-table LE           3
  Allocated to snapshot  0.98%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3
Even though we have not changed the existing data, adding new data will also affect the snapshot, as the snapshot volume must show the data as it was when the snapshot was taken. If necessary, we should be able to revert the original to the snapshotted data.
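Another simple way to confirm that the two trees have diverged is a recursive diff between the mount points; nothing LVM-specific here, just a convenient check using the directories from this example.
# diff -rq /mnt/original /mnt/snap
Any file added, removed or altered since the snapshot was taken will be reported.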
Next, we will overwrite the services file in /mnt/original:
# > /mnt/original/services
The file /mnt/original/services will now be empty. If we check the snapshot file though, it will still have the data. Using the command wc we can count the number of lines in each file:
# wc -l /mnt/original/services /mnt/snap/services
  0 /mnt/original/services
612 /mnt/snap/services
612 total
Of course, rechecking the output of lvdisplay for lv1_snap will show that the Allocated to snapshot percentage has increased with the change:
Allocated to snapshot 1.17%
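If changes are ongoing and you want to keep an eye on how quickly the CoW space is filling, one option (a sketch assuming the volume group name used here) is to poll the data_percent column:
# watch -n 5 "lvs -o lv_name,data_percent vg1"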
Volume Changes for LVM Snapshots
The attributes for both the source and snapshot LVs will change when the snapshot is created. We can see this in detail with the command lvdisplay, but it is better summarized with the command lvs:
# lvs
  LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1      vg1 owi-aos--- 600.00m
  lv1_snap vg1 swi-aos---  12.00m      lv1    1.37
The Attr field holds 10 attribute characters that we read from left to right; the first seven are the ones of interest here.
For lv1 the attributes read:
- Volume Type: Origin. The source of a snapshot
- Permissions: Writable
- Allocation Policy: Inherited from the Volume Group
- Fixed minor number is not set
- State: Is marked as active
- Device: Is open or mounted
- Target type: Snapshot, i.e. it is participating in a snapshot
For lv1_snap the attributes read:
- Volume Type: Snapshot volume
- Permissions: Writable
- Allocation Policy: Inherited from the Volume Group
- Fixed minor number is not set
- State: Is marked as active
- Device: Is open or mounted
- Target type: Snapshot, i.e. it is participating in a snapshot
The original and target LVs must be in the same Volume Group, as we have already mentioned, but they won't necessarily share the same devices within that Volume Group. This is normally abstracted from us, but we can add an option to the lvs command to show it:
# lvs -o +devices
  LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  lv1      vg1 owi-aos--- 600.00m                                                     /dev/sdc1(0)
  lv1      vg1 owi-aos--- 600.00m                                                     /dev/sdc2(0)
  lv1_snap vg1 swi-aos---  12.00m      lv1    1.37                                    /dev/sdc2(26)
Adding in the -o for options and +devices will show the underlying devices that make up the LV. We can see that lv1 is bigger than either /dev/sdc1 or /dev/sdc2 and is spanned across both, whereas lv1_snap can make use of the left-over space in /dev/sdc2. The number in brackets after the device name indicates the physical extent number that the LV starts on within that device. We can see that lv1 starts on extent 0 for both /dev/sdc1 and /dev/sdc2, and lv1_snap starts on extent 26 of /dev/sdc2. To see all options available to you with the -o option, use the command:
# lvs -o help
As the list is extensive we have not included the output.
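To view the same allocation from the Physical Volume side, pvdisplay with the -m (--maps) option lists the extent ranges on each PV and the Logical Volume segments they belong to. A small sketch using the devices from this example:
# pvdisplay -m /dev/sdc1 /dev/sdc2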
As well as looking at the Logical Volumes directly, we can also use the lsblk command. Here we will see more devices than perhaps you might expect:
# lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1                    7:1    0  100M  0 loop
sdb                      8:16   0  256M  0 disk [SWAP]
loop0                    7:0    0  100M  0 loop
sdc                      8:32   0 1000M  0 disk
├─sdc2                   8:34   0  500M  0 part
│ ├─vg1-lv1-real       252:1    0  600M  0 lvm
│ │ ├─vg1-lv1          252:0    0  600M  0 lvm  /mnt/original
│ │ └─vg1-lv1_snap     252:3    0  600M  0 lvm  /mnt/snap
│ └─vg1-lv1_snap-cow   252:2    0   12M  0 lvm
│   └─vg1-lv1_snap     252:3    0  600M  0 lvm  /mnt/snap
└─sdc1                   8:33   0  499M  0 part
  └─vg1-lv1-real       252:1    0  600M  0 lvm
    ├─vg1-lv1          252:0    0  600M  0 lvm  /mnt/original
    └─vg1-lv1_snap     252:3    0  600M  0 lvm  /mnt/snap
sda                      8:0    0  9.8G  0 disk /
We are already familiar with the vg1-lv1 and vg1-lv1_snap devices; these are the underlying LVs we have been working with. The kernel major number used by the device-mapper driver behind LVM2 is 252 here, so we see this for all the LVs we have in place. The first LV was vg1-lv1, so this has the minor number 0. When we look at vg1-lv1_snap, though, it has a minor number of 3, indicating it is the 4th LV and not the 2nd as we may expect. This is because two wrapper LVs are created for the snapshot, vg1-lv1-real and vg1-lv1_snap-cow. These are internally managed and we do not become involved with these objects that LVM2 uses to manage the snapshotting process. Each LV that we see here also has a corresponding device-mapper device. If we list the /dev directory and filter on dm-* we can show these:
# ls /dev/dm-*
/dev/dm-0  /dev/dm-1  /dev/dm-2  /dev/dm-3
Again, we see the 4 devices, not just the 2 we may have expected. So, in the background, LVM2 is managing a lot for us and allowing us access only to the elements that we need.
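If you want to poke at these internal objects yourself, the low-level dmsetup tool can list the device-mapper devices and show the target each one uses. This is purely an optional check; the names below assume the volumes from this example:
# dmsetup ls
# dmsetup table vg1-lv1_snap
The table for vg1-lv1_snap uses the snapshot target and references the -real and -cow devices we saw in the lsblk output.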
Reverting a Volume to the Snapshot
A recovery scenario with LVM snapshots allows us to revert to the snapshot. If we find that we need to revert the original LV to the point-in-time snapshot, we can do so. Remember that the snapshot is a representation of the original volume at the time the snapshot was taken. We may do this after a software upgrade has been tested and the decision has been made to revert to the snapshot, which should have been taken before the software upgrade process. To ensure that we can see this happen in real time, we will first unmount both LVs:
# umount /mnt/{original,snap}
If we do not close the devices, the merge will be deferred until the origin LV, lv1, is next activated; often this is on a reboot. With both Logical Volumes now unmounted and closed, we use the lvconvert command to merge the snapshot back into its parent or origin. This reverts the source Logical Volume to the snapshot contents. The snapshot is automatically removed at the end of the process.
# lvconvert --merge /dev/vg1/lv1_snap
  Merging of volume lv1_snap started.
  lv1: Merged: 99.2%
  lv1: Merged: 100.0%
Depending on the size of the data to be merged and the speed of the disks, it is possible that this can take some time. You may use the -b option to background the process.
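A minimal sketch of the background variant, using the same volume names; the merge then runs in the background and its state can be checked by re-running lvs:
# lvconvert -b --merge /dev/vg1/lv1_snap
# lvs vg1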
If we now mount lv1 again and check the contents, we will see that the hosts file added after the snapshot was taken is gone, and the services file once again has the content that we had overwritten.
# mount /dev/vg1/lv1 /mnt/original/
# ls /mnt/original/
lost+found  services
# wc -l /mnt/original/services
612 /mnt/original/services
Test and Development
Another use for LVM snapshots is in a test environment. Should you prefer to work directly with the snapshotted data, then you may. When you have finished, you just need to unmount the snapshot LV and delete it. The underlying original LV is left unchanged. This is great where you want to work on scripts, perhaps, without affecting the original production scripts.
The process is pretty much the same but we work just with the /mnt/snap directory now. A summary of the commands follows:
# lvcreate -L 12m -s /dev/vg1/lv1 -n lv1_snap
# mount /dev/vg1/lv1_snap /mnt/snap/
# > /mnt/snap/services
# wc -l /mnt/snap/services
0 /mnt/snap/services
# wc -l /mnt/original/services
612 /mnt/original/services
# umount /mnt/snap
# lvremove /dev/vg1/lv1_snap
Extending the Snapshot Size
When we create a snapshot volume we should set a size that we feel is adequate to store all of the changes made to the original LV whilst the snapshot is in place. However, it is possible to automatically expand a snapshot volume if required.
It is important to note that if a snapshot fills up completely it is invalidated and can no longer be used.
The default settings do not allow snapshots to grow automatically; we need to enable this. The configuration of LVM2 is in /etc/lvm/lvm.conf. We can search for the effective settings using grep:
# grep -E '^\s*snapshot_auto' /etc/lvm/lvm.conf
        snapshot_autoextend_threshold = 100
        snapshot_autoextend_percent = 20
Having snapshot_autoextend_threshold set at 100 means that the snapshot will never grow automatically; as soon as the snapshot fills completely it is invalidated. If we need autoextend enabled, consider setting this to something like 70, in which case, when the snapshot becomes 70% full, its size will be increased. The size of the increase is controlled by snapshot_autoextend_percent. The default of 20 means the size will increase by 20% of its current size each time growth is required.
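Whether or not autoextend is enabled, a snapshot that is filling up can also be grown by hand while it is in use. A short sketch using the 12 MiB snapshot from this example; the extra 8 MiB is an arbitrary figure and, of course, there must be free space available in the Volume Group:
# lvextend -L +8m /dev/vg1/lv1_snap
# lvs -o lv_name,lv_size,data_percent vg1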
The video follows. In the next blog we will look at Thin Provisioning in LVM.