Thin Provisioning and Over Provisioning
In my previous blog we looked at LVM Snapshots. Another advanced feature of LVM2 is Thin Provisioning, a feature that is common in most specialist storage hardware. Creating thinly provisioned volumes allows more space to be provisioned than actually exists on the system. Although this may sound mad, it can be beneficial. It is a human trait to ask for more than we need, and this is certainly true of storage space requirements: we may ask for 15 TB of space but only ever use 3 TB. Thin Provisioning in LVM2 allows us to deploy volumes with more space than actually exists on the system, which keeps our users happy. We must make sure, though, that we adequately monitor the system to stop that undersupply becoming an issue. In the next lesson we look at Migrating LVM Data.
Install Required Software
The tools to manage Thin Provisioning in LVM may need to be installed. The package is thin-provisioning-tools in Arch Linux and Ubuntu, and device-mapper-persistent-data in CentOS. We can check for the existence of one of the tools and, if it is not there, install the required package:
Install on Ubuntu
# which thin_check || apt-get install -y thin-provisioning-tools
Install on Arch
# which thin_check || pacman -S --noconfirm thin-provisioning-tools
Install on CentOS
# which thin_check || yum install -y device-mapper-persistent-data
Creating the Thin Pool
LVM Thin Provisioning requires that the thinly provisioned volumes exist in what is known as a Thin Pool. The Thin Pool is just a special type of Logical Volume. The Thin Pool sets how much space is made available to thinly provisioned volumes. It is very important that we monitor the available space in this pool.
# lvcreate -L 200m --thinpool tpool vg1
  Logical volume "tpool" created.
The Thin Pool is created by specifying the --thinpool option when creating the special LV. We have assigned the name tpool and it is being created in the Volume Group vg1. We have set a deliberately small size of 200 MiB so we can demonstrate over provisioning and the need to monitor the Thin Pool. To monitor the Thin Pool tpool we can use lvdisplay.
# lvdisplay /dev/vg1/tpool
  --- Logical volume ---
  LV Name                tpool
  VG Name                vg1
  LV UUID                0hfHX1-HfFX-bTv0-NmxV-U6fB-c4Qc-Wyh0am
  LV Write Access        read/write
  LV Creation host, time yogi, 2017-08-23 09:49:31 +0000
  LV Pool metadata       tpool_tmeta
  LV Pool data           tpool_tdata
  LV Status              available
  # open                 0
  LV Size                200.00 MiB
  Allocated pool data    0.00%
  Allocated metadata     0.88%
  Current LE             50
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3
Critically, we must monitor the Allocated pool data value. As we have not created any LVs or added data yet, this sits at a healthy 0%. We can also use the command lvs to monitor the same value, where it appears in the Data% column:
# lvs
  LV    VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1   vg1  -wi-ao---- 600.00m
  tpool vg1  twi-a-tz-- 200.00m             0.00   0.88
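For scripted monitoring, that Data% value can be pulled out on its own (a minimal sketch; data_percent is a standard lvs reporting field, but check lvs -o help on your system):

# lvs --noheadings -o data_percent vg1/tpool
  0.00

A one-liner like this can be dropped into a cron job that raises an alert once the value crosses a chosen threshold.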
We can also look at the attributes in use for the pool.
For tpool the attributes read:
- Volume Type: Thin pool
- Permissions: Writable
- Allocation Policy: Inherited from the Volume Group
- Fixed minor number is not set
- State: Is marked as active
- Device: Is not open or mounted
- Target type: Is marked as thin provisioning
- Zeroed: Newly allocated blocks will be overwritten with zeros before use.
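If we just want the raw attribute string that these values are decoded from, the Attr field can be listed on its own (a quick sketch using the lv_attr reporting field):

# lvs -o lv_name,lv_attr vg1
  LV    Attr
  lv1   -wi-ao----
  tpool twi-a-tz--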
Creating Thin Volumes in LVM2
Now that we have the Thin Pool, we can allocate space to Thin Volumes. Remember that, in reality, we are limited to a ridiculously small 200 MiB; this does not stop us provisioning more space. We do so on the assumption that the space requested is not likely to equal the space needed. The Urban Penguin Rule of IT users applies:
Space Requested != Space Needed
Even if the users do need the requested space, it is unlikely that they will need it in its totality from the start. More space can be added to the Volume Group and Thin Pool later.
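Growing the pool later is straightforward (a sketch; /dev/sdc1 is a hypothetical device name, and the lvextend assumes free extents are available in vg1):

# vgextend vg1 /dev/sdc1        # add a new physical volume if the Volume Group itself is full
# lvextend -L +200m vg1/tpool   # grow the pool's data device by a further 200 MiB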
# lvcreate -V 1G --thin -n thin_volume vg1/tpool
  WARNING: Sum of all thin volume sizes (1.00 GiB) exceeds the size of thin pool vg1/tpool and the size of whole volume group (992.00 MiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Logical volume "thin_volume" created.
We can see that a warning is supplied, letting us know that we are over-provisioning. It also lets us know that we can enable auto-extension of the pool if required, much in the same way as with snapshots.
At least the users are happy; they have their 1 GiB volume, not knowing that we have cheated them a little.
If we look at the output of the lvs command again we will see the new volume:
# lvs
  LV          VG   Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1         vg1  -wi-ao---- 600.00m
  thin_volume vg1  Vwi-a-tz--   1.00g tpool        0.00
  tpool       vg1  twi-aotz-- 200.00m              0.00   0.98
We can see that the thin_volume is sized at 1 GiB. The attributes show a little more detail:
For thin_volume the attributes read:
- Volume Type: Thin Volume
- Permissions: Writable
- Allocation Policy: Inherited from the Volume Group
- Fixed minor number is not set
- State: Is marked as active
- Device: Is not open or mounted
- Target type: Is marked as thin provisioning
- Zeroed: Newly allocated blocks will be overwritten with zeros before use.
Note that, as the pool is now in use, it is marked as open (the o in its attribute string, twi-aotz--).
Monitoring Pool Usage
The Virtual Size of the thin volume is 1 GiB, as we have mentioned. If we format it with the ext4 file system, the process will write enough metadata to support the 1 GiB volume size. As the physical storage is just 200 MiB, we will find that the space used jumps significantly with just the formatting of the LV.
# mkfs.ext4 /dev/vg1/thin_volume
Using the lvs command we can see that the file system metadata takes around 3% of the space in the thin volume but over 16% of the space in the thin pool. It is the thin pool that we need to take great care over, ensuring that it does not run out of space:
# lvs
  LV          VG   Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1         vg1  -wi-ao---- 600.00m
  thin_volume vg1  Vwi-a-tz--   1.00g tpool        3.20
  tpool       vg1  twi-aotz-- 200.00m              16.38  1.37
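To watch the figures climb, we can mount the volume and write some data into it (a sketch; the mount point /mnt and the 50 MiB test file are assumptions):

# mount /dev/vg1/thin_volume /mnt
# dd if=/dev/zero of=/mnt/test.img bs=1M count=50
# lvs vg1/tpool                 # Data% on the pool will now be considerably higher

Should the pool ever fill completely, writes to the thin volumes within it will stall or fail, which is exactly why this monitoring matters.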
Configuring Auto Extending of the Thin Pool
Editing the settings in /etc/lvm/lvm.conf can allow auto-growth of the thin pool when required. By default, the threshold is 100, which means that the pool will never be auto-extended. If we set this to 75, the Thin Pool will auto-extend once the pool is 75% full, growing by the default of 20% of its size unless that value is also changed. We can see these settings by running grep against the file:
# grep -E '^\s*thin_pool_auto' /etc/lvm/lvm.conf
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent = 20
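The change can be made with an editor or scripted (a minimal sketch, assuming the default values shown above are still in place):

# sed -i 's/thin_pool_autoextend_threshold = 100/thin_pool_autoextend_threshold = 75/' /etc/lvm/lvm.conf
# grep -E '^\s*thin_pool_autoextend_threshold' /etc/lvm/lvm.conf
    thin_pool_autoextend_threshold = 75

Note that the extension itself is carried out by the dmeventd monitoring daemon, so the pool must be monitored, which it is by default on most distributions.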