Thin Provisioning Function Support
Thin Provisioning (TP) allocates physical storage capacity flexibly among multiple volumes, committing only the minimum space each volume actually requires at any given time, rather than reserving each volume's full logical size up front.
Test Case
- On the Lightbits server, check the total free physical capacity of storage.
- Create volumes whose total logical size is larger than the free physical capacity of the storage.
- Prove that each client can allocate volumes that exceed the total physical capacity of the resource pool.
- Connect these volumes on the client side (each is reflected as a new namespace on the client), and use FIO to verify that they work properly.
- On the Lightbits server, check the freePhysicalStorage parameter to find out how much free storage capacity is available:
root@lightos-server00:~ lbcli get cluster -o json
{
"UUID": "2ac0b5f1-e332-4526-9799-e5cec2208837",
"subsystemNQN": "nqn.2016-01.com.lightbitslabs:uuid:1fa41a41-cf47-4ebd-b0f4-babda5fe322c",
"currentMaxReplicas": 3,
"supportedMaxReplicas": 3,
"statistics": {
"installedPhysicalStorage": "40007795470848",
"managedPhysicalStorage": "38005951500288",
"effectivePhysicalStorage": "27844312999526",
"logicalStorage": "85899345920",
"logicalUsedStorage": "0",
"physicalUsedStorage": "0",
"physicalUsedStorageIncludingParity": "0",
"freePhysicalStorage": "27844312999526",
"estimatedFreeLogicalStorage": "27844312999526",
"estimatedLogicalStorage": "27844312999526",
"compressionRatio": 1
}
}
In this example, 27.8 TB of physical capacity is free, which means that you can create a total of X terabytes of logical volumes.
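The freePhysicalStorage value can be pulled out of the JSON and converted to terabytes with a short helper. This is a sketch: the live pipeline in the comment assumes jq is installed; the byte value used here is the one from the sample output above.

```shell
# bytes_to_tb: convert a byte count to decimal terabytes (1 TB = 10^12 bytes).
bytes_to_tb() {
  awk -v b="$1" 'BEGIN { printf "%.1f\n", b / 1e12 }'
}

# On a live cluster, you could extract the value like this (requires jq):
#   free_bytes=$(lbcli get cluster -o json | jq -r '.statistics.freePhysicalStorage')
# Using the freePhysicalStorage value from the sample output above:
bytes_to_tb 27844312999526   # 27.8
```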
- Run a simple 'for' loop to create several volumes whose total capacity is greater than the 'freePhysicalStorage' value. In this example, we create six volumes of 12 TiB each: 6 x 12 = 72 TiB.
for i in {0..5}; do lbcli create volume --size="12 TiB" --name=vol$i --acl="acl5" --replica-count=1 --project-name=default; done
Sample Output
root@lightos-server00:~ for i in {0..5}; do lbcli create volume --size="12 TiB" --name=vol$i --acl="acl5" --replica-count=1 --project-name=default; done
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol0 362accb5-5918-4531-a4d5-bc9a4b957244 Creating Unknown 0 12 TiB 1 false values:"acl5"
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol1 eee903b2-da24-4715-81b7-0da49ffd03e4 Creating Unknown 0 12 TiB 1 false values:"acl5"
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol2 62ded229-eef0-4b8a-9281-11077be592a6 Creating Unknown 0 12 TiB 1 false values:"acl5"
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol3 24abc673-3c8e-4b08-adfa-aed18e408f51 Creating Unknown 0 12 TiB 1 false values:"acl5"
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol4 a8a1af43-5b55-461e-b08d-29320a6993ed Creating Unknown 0 12 TiB 1 false values:"acl5"
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol5 2d3fa8df-b6d8-491a-95ed-78d20d1cd940 Creating Unknown 0 12 TiB 1 false values:"acl5"
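A quick arithmetic check confirms that these volumes oversubscribe the pool. The numbers below are taken from the sample outputs on this page:

```shell
# Sanity check: six 12 TiB volumes exceed the pool's free physical capacity.
tib=$(( 1024 ** 4 ))                 # bytes per TiB
total_logical=$(( 6 * 12 * tib ))    # 79164837199872 bytes (~79.2 TB)
free_physical=27844312999526         # freePhysicalStorage from lbcli get cluster

if [ "$total_logical" -gt "$free_physical" ]; then
  echo "thin provisioning in effect: logical capacity exceeds physical"
fi
```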
- From the client server, connect to the newly created volumes and verify with lsblk that six new NVMe devices of 12 TiB each are attached:
root@lightos-server00:~ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 50G 0 disk
nvme0n8 259:20 0 12T 0 disk
nvme0n6 259:16 0 12T 0 disk
nvme0n4 259:13 0 12T 0 disk
nvme0n2 259:3 0 30G 0 disk
nvme0n7 259:18 0 12T 0 disk
sda 8:0 0 111.8G 0 disk
|-sda2 8:2 0 1G 0 part /boot
|-sda3 8:3 0 110.8G 0 part
| |-inaugurator--v2-osmosis--cache 253:1 0 30G 0 lvm
| |-inaugurator--v2-root 253:2 0 58.2G 0 lvm /
| `-inaugurator--v2-swap 253:0 0 8G 0 lvm [SWAP]
sda1 8:1 0 2M 0 part
nvme0n5 259:14 0 12T 0 disk
nvme0n3 259:7 0 12T 0 disk
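The six 12T namespaces can also be counted programmatically by filtering the lsblk output. The filter below is a sketch run against the sample lines above; device names and sizes will differ per system, and on a live client you would pipe real lsblk output instead:

```shell
# Sample lsblk rows (from the output above); on a live client, use:
#   lsblk | awk '$1 ~ /^nvme/ && $4 == "12T"' | wc -l
lsblk_sample='nvme0n8 259:20 0 12T 0 disk
nvme0n6 259:16 0 12T 0 disk
nvme0n4 259:13 0 12T 0 disk
nvme0n7 259:18 0 12T 0 disk
nvme0n5 259:14 0 12T 0 disk
nvme0n3 259:7 0 12T 0 disk'

# Count NVMe devices whose SIZE column reads 12T.
echo "$lsblk_sample" | awk '$1 ~ /^nvme/ && $4 == "12T"' | wc -l   # 6
```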
- Run random FIO tests on the volumes and check performance:
- Make sure you have FIO installed. If not:
$ yum install fio -y
- With lshw, double-check the number of CPUs and set numjobs and cpus_allowed in the FIO file accordingly; for example, numjobs=32 and cpus_allowed=0-31 for a 32-CPU machine.
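The CPU-dependent values can be derived automatically instead of hard-coding them. A minimal sketch, assuming nproc is available (as on most Linux distributions):

```shell
# Derive numjobs and cpus_allowed for the FIO file from the CPU count.
cpus=$(nproc)
echo "numjobs=$cpus"
echo "cpus_allowed=0-$(( cpus - 1 ))"
```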
Sample FIO file:
root@rack03-server64:~ cat fio_lightOS.fio
[global]
ramp_time=0
rw=randwrite
#rwmixread=100 # Uncomment when using rw=randrw above
refill_buffers
loops=1
buffer_compress_percentage=50
buffer_compress_chunk=4096
direct=1
norandommap=1
time_based
cpus_allowed_policy=split
log_avg_msec=1000
numjobs=12
cpus_allowed=0-11
iodepth=32
randrepeat=0
ioengine=libaio
group_reporting=1
runtime=300
bs=4k
[job_1]
# Device paths for Lightbits volumes
filename=/dev/nvme0n5:/dev/nvme0n8:/dev/nvme0n4
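FIO expects multiple target devices in filename as a single colon-separated list. One way to build that list is a small loop over the attached namespaces (the device names below are the ones from the lsblk output above and will differ per system):

```shell
# Build the colon-separated filename list that fio expects.
devices=""
for d in /dev/nvme0n3 /dev/nvme0n4 /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8; do
  devices="${devices:+$devices:}$d"
done
echo "filename=$devices"
# filename=/dev/nvme0n3:/dev/nvme0n4:/dev/nvme0n5:/dev/nvme0n6:/dev/nvme0n7:/dev/nvme0n8
```

Once the job file is in place, run it with fio fio_lightOS.fio and compare the reported bandwidth and IOPS across the thin-provisioned volumes.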