Snapshots and Clones
With snapshots and clones, you can take fast, thin point-in-time copies of Lightbits volumes. A snapshot is taken from a volume, and a new volume can be created from a snapshot (known as a clone).
Creating the Initial Volume on Lightbits
To create a sample volume:
```
root@rack03-server72:~ lbcli create volume --size="25 Gib" --name=vol1 --acl="acl3" --replica-count=1 --project-name=default
Name   UUID                                   State      Protection State   NSID   Size     Replicas   Compression   ACL             Rebuild Progress
vol1   01b20f49-13d6-468f-b82d-752594922a6e   Creating   Unknown            0      25 GiB   1          false         values:"acl3"
```

Get the cluster details for attaching the volume to the client.
```
root@rack03-server72:~ lbcli get cluster
UUID                                   Subsystem NQN                                                             Current max replicas   Supported max replicas   MinVersionInCluster   MinAllowedVersion   MaxAllowedVersion
2ac0b5f1-e332-4526-9799-e5cec2208837   nqn.2016-01.com.lightbitslabs:uuid:1fa41a41-cf47-4ebd-b0f4-babda5fe322c   3                      3                        2.3.12~b793           2.3.X
```

Connect to the volume from a client to see the NVMe device.
Connect to the volume to attach the NVMe device to the client (this can be executed with a simple ‘for’ loop as well).
```
LIGHTBITS_CONTROLLER_IPS="172.16.231.70 172.16.231.71 172.16.231.72"
for CONTROLLER_IP in $LIGHTBITS_CONTROLLER_IPS; do
  nvme connect -t tcp -a $CONTROLLER_IP --ctrl-loss-tmo -1 -n $LIGHTBITS_CLUSTER_NQN -s 4420 -q $VOLUME_ACL
done
```

Check that the volumes are mapped:
```
lsblk
```

Sample output:
```
root@rack07-server56:~ lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                              259:1    0    50G  0 disk
sda                                    8:0    0 111.8G  0 disk
|-sda2                                 8:2    0     1G  0 part /boot
|-sda3                                 8:3    0 110.8G  0 part
| |-inaugurator--v2-osmosis--cache   253:1    0    30G  0 lvm
| |-inaugurator--v2-root             253:2    0  58.2G  0 lvm  /
| `-inaugurator--v2-swap             253:0    0     8G  0 lvm  [SWAP]
`-sda1
```

Mount the NVMe device to a mount point on the client.
Make sure the e2fsprogs package is installed. If it is not, install it with `yum install e2fsprogs`.
```
# Create directory to mount snapshot
root@rack07-server56:~ mkdir /tmp/snapshot
# Format device to ext4
root@rack07-server56:~ mkfs.ext4 /dev/nvme0n1
# Mount volume
root@rack07-server56:~ mount /dev/nvme0n1 /tmp/snapshot
# Check mount
root@rack07-server56:~ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         32G     0   32G   0% /dev
/dev/nvme0n1     25G   45M   24G   1% /tmp/snapshot
```

Put some data into the mounted NVMe disk.
Create a file using the dd command:
```
# Move to working directory
root@rack07-server56:~ cd /tmp/snapshot
# Add data
root@rack07-server56:/tmp/snapshot dd if=/dev/zero of=1g_sample_file.txt bs=4096 count=256000
# Check file content
root@rack07-server56:/tmp/snapshot ll -h
-rw-r--r--. 1 root root 1000M Mar  2 10:32 1g_sample_file.txt
```

From the Lightbits server, create a snapshot of the initial volume.
```
# Create snapshot
root@rack03-server72:~ lbcli create snapshot --name=snapshot1 --project-name=default --source-volume-name=vol1
# Check snapshot is available
root@rack03-server72:~ lbcli list snapshots --project-name=default
Name        UUID                                   Source volume UUID
snapshot1   ca6e5578-3289-4c03-8445-f035189698c3   01b20f49-13d6-468f-b82d-752594922a6e
```

Put more data from the client side to the mounted disk.
```
# Add data
root@rack07-server56:/tmp/snapshot dd if=/dev/zero of=1g_sample_file2.txt bs=4096 count=256000
# Check file content
root@rack07-server56:/tmp/snapshot ll -h
-rw-r--r--. 1 root root 1000M Mar  2 10:32 1g_sample_file.txt
-rw-r--r--. 1 root root 1000M Mar  2 11:35 1g_sample_file2.txt
```

Create a clone from the snapshot that was created.
```
root@rack03-server72:~ lbcli create volume --name=clone_snap1 --source-snapshot-uuid=ca6e5578-3289-4c03-8445-f035189698c3 --acl="acl3" --replica-count=1 --size="25 Gib" --project-name=default
Name          UUID                                   State      Protection State   NSID   Size     Replicas   Compression   ACL             Rebuild Progress
clone_snap1   30242a3b-cf7d-4a1e-ae0c-14295b631ebf   Creating   Unknown            0      25 GiB   1          false         values:"acl3"
```

Connect to the clone and verify that it holds only the initial data.
If the same ACL is used, the cloned NVMe disk should be attached automatically on the client.
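Because the clone attaches asynchronously, scripts may want to poll briefly for the new device node before mounting it. A minimal sketch (the helper name, device path, and timeout are illustrative, not part of the Lightbits tooling):

```shell
# Illustrative helper: poll until a block device node appears, or give up
# after a timeout (in seconds). Returns 0 once the device exists.
wait_for_device() {
  dev="$1"; timeout="${2:-30}"; waited=0
  while [ ! -b "$dev" ]; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}

# Example: wait up to 30 seconds for the cloned volume's device
# wait_for_device /dev/nvme0n2 30 && echo "clone attached"
```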
```
root@rack07-server56:~ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1   259:1    0   25G  0 disk /tmp/snapshot
nvme0n2   259:3    0   25G  0 disk
```

Mount the clone to a different directory.
```
# Create mount point
root@rack07-server56:~ mkdir /tmp/clone
# Mount volume
root@rack07-server56:~ mount /dev/nvme0n2 /tmp/clone
```

Check the mounts:
```
root@rack07-server56:~ df -h
/dev/nvme0n1     25G  3.0G   21G  13% /tmp/snapshot
/dev/nvme0n2     25G  1.1G   23G   5% /tmp/clone
```

See that the clone contains only the original sample file, which existed before the snapshot was taken:
```
# Move to clone directory
root@rack07-server56:~ cd /tmp/clone
# Check contents of clone volume
root@rack07-server56:/tmp/clone ll -h
total 1024004
-rw-r--r--. 1 root root 1048576000 Mar  2 10:32 1g_sample_file.txt
```

Roll back a volume to the snapshot, and verify that it now contains only the file created before the snapshot was taken.
The rollback operation restores a volume to a snapshot's state (data + metadata). It is recommended to remove active mounts, detach the volume, or flush caches before performing this operation.
The process should be as follows:
- Stop the process(es) that use the volume.
- Unmount the filesystem created on top of the volume; e.g., for mount point /mnt/volumetorollback, run umount /mnt/volumetorollback.
- Perform rollback via lbcli.
- Mount the volume: mount /mnt/volumetorollback.
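Assuming the processes using the volume have already been stopped, the remaining steps can be sketched as a function (the function name is illustrative; the mount point and UUIDs reuse values from this walkthrough; pass echo as the first argument for a dry run):

```shell
# Illustrative sketch of the unmount -> rollback -> mount sequence.
# The first argument is a command prefix: "" to execute, "echo" to dry-run.
rollback_volume() {
  run="$1"; mnt="$2"; snap_uuid="$3"; vol_uuid="$4"
  $run umount "$mnt" &&
  $run lbcli rollback volume --project-name=default \
      --src-snapshot-uuid="$snap_uuid" --uuid="$vol_uuid" &&
  $run mount "$mnt"
}

# Dry run, printing the commands in order:
rollback_volume echo /mnt/volumetorollback \
    ca6e5578-3289-4c03-8445-f035189698c3 01b20f49-13d6-468f-b82d-752594922a6e
```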
Unmount the device from the mount point:
```
root@rack07-server56:~ umount -l /tmp/snapshot
umount: /dev/nvme0n1: not mounted
```

Check the UUID of the snapshot, and of the volume you would like to roll back:
```
# List volumes
root@rack03-server72:~ lbcli list volumes
Name          UUID                                   State       Protection State   NSID   Size     Replicas   Compression   ACL             Rebuild Progress
vol1          01b20f49-13d6-468f-b82d-752594922a6e   Available   FullyProtected     2      25 GiB   1          false         values:"acl3"   None
clone_snap1   30242a3b-cf7d-4a1e-ae0c-14295b631ebf   Available   FullyProtected     3      25 GiB   1          false         values:"acl3"   None
# List snapshots
root@rack03-server72:~ lbcli list snapshots --project-name=default
Name        UUID                                   Source volume UUID                     State
snapshot1   ca6e5578-3289-4c03-8445-f035189698c3   01b20f49-13d6-468f-b82d-752594922a6e   Available
```

To roll back the volume to the snapshot, run:
```
root@rack03-server72:~ lbcli rollback volume --project-name=default --src-snapshot-uuid=ca6e5578-3289-4c03-8445-f035189698c3 --uuid=01b20f49-13d6-468f-b82d-752594922a6e
```

Mount the volume again from the client side, and verify that only one file is there:
```
# Mount volume
root@rack07-server56:~ mount /dev/nvme0n1 /tmp/snapshot
# Check only original file exists
root@rack07-server56:~ ll -h /tmp/snapshot
total 1024004
-rw-r--r--. 1 root root 1048576000 Mar  2 10:32 1g_sample_file.txt
```

Create a snapshot-policy:
```
# Flags meaning
root@rack03-server72:~ lbcli create snapshots-policy --help
      --days-in-cycle int       Days in cycle for daily schedule policy (required if hours-in-cycle wasn't provided) (default -1)
      --description string      Policy description
      --hours-in-cycle int      Hours in cycle for hourly schedule policy (required if days-in-cycle wasn't provided) (default -1)
      --name string             Policy name (required)
      --project-name string     Project name (required)
      --retention-time string   Retention time will determine for how long to keep the snapshots
      --start-time string       Schedule start time. It must be in the form of HH:MM, in which HH is 00-23 and MM is 00-59
      --volume-name string      Volume name to assign policy to (required if volume-uuid wasn't provided)
      --volume-uuid string      Volume UUID to assign policy to (required if volume-name wasn't provided)

root@rack03-server72:~ lbcli create snapshots-policy --project-name=default --name=policy1 --volume-uuid=a7f67ad0-1f3d-49cb-9650-b82042378014 --description="my policy" --hours-in-cycle=2 --start-time=22:00 --retention-time=4h
Name      UUID                                   Volume Name   State      Type
policy1   3c03b2bf-e102-4c6c-8c6d-173224148f31   vol1          Creating   Hourly
```

List snapshot-policies:
```
root@rack03-server72:~ lbcli list snapshots-policies --project-name=default --volume-uuid=a7f67ad0-1f3d-49cb-9650-b82042378014
Name      UUID                                   Volume Name   State    Type
policy1   3c03b2bf-e102-4c6c-8c6d-173224148f31   vol1          Active   Hourly
```

Get a snapshot-policy:
```
root@rack03-server72:~ lbcli get snapshots-policy --project-name=default --uuid=3c03b2bf-e102-4c6c-8c6d-173224148f31 -o json
{
  "UUID": "3c03b2bf-e102-4c6c-8c6d-173224148f31",
  "name": "policy1",
  "resourceUUID": "a7f67ad0-1f3d-49cb-9650-b82042378014",
  "resourceName": "vol1",
  "projectName": "default",
  "schedulePolicy": {
    "snapshotSchedulePolicy": {
      "hourlySchedule": {
        "startTime": "2021-03-08T22:16:21Z",
        "hoursInCycle": 2
      }
    },
    "retentionTime": "14400s"
  },
  "description": "my policy",
  "state": "Active"
}
```

Rollback volume:
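Note that retentionTime in the JSON output is expressed in seconds ("14400s" is the 4h passed via --retention-time). A small sketch converting such a value back to hours (the helper name is illustrative):

```shell
# Illustrative helper: convert a retentionTime value like "14400s" to hours.
retention_to_hours() {
  secs="${1%s}"              # strip the trailing "s"
  echo $((secs / 3600))
}

retention_to_hours 14400s    # prints 4
```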
```
root@rack03-server72:~ lbcli rollback volume --project-name=default --src-snapshot-uuid=bd52d3d8-65a5-49fe-a934-b7d8b07ef040 --uuid=05f49718-4897-4ff5-adb9-5d7ccd6fc138
```

Delete snapshots/snapshot-policy:
```
# Delete snapshot
root@rack03-server72:~ lbcli delete snapshot --project-name=default --uuid=bd52d3d8-65a5-49fe-a934-b7d8b07ef040
# Delete snapshot policy
root@rack03-server72:~ lbcli delete snapshots-policy --project-name=default --uuid=3c03b2bf-e102-4c6c-8c6d-173224148f31
```