Snapshots and Clones
With snapshots and clones, you can take fast, thin point-in-time copies of Lightbits volumes. A snapshot is taken from a volume, and a new volume can be created from a snapshot; such a volume is called a clone.
Creating the Initial Volume on Lightbits
To create a sample volume:
root@rack03-server72:~ lbcli create volume --size="25 Gib" --name=vol1 --acl="acl3" --replica-count=1 --project-name=default
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol1 01b20f49-13d6-468f-b82d-752594922a6e Creating Unknown 0 25 GiB 1 false values:"acl3"
Get the cluster details for attaching the volume to the client.
root@rack03-server72:~ lbcli get cluster
UUID Subsystem NQN Current max replicas Supported max replicas MinVersionInCluster MinAllowedVersion MaxAllowedVersion
2ac0b5f1-e332-4526-9799-e5cec2208837 nqn.2016-01.com.lightbitslabs:uuid:1fa41a41-cf47-4ebd-b0f4-babda5fe322c 3 3 2.3.12~b793 2.3.X
Connect to the volume from the client to attach the NVMe device (this can be executed with a simple 'for' loop, as shown below). Before running it, set LIGHTBITS_CLUSTER_NQN to the cluster's subsystem NQN (from lbcli get cluster) and VOLUME_ACL to the volume's ACL value.
LIGHTBITS_CONTROLLER_IPS="172.16.231.70 172.16.231.71 172.16.231.72"; for CONTROLLER_IP in $LIGHTBITS_CONTROLLER_IPS; do nvme connect -t tcp -a $CONTROLLER_IP --ctrl-loss-tmo -1 -n $LIGHTBITS_CLUSTER_NQN -s 4420 -q $VOLUME_ACL; done
Check that the volumes are mapped:
lsblk
Sample Output
root@rack07-server56:~ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 25G 0 disk
sda 8:0 0 111.8G 0 disk
|-sda2 8:2 0 1G 0 part /boot
|-sda3 8:3 0 110.8G 0 part
| |-inaugurator--v2-osmosis--cache 253:1 0 30G 0 lvm
| |-inaugurator--v2-root 253:2 0 58.2G 0 lvm /
| `-inaugurator--v2-swap 253:0 0 8G 0 lvm [SWAP]
`-sda1
Mount the NVMe device to a mounting point on the client.
Make sure the e2fsprogs package is installed. If not, install it with yum install e2fsprogs.
# Create directory to mount snapshot
root@rack07-server56:~ mkdir /tmp/snapshot
# Format device to ext4
root@rack07-server56:~ mkfs.ext4 /dev/nvme0n1
# Mount volume
root@rack07-server56:~ mount /dev/nvme0n1 /tmp/snapshot
# Check mount
root@rack07-server56:~ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
/dev/nvme0n1 25G 45M 24G 1% /tmp/snapshot
Write some data to the mounted NVMe disk.
Create a file using the dd command:
# Move to working directory
root@rack07-server56:~ cd /tmp/snapshot
# Add data
root@rack07-server56:/tmp/snapshot dd if=/dev/zero of=1g_sample_file.txt bs=4096 count=256000
# Check file content
root@rack07-server56:/tmp/snapshot ll -h
-rw-r--r--. 1 root root 1000M Mar 2 10:32 1g_sample_file.txt
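As a sanity check, the dd parameters above multiply out to the file size shown by ll -h:

```shell
# bs=4096 bytes per block, count=256000 blocks
bytes=$((4096 * 256000))
echo "${bytes} bytes"                 # 1048576000 bytes
echo "$((bytes / 1024 / 1024)) MiB"   # 1000 MiB, matching the 1000M listed
```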
From the Lightbits server, create a snapshot of the initial volume.
# Create snapshot
root@rack03-server72:~ lbcli create snapshot --name=snapshot1 --project-name=default --source-volume-name=vol1
# Check snapshot is available
root@rack03-server72:~ lbcli list snapshots --project-name=default
Name UUID Source volume UUID
snapshot1 ca6e5578-3289-4c03-8445-f035189698c3 01b20f49-13d6-468f-b82d-752594922a6e
From the client side, write more data to the mounted disk.
# Add data
root@rack07-server56:/tmp/snapshot dd if=/dev/zero of=1g_sample_file2.txt bs=4096 count=256000
# Check file content
root@rack07-server56:/tmp/snapshot ll -h
-rw-r--r--. 1 root root 1000M Mar 2 10:32 1g_sample_file.txt
-rw-r--r--. 1 root root 1000M Mar 2 11:35 1g_sample_file2.txt
Create a clone from the snapshot that was created.
root@rack03-server72:~ lbcli create volume --name=clone_snap1 --source-snapshot-uuid=ca6e5578-3289-4c03-8445-f035189698c3 --acl="acl3" --replica-count=1 --size="25 Gib" --project-name=default
Name UUID State Protection State NSID
Size Replicas Compression ACL Rebuild Progress
clone_snap1 30242a3b-cf7d-4a1e-ae0c-14295b631ebf Creating Unknown 0 25 GiB 1 false values:"acl3"
Connect to the clone and verify that it holds only the initial data.
If the same ACL is used, the cloned NVMe disk should be attached automatically on the client.
root@rack07-server56:~ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 25G 0 disk /tmp/snapshot
nvme0n2 259:3 0 25G 0 disk
Mount the clone to a different directory
# Create mount point
root@rack07-server56:~ mkdir /tmp/clone
# Mount volume
root@rack07-server56:~ mount /dev/nvme0n2 /tmp/clone
Check mounts
root@rack07-server56:~ df -h
/dev/nvme0n1 25G 3.0G 21G 13% /tmp/snapshot
/dev/nvme0n2 25G 1.1G 23G 5% /tmp/clone
See that only the original sample file, written before the snapshot was taken, exists:
# Move to clone directory
root@rack07-server56:~ cd /tmp/clone
# Check contents of clone volume
root@rack07-server56:/tmp/clone ll -h
total 1024004
-rw-r--r--. 1 root root 1048576000 Mar 2 10:32 1g_sample_file.txt
Roll back a volume to the snapshot, and verify that its contents now only have the file that was created before taking the snapshot.
The rollback operation takes a volume and restores it to a snapshot's state (data and metadata). It is recommended to remove active mounts, detach the volume, or flush caches before performing this operation.
The process should be as follows:
- Stop the process(es) that use the volume.
- Unmount the filesystem created on top of the volume; e.g., for mount point /mnt/volumetorollback, run umount /mnt/volumetorollback.
- Perform the rollback via lbcli.
- Remount the volume on /mnt/volumetorollback.
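The steps above can be sketched as a small script. The mount point, device, and UUIDs are placeholders taken from this walkthrough, and the run wrapper only echoes each command (dry-run) so the sequence can be reviewed before executing it for real:

```shell
#!/bin/sh
# Placeholder values -- substitute your own.
MOUNT_POINT=/mnt/volumetorollback
DEVICE=/dev/nvme0n1
SNAP_UUID=ca6e5578-3289-4c03-8445-f035189698c3
VOL_UUID=01b20f49-13d6-468f-b82d-752594922a6e

run() { echo "+ $*"; }   # dry-run wrapper; change 'echo "+ $*"' to '"$@"' to execute

run fuser -km "$MOUNT_POINT"          # stop processes using the volume
run umount "$MOUNT_POINT"             # unmount the filesystem on top of the volume
run lbcli rollback volume --project-name=default \
    --src-snapshot-uuid="$SNAP_UUID" --uuid="$VOL_UUID"
run mount "$DEVICE" "$MOUNT_POINT"    # remount the rolled-back volume
```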
Unmount the device from the mount point:
root@rack07-server56:~ umount -l /tmp/snapshot
umount: /dev/nvme0n1: not mounted
Check the UUID of the snapshot, and of the volume you would like to roll back:
# List volumes
root@rack03-server72:~ lbcli list volumes
Name UUID State Protection State NSID Size Replicas Compression ACL Rebuild Progress
vol1 01b20f49-13d6-468f-b82d-752594922a6e Available FullyProtected 2 25 GiB 1 false values:"acl3" None
clone_snap1 30242a3b-cf7d-4a1e-ae0c-14295b631ebf Available FullyProtected 3 25 GiB 1 false values:"acl3" None
# List snapshots
root@rack03-server72:~ lbcli list snapshots --project-name=default
Name UUID Source volume UUID State
snapshot1 ca6e5578-3289-4c03-8445-f035189698c3 01b20f49-13d6-468f-b82d-752594922a6e Available
To roll back the volume to the snapshot, run:
root@rack03-server72:~ lbcli rollback volume --project-name=default --src-snapshot-uuid=ca6e5578-3289-4c03-8445-f035189698c3 --uuid=01b20f49-13d6-468f-b82d-752594922a6e
Mount the volume again from the client side, and verify that only one file is there:
# Mount volume
root@rack07-server56:~ mount /dev/nvme0n1 /tmp/snapshot
# Check only original file exists
root@rack07-server56:~ ll -h /tmp/snapshot
total 1024004
-rw-r--r--. 1 root root 1048576000 Mar 2 10:32 1g_sample_file.txt
Create a snapshot-policy:
# Flags meaning
root@rack03-server72:~ lbcli create snapshots-policy --help
--days-in-cycle int      Days in cycle for a daily schedule policy (required if
                         hours-in-cycle wasn't provided) (default -1)
--description string     Policy description
--hours-in-cycle int     Hours in cycle for an hourly schedule policy (required if
                         days-in-cycle wasn't provided) (default -1)
--name string            Policy name (required)
--project-name string    Project name (required)
--retention-time string  Retention time determines how long to keep the snapshots
--start-time string      Schedule start time. It must be in the form HH:MM, where
                         HH is 00-23 and MM is 00-59
--volume-name string     Volume name to assign the policy to (required if
                         volume-uuid wasn't provided)
--volume-uuid string     Volume UUID to assign the policy to (required if
                         volume-name wasn't provided)
root@rack03-server72:~ lbcli create snapshots-policy --project-name=default --name=policy1 --volume-uuid=a7f67ad0-1f3d-49cb-9650-b82042378014 --description="my policy" --hours-in-cycle=2 --start-time=22:00 --retention-time=4h
Name UUID Volume Name State Type
policy1 3c03b2bf-e102-4c6c-8c6d-173224148f31 vol1 Creating Hourly
List snapshot-policies:
root@rack03-server72:~ lbcli list snapshots-policies --project-name=default --volume-uuid=a7f67ad0-1f3d-49cb-9650-b82042378014
Name UUID Volume Name State Type
policy1 3c03b2bf-e102-4c6c-8c6d-173224148f31 vol1 Active Hourly
Get a snapshot-policy:
root@rack03-server72:~ lbcli get snapshots-policy --project-name=default --uuid=3c03b2bf-e102-4c6c-8c6d-173224148f31 -o json
{
"UUID": "3c03b2bf-e102-4c6c-8c6d-173224148f31",
"name": "policy1",
"resourceUUID": "a7f67ad0-1f3d-49cb-9650-b82042378014",
"resourceName": "vol1",
"projectName": "default",
"schedulePolicy": {
"snapshotSchedulePolicy": {
"hourlySchedule": {
"startTime": "2021-03-08 T22:16:21Z",
"hoursInCycle": 2
}
},
"retentionTime": "14400s"
},
"description": "my policy",
"state": "Active"
}
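Note that the retentionTime in the JSON output is simply the --retention-time flag converted to seconds; the 4h passed above works out as:

```shell
hours=4
seconds=$((hours * 60 * 60))
echo "${seconds}s"   # 14400s, matching the retentionTime field in the JSON
```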
Roll back a volume:
root@rack03-server72:~ lbcli rollback volume --project-name=default --src-snapshot-uuid=bd52d3d8-65a5-49fe-a934-b7d8b07ef040 --uuid=05f49718-4897-4ff5-adb9-5d7ccd6fc138
Delete a snapshot or a snapshot policy:
# Delete snapshot
root@rack03-server72:~ lbcli delete snapshot --project-name=default --uuid=bd52d3d8-65a5-49fe-a934-b7d8b07ef040
# Delete snapshot policy
root@rack03-server72:~ lbcli delete snapshots-policy --project-name=default --uuid=3c03b2bf-e102-4c6c-8c6d-173224148f31