Volume Placement Feature
With Volume Placement, you can specify which failure domain a volume is placed on when it is created. Volume Placement lets you statically define, separate, and manage how volumes and their data tolerate failures and remain available through the failure of a server, rack, row, power grid, and so on. By default, Lightbits defines server-level failure domains. To define custom failure domains, see the Failure Domains article in the Lightbits Administration Guide.
Note that this feature is associated with the create volume command.
Current feature limitations of Volume Placement include the following:
- Available only for single replica volumes.
- Only failure domain labels are currently matched (e.g., fd:server00).
- Up to 25 affinities can be specified.
- The value (failure domain name) is limited to 100 characters.
- Volume Placement cannot be specified for a clone (a clone is always placed on the same nodes as the parent volume/snapshot).
- Dynamic rebalancing must be disabled.
In Lightbits 2.3.8 and above, the placement-affinity flag is available. It can be used as described below.
In this example, we create a 55 GiB volume and place it on server02 as the primary server.
First, disable the dynamic rebalancing features:
Disable fail-in-place mode:
lbcli disable feature-flag fail-in-place
Disable proactive rebalance mode:
lbcli disable feature-flag proactive-rebalance
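Optionally, verify that both flags are disabled before creating the volume. The listing command below is an assumption about your lbcli version's command set; check lbcli --help if it is not available:
# Confirm that fail-in-place and proactive-rebalance are disabled (command name assumed; verify with lbcli --help)
lbcli list feature-flags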
Create volume:
lbcli create volume --name=vol1 --acl=acl1 --size="55 Gib" --project-name=default --replica-count=1 --placement-affinity="fd:server02"
Sample Output
root@rack03-server72:~ lbcli create volume --name=vol65 --acl=acl3 --size="55 Gib" --replica-count=1 --project-name=default --placement-affinity="fd:server02"
Name   UUID                                  State     Protection State  NSID  Size    Replicas  Compression  ACL            Rebuild Progress
vol65  b1c49657-d4a5-4cf8-b8fd-13a0bd0e1358  Creating  Unknown           0     55 GiB  1         false        values:"acl3"
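To confirm on which node the volume was placed, you can query it after creation. The command below is a sketch that assumes the standard lbcli get volume flags; the returned volume details should show the node hosting the single replica:
# Query the volume and check the node it was placed on (flags assumed to match your lbcli version)
lbcli get volume --name=vol65 --project-name=default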
List the nodes to choose the Lightbits node on which you would like to place the volume (in this example, we will choose server02):
lbcli list nodes
Sample Output
root@rack03-server72:~ lbcli list nodes
Name        UUID                                  State   NVMe endpoint       Failure domains  Local rebuild progress
server02-0  3b7a262a-ba3f-5447-9c21-e3c8ed699f0f  Active  172.16.231.72:4420  [server02]       None
server00-0  8b980170-f941-5473-9162-36a17e866a36  Active  172.16.231.70:4420  [server00]       None
server01-0  cfbfdc90-43b3-5ce6-be99-1ab4171785b0  Active  172.16.231.71:4420  [server01]       None
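Because the placement-affinity value must match a label shown in the Failure domains column, you can filter the node list for the target domain, for example:
# Show only the nodes whose failure domain list contains server02
lbcli list nodes | grep server02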
From the client server, check that the NVMe device was created and that it was placed on the correct Lightbits node:
# List all block devices
root@rack07-server56:~ lsblk
NAME                               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                            259:1    0    25G  0 disk /snapshot/tmp
nvme0n4                            259:8    0    55G  0 disk
nvme0n2                            259:3    0    25G  0 disk
sda                                  8:0    0 223.6G  0 disk
|-sda2                               8:2    0     1G  0 part /boot
|-sda3                               8:3    0 222.6G  0 part
| |-inaugurator--v2-osmosis--cache 253:1    0    30G  0 lvm
| |-inaugurator--v2-root           253:2    0 147.7G  0 lvm  /
nvme0n3                            259:6    0   200G  0 disk
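If several NVMe namespaces are attached to the client, the new volume can be identified by its 55 GiB size, or with nvme-cli's namespace listing (the exact columns vary slightly between nvme-cli versions):
# List attached NVMe namespaces with their capacities
nvme list
# Or show just the new device
lsblk -o NAME,SIZE,TYPE /dev/nvme0n4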
# Check primary controller
root@rack07-server56:~ nvme list-subsys /dev/nvme0n4
nvme-subsys0 - NQN=nqn.2016-01.com.lightbitslabs:uuid:66ee2ad7-8602-49ed-9b82-0c85af8da36d
\
 +- nvme0 tcp traddr=172.16.231.70 trsvcid=4420 live
 +- nvme1 tcp traddr=172.16.231.71 trsvcid=4420 live
 +- nvme2 tcp traddr=172.16.231.72 trsvcid=4420 live optimized
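The path reported as optimized (the ANA optimized state) is the path to the primary node. Here it is 172.16.231.72, which matches server02's NVMe endpoint in the lbcli list nodes output above, confirming that the volume was placed on the requested failure domain. You can check this directly:
# The optimized path should point at server02's NVMe endpoint (172.16.231.72)
nvme list-subsys /dev/nvme0n4 | grep optimized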