Confirming the Cluster Client Connection to Lightbits
Lightbits Cluster Installation Process
| # | Installation Steps |
|---|---|
| 1 | Connecting your installation workstation to Lightbits’ software repository |
| 2 | Verifying the network connectivity of the servers used in the cluster |
| 3 | Setting up an Ansible environment on your installation workstation |
| 4 | Installing a Lightbits cluster by running the Ansible installation playbook |
| 5 | Updating clients (if required) |
| 6 | Provisioning storage, connecting clients, and performing IO tests |
Each /dev/nvmeX is a successful NVMe over TCP connection to a server in the cluster. When the optimized path is connected, a block device is created with the name /dev/nvmeXnY, which can then be used like any other block device (for example, by creating a filesystem on top of it and mounting it).
If you see a multipath error (with the NVMe block devices showing up as 0 bytes, or each replica/NVMe connection showing up as a separate NVMe block device), refer to the Lightbits Troubleshooting Guide, or contact Lightbits Support.
After you have entered the nvme connect command, you can confirm the client's connection to Lightbits by entering the nvme list command, which will list all of the NVMe block devices. For more information on each connection's multipathing, you can use nvme list-subsys, which will list all of the NVMe character devices.
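For reference, the client connection itself is typically established with an nvme connect command like the sketch below. The transport address, port, and volume NQN are placeholders taken from the sample output later in this section; substitute the values for your own cluster and volume. In this example's three-node cluster, the same command would be repeated for 10.10.10.101 and 10.10.10.102.
# Connect this client to one Lightbits node over NVMe/TCP (placeholder values)
$ sudo nvme connect -t tcp -a 10.10.10.100 -s 4420 \
    -n nqn.2014-08.org.nvmexpress:NVMf:uuid:b5fe744a-b919-465a-953a-a8a0df7b9d31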
The nvme list and lsblk commands will show the NVMe block device that is created upon a successful connection. It will have a name of the form nvmeXnY - in this example, nvme0n1. The nvme list-subsys command will list all of the paths that make up these block devices; these paths appear as character devices. From the output below we can conclude that the block device nvme0n1 is made up of three character devices: nvme0, nvme1, and nvme2. When we need to interact with the block device - for example, to create a filesystem and mount it - we interact with the block device, nvme0n1, and not the character devices (nvme0, nvme1, and nvme2).
Sample Command
$ nvme list-subsys
Sample Output
nvme-subsys0 - NQN=nqn.2014-08.org.nvmexpress:NVMf:uuid:b5fe744a-b919-465a-953a-a8a0df7b9d31
\
+- nvme0 tcp traddr=10.10.10.100 trsvcid=4420 live
+- nvme1 tcp traddr=10.10.10.101 trsvcid=4420 live
+- nvme2 tcp traddr=10.10.10.102 trsvcid=4420 live
Next, review your connected block devices to see the newly connected NVMe/TCP block device using the Linux lsblk command.
Sample Command
$ lsblk
Sample Output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 2G 0 disk
sdb 8:16 0 223.6G 0 disk
|-sdb2 8:18 0 222.6G 0 part
| |-Alma00-swap 253:1 0 22.4G 0 lvm [SWAP]
| |-Alma00-home 253:2 0 150.2G 0 lvm /home
| |-Alma00-root 253:0 0 50G 0 lvm /
|-sdb1 8:17 0 1G 0 part /boot
sda 8:0 0 111.8G 0 disk
A new nvme0n1 block device with 2GB of storage is identified and available.
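With the block device available, you can now put a filesystem on it and mount it, as noted earlier. The following is a minimal sketch that uses the nvme0n1 device from the sample output above and a hypothetical /mnt/lightbits mount point.
# Create a filesystem on the NVMe/TCP block device and mount it (example paths)
$ sudo mkfs.ext4 /dev/nvme0n1
$ sudo mkdir -p /mnt/lightbits
$ sudo mount /dev/nvme0n1 /mnt/lightbits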
To determine which node in the cluster is the primary and which is the secondary for this block device, enter the nvme list-subsys command with the block device name.
Sample Command
$ nvme list-subsys /dev/nvme0n1
Sample Output
nvme-subsys0 - NQN=nqn.2014-08.org.nvmexpress:NVMf:uuid:b5fe744a-b919-465a-953a-a8a0df7b9d31
\
+- nvme0 tcp traddr=10.10.10.100 trsvcid=4420 live optimized
+- nvme1 tcp traddr=10.10.10.101 trsvcid=4420 live inaccessible
+- nvme2 tcp traddr=10.10.10.102 trsvcid=4420 live
In the output, the optimized status identifies the primary node, and the inaccessible status identifies a secondary node. In this case we can see that server 10.10.10.100 is the primary node with the optimized path, so all of the IO from the client will go to 10.10.10.100. The cluster then replicates the data to the other nodes.
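To finish with a basic IO test (step 6 in the installation process above), you can write to the mounted filesystem. This is a minimal sketch assuming the hypothetical /mnt/lightbits mount point from the earlier example; a tool such as fio can be used for more thorough testing.
# Write 1GB of data with direct IO, then remove the test file
$ sudo dd if=/dev/zero of=/mnt/lightbits/testfile bs=1M count=1024 oflag=direct
$ sudo rm /mnt/lightbits/testfile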