Connecting Application Servers to Lightbits

Connecting an application server to the volumes on the Lightbits storage server is accomplished through the following procedure.

Connecting an Application Server to a Volume

| Step | Command Type | Simplified Command |
| --- | --- | --- |
| Get details about Lightbits storage cluster | Lightbits lbcli CLI | lbcli get cluster, lbcli list nodes |
| | Lightbits REST | GET /api/v2/cluster, GET /api/v2/nodes |
| Verify network connectivity | Linux command | ping <IP of Lightbits Instance> |
| Connect to Lightbits cluster | Linux command | nvme connect <your Lightbits connection details> |
| Review block device details | Linux command | lsblk or nvme list |

Only cluster admins have access to cluster and node level APIs. Therefore, tenant admins should get all of the required connection details from their cluster admin.

Before You Begin

Before you begin the process to connect an application server to the Lightbits storage server, confirm that the following conditions are met:

  • A volume exists on the Lightbits storage server with the correct ACL (of the client or ALLOW_ANY).
  • A TCP/IP connection exists to the Lightbits storage server.
  • If you are a tenant admin, you should get all of the connection details from your cluster admin.

Clients need the following components to connect properly to volumes on the Lightbits cluster:

  • NVMe/TCP client-side drivers (included in Linux kernel 5.0 and later)
  • Discovery Client

This section includes:

  • Reviewing the Lightbits Storage Cluster Connection Details (Cluster Admin Only)
  • Verifying TCP/IP Connectivity
  • Connecting to the Lightbits Cluster
  • Reviewing Block Device Details on the Application Server
  • NVMe/TCP MultiPath

Reviewing the Lightbits Storage Cluster Connection Details (Cluster Admin Only)

The following table lists the details required for the nvme connect command on your application server. You can retrieve this information using the lbcli commands listed in the table (or their REST API equivalents).

Required Lightbits Storage Cluster Connection Details

| Item | Description | NVMe Connect Command Parameter | lbcli Command (To Get the Information) |
| --- | --- | --- | --- |
| Subsystem NQN | The Lightbits cluster that was used to create the volume. | -n | lbcli get cluster |
| Instance IP addresses | The IP addresses for all the nodes in the Lightbits cluster. | -a | lbcli list nodes |
| TCP ports | The TCP ports used by the Lightbits cluster nodes. | -s | lbcli list nodes |
| ACL string | The ACL used when you created the volume on the Lightbits storage cluster. | -q | lbcli get volume --name <volume_Name> |

Obtaining the Lightbits Cluster Subsystem NQN

On any Lightbits server, enter the lbcli get cluster command.

Sample Command

lbcli get cluster

The output includes the subsystem NQN for the Lightbits cluster. The Get Cluster API response also includes a list of API and Discovery Endpoint IPs.

Obtaining the Lightbits Node Data IP Addresses and TCP Ports

On any Lightbits server, enter the lbcli list nodes command.

Sample Command

lbcli list nodes

The NVMe-Endpoint field in the output lists the instance IP address and TCP port for each of the Lightbits cluster's nodes.

Obtaining the Volume ACL String

The ACL string is the ACL you used when you created the volume on the Lightbits storage cluster.

You can also review the list of existing volumes and their ACLs by executing list volumes or get volume (with a specific volume name/UUID) on any of the Lightbits servers.
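For example, either of the following commands shows the ACL information (a sketch; the volume name is a placeholder):

lbcli list volumes
lbcli get volume --name <volume_Name>

The output includes each volume's ACL, so you can confirm which client NQNs (or ALLOW_ANY) are permitted to connect.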

Verifying TCP/IP Connectivity

Before you run the nvme connect command on the application server, enter a Linux ping command to check the TCP/IP connectivity between your application server and the Lightbits storage cluster.

Sample Command

ping <IP of Lightbits Instance>

If the ping succeeds, the application server has a working network connection to the Lightbits storage instance.

It is recommended to repeat this check with all the IP addresses obtained from the list nodes command.
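For example, a short shell loop can run the check against every NVMe-Endpoint IP reported by lbcli list nodes; the addresses below are placeholders:

# Replace these addresses with the NVMe-Endpoint IPs from "lbcli list nodes".
for ip in 10.0.1.4 10.0.1.11 10.0.1.12; do
    ping -c 3 "$ip"
done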

Connecting to the Lightbits Cluster

With the IP, port, subsystem NQN and ACL values for your volume, you can execute the nvme connect command.

You must repeat the nvme connect command for each of the NVMe endpoints retrieved by the lbcli list nodes command.

Sample NVMe Connect Command

nvme connect -t tcp -a <instance IP address> -s <TCP port> -n <subsystem NQN> -q <ACL string>

Repeat this client procedure for each node in the cluster, and remember to use the correct NVMe-Endpoint for each node.

Add the --ctrl-loss-tmo -1 flag to allow unlimited reconnection attempts. This prevents a timeout from occurring when attempting to connect to a node that is in a failure state.
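For example, the same connection command with the flag added (a sketch; substitute your own connection values):

nvme connect -t tcp -a <instance IP address> -s <TCP port> -n <subsystem NQN> -q <ACL string> --ctrl-loss-tmo -1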

During the connection phase, the client system can crash if you use NVMe/TCP drivers that are not supported by Lightbits.

For more details on the NVMe CLI, see the NVME CLI Overview section of this document.

Currently, Lightbits only supports TCP for the transport type value.

The above connect command connects you to the primary node where the volume resides. It is recommended to have the Discovery Client installed on all clients. It automatically pulls the required information from the cluster (or from several clusters), discovers all the volumes the client has access to, and maintains high availability, so that if the primary fails, the optimized NVMe/TCP path moves to the new primary. See the Discovery-client Deployment section below for additional information.

After you have entered the nvme connect command, you can confirm the client's connection to the Lightbits cluster by entering the nvme list-subsys command.

Sample Command

nvme list-subsys

Next, review the block devices with the Linux lsblk command to see the newly connected NVMe/TCP block device.

Reviewing Block Device Details on the Application Server

After the nvme connect command completes, you can see the available block devices on the application server using the Linux lsblk command or the nvme list command.

The following example shows how to use the Linux lsblk command to list all block devices after the nvme connect command has been executed. The output includes all block devices on the client, including the Lightbits volumes the client has access to (all volumes whose ACL includes the client, and all volumes created with ALLOW_ANY).

Sample Command

lsblk

A 10GB NVMe/TCP device appears with the name nvme0n1. This name indicates that the device is:

  • Attached through NVMe subsystem 0 (nvme0)
  • The first namespace (volume) on that subsystem (n1)

Your Lightbits storage cluster is now initialized and ready for use.

You can configure your applications to use this NVMe/TCP connected volume as you would with any other Linux block device.
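For example, a minimal sketch of creating and mounting a filesystem on the new device (the device name and mount point are assumptions; verify the device with lsblk first):

# Create an ext4 filesystem on the newly connected volume and mount it.
mkfs.ext4 /dev/nvme0n1
mkdir -p /mnt/lightbits-vol
mount /dev/nvme0n1 /mnt/lightbits-vol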

Discovery-client Deployment

Clients use NVMe/TCP to connect to the cluster. In addition to the NVMe/TCP client-side drivers, the discovery-client service should run on the client machines. It manages the NVMe connections to the cluster and is responsible for discovering, connecting to, and handling changes in Lightbits clusters. It also provides ongoing nvme connect-all functionality against a remote cluster of Lightbits NVMe controllers, which keeps the client connected through cluster and node changes (scale up/down, node replacement, node failure, and so on).

The Discovery-client is a deployable service running under systemd.

This means the following:

  • The discovery-client maintains an updated list of NVMe-over-Fabrics discovery controllers. A change of the controllers in the Lightbits cluster will be reflected automatically in this list.
  • Discovery-client discovers available nvme-over-fabrics subsystems by running nvme discover commands against these discovery controllers. Discover commands are triggered either by an AEN (Asynchronous Event Notification) received from a remote discovery controller, or by a configuration file from the user that specifies new discovery endpoint(s).
  • Discovery-client automatically connects to available nvme-over-fabrics subsystems by running nvme connect commands.

You can configure the service via a configuration file.

Setting up the discovery-client on the Client VM

Clients should have the discovery-client installed to query the network and discover the Lightbits cluster.

This is supported natively on the following operating systems:

  • AlmaLinux 9
  • RHEL 9
  • Rocky Linux 9
  • Ubuntu 22.04
  • RHEL 8 (and the matching AlmaLinux/Rocky Linux releases)

Note that if the clients are deployed outside the Lightbits cluster's VNet, a VNet peering connection must be made between the client's VNet and the Lightbits cluster's VNet.

For detailed information on storage usage, refer to the Common Administration Tasks section of this guide.

Installing the discovery-client Service

  1. Verify that the nvme-tcp related modules are loaded by running the following command: lsmod | grep nvme

The expected result is that the nvme_tcp module appears in the output.

  2. If the kernel module is not loaded, load it using the command: modprobe nvme-tcp
  3. Install the discovery-client on the client host: yum install -y discovery-client
  4. Start the discovery-client service: systemctl start discovery-client
  5. Run connect-all, or set up the discovery-client configuration file (for persistent discovery), as described in the following sections.
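The steps above can also be run as a short shell sequence; this is a minimal sketch for a RHEL-family client and assumes the Lightbits repository is already configured:

# Load the NVMe/TCP kernel module if it is not already loaded.
modprobe nvme-tcp
lsmod | grep nvme

# Install and start the discovery-client service.
yum install -y discovery-client
systemctl enable --now discovery-client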

Configuring the discovery-client Service

To configure the discovery client:

  1. Get the list of the cluster's discovery endpoints and the cluster's subsystem NQN: lbcli get cluster -o json -J $LIGHTOS_JWT
  2. Make sure the discovery client is running and enabled: systemctl start discovery-client
  3. Add the discovery endpoints to the discovery client: discovery-client add-hostnqn --name=<config file name> -a <discovery-endpoints> -w <client’s ip> -n <cluster’s nqn>

For example:

sudo discovery-client add-hostnqn --name=discovery.conf -a 10.0.1.12:8009,10.0.1.11:8009,10.0.1.4:8009 -q test123 -n nqn.2016-01.com.lightbitslabs:uuid:fa085f73-686a-4bef-b8aa-d07659a5959a

--name = the name to give to the persistent configuration file - recommended to be <myfilename>.conf. The configuration file will usually be stored under: /etc/discovery-client/discovery.d/. The defined file name must be a proper file name that does not start with "tmp.dc."

-a = IP address of the discovery service endpoint (information in the get cluster API response). It is recommended to have a list of all of the endpoints (comma-delimited). When using discovery behind a load balancer, only the LB VIP and port should be used.

-q = Host NQN of the client (any unique name you want to give to the client - the NQN should be used in the ACL when creating a volume to limit volume access to specific clients).

-n = the cluster's subsystem NQN (found in the get cluster API response)

Optionally, restart the service: systemctl restart discovery-client (this is not mandatory).

Note that a client can be connected to multiple Lightbits clusters, which is why the cluster NQN is required in the connection details.

An alternative way is to run the connect-all command.

-a is one of the IPs of the cluster's discovery endpoints (all of this information is in the output of the get cluster API). There is no need to state the port, because by default it uses the discovery port, 8009.

-q is the client-nqn (alternatively you can use -w with the IP of the client).

-p will keep the connection persistent.

> discovery-client connect-all -t tcp -a <endpoint-iplb-api-or-dns> -q client1 -p


In order to get an actual connection, you will need a volume. Create it with lbcli create volume, using an ACL of ALLOW_ANY or the NQN you gave the client in the discovery configuration; a sketch follows.
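This is a minimal sketch of such a create volume call; the volume name, size, and flag spellings are assumptions, so consult lbcli create volume --help for the exact syntax in your version:

lbcli create volume --name vol1 --size 10GiB --acl ALLOW_ANY

Once the volume is created, you will be able to view the connected NVMe devices: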

> nvme list

Or:

> lsblk

You can now create a filesystem on the block device and mount it, or use it as a direct block device. Below is an example of a fio command to perform reads and writes to the block device from the client.
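This is an illustrative sketch only: the device path, block size, queue depth, and runtime are assumptions to adapt, and writing to the raw device will overwrite any data on it.

# Mixed random read/write test against the connected Lightbits volume.
fio --name=lightbits-rw-test --filename=/dev/nvme0n1 --rw=randrw --rwmixread=70 \
    --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
    --time_based --runtime=60 --group_reporting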

Persistent Multi-Cluster discovery-client Configuration

Once you run add-hostnqn, the NVMe/TCP discovery configuration is persistent. The configuration file is usually stored under:

/etc/discovery-client/discovery.d/<name>.conf

Because the discovery-client can work with multiple Lightbits clusters, it needs to know which discovery-service belongs to which cluster. This grouping is achieved by specifying the subsysnqn as an identifier for each entry. You can edit the file directly and add the required entries.

The following is an example input file:

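The entries below are a sketch: the data IP addresses and cluster NQN are placeholders (reusing the values from the add-hostnqn example above), and the client host NQN is host1, as described below.

-t tcp -a 10.0.1.4 -s 8009 -q host1 -n nqn.2016-01.com.lightbitslabs:uuid:fa085f73-686a-4bef-b8aa-d07659a5959a
-t tcp -a 10.0.1.11 -s 8009 -q host1 -n nqn.2016-01.com.lightbitslabs:uuid:fa085f73-686a-4bef-b8aa-d07659a5959a
-t tcp -a 10.0.1.12 -s 8009 -q host1 -n nqn.2016-01.com.lightbitslabs:uuid:fa085f73-686a-4bef-b8aa-d07659a5959a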

Where:

-t: Transport type (tcp)

-a: The IP of the Lightbits storage instance. All of the IPs can be retrieved from the command ‘lbcli list nodes’, under the NVMe endpoint column.

-s: The port used by the discovery-service (8009).

-q: The hostnqn/ACL name of the client. You can use the value stored in the file /etc/nvme/hostnqn, or specify any string (in the example the client is host1). This value must match the ACL attribute set in the ‘lbcli create volume’ command.

-n: The Lightbits cluster's subsystem NQN (can be retrieved from the get cluster API response, under Subsystem NQN).

  1. Restart the discovery-client service: systemctl restart discovery-client.
  2. Once the service is up, you can see the connected Lightbits nodes by running the command: discovery-client list ctrl.
  3. To see the volumes attached to the client, run the command: nvme list.

Updating the discovery-client Package

To update the discovery-client package on the clients, perform the following steps.

Using yum (CentOS/RedHat):

  1. Update the lightbits.repo file under /etc/yum.repos.d/ so that it points to the repository for the target version.
  2. Execute the Linux command: yum update -y discovery-client.x86_64.

Using RPM/dnf:

To set up the client repository, refer to the Client Software Installation section of the Lightbits Installation Guide.
