Volume Placement Feature

With Volume Placement, you can specify which failure domain a volume is placed on when it is created. The idea behind Volume Placement is to statically define, separate, and manage how volumes and data tolerate a failure and provide availability - through the failure of a server, rack, row, power grid, and so on. By default, Lightbits defines server-level failure domains. To define custom failure domains, see the Failure Domains article in the Lightbits Administration Guide.

Note that this feature is associated with the create volume command.

Current feature limitations of Volume Placement include the following:

  • Available only for single replica volumes.
  • Only failure domain labels are currently matched (e.g., fd:server00).
  • Up to 25 affinities can be specified.
  • Value (failure domain name) is limited to up to 100 characters.
  • Volume Placement cannot be specified for a clone (a clone is always placed on the same nodes as the parent volume/snapshot).
  • Dynamic rebalancing must be disabled.

In Lightbits 2.3.8 and above, the placement-affinity flag is available; it can be used as described below.

In this example we are creating a 55 GiB volume and placing it on server02 as the primary server.

First, disable the dynamic rebalancing features:

Disable fail-in-place mode:

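A sketch of the command follows. The exact lbcli subcommand for toggling fail-in-place can differ between releases, so treat the form shown as an assumption and verify it against the Lightbits Administration Guide for your version.

```shell
# Disable fail-in-place so the cluster does not automatically rebuild
# data elsewhere while the volume is pinned to a specific server.
# (Subcommand form is illustrative; confirm with `lbcli --help`.)
lbcli disable feature fail-in-place
```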

Disable proactive rebalance mode:

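As with fail-in-place, the subcommand below is a sketch; confirm the exact form for your release.

```shell
# Disable proactive rebalancing so the cluster does not migrate the
# volume away from the server chosen via placement affinity.
# (Subcommand form is illustrative; confirm with `lbcli --help`.)
lbcli disable feature proactive-rebalance
```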

Create volume:

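A hedged example of the create volume call. The placement-affinity flag and the fd: label format come from this article; the volume name (vol1), ACL (acl1), and exact size syntax are placeholders, so adjust them to your environment. Per the limitations above, the volume must be single-replica.

```shell
# Create a single-replica 55 GiB volume pinned to the failure domain
# labeled fd:server02 (name and ACL below are placeholders).
lbcli create volume \
  --name=vol1 \
  --acl=acl1 \
  --size="55 GiB" \
  --replica-count=1 \
  --placement-affinity="fd:server02"
```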


List the nodes to choose the Lightbits node on which you would like to place the volume (in this example, we will choose server02):

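A sketch of the listing step. The output columns vary by release, but the node/server names shown map to the server-level failure domain labels (e.g., fd:server02) used above.

```shell
# List the cluster's nodes to pick a placement target.
lbcli list nodes
```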


From the client server, check that the NVMe device was created and that it is served by the correct Lightbits node:

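On the client, standard nvme-cli commands can confirm this; a minimal sketch, assuming nvme-cli is installed:

```shell
# Confirm the NVMe device exists on the client.
sudo nvme list
# Show the subsystem and controller paths to verify which Lightbits
# node (here, server02) is serving the volume.
sudo nvme list-subsys
```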