Host Server Configuration

This article details how to define configuration files for each "Ansible Host" (server) in the cluster.

Return to the ~/light-app/ansible/inventories/cluster_example directory that you created in Inventory Structure and Adding the Ansible Hosts File.

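For example, assuming the inventory directory created earlier in this guide:

```bash
cd ~/light-app/ansible/inventories/cluster_example
```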

From this path, we will edit each of the .yml files in the ~/light-app/ansible/inventories/cluster_example/host_vars subdirectory. In our example cluster, the three Lightbits storage nodes are defined by the following files:

  • host_vars/server00.yml
  • host_vars/server01.yml
  • host_vars/server02.yml

In each of the host variable files, update the following required variables.

Required Variables for the Host Variable File

  • name - The cluster server's name. Example: serverXX. Must match the filename (without the extension) and the server names configured in the "hosts" file.
  • instanceID - The logical node in this server to which the per-node configuration parameters apply. Lightbits currently supports up to two logical nodes per server.
  • ec_enabled (per logical node) - Enables Erasure Coding (EC), which protects against SSD failure within the storage server so that IO continues without interruption; normal operation continues during reconstruction when a drive is removed. The node must contain at least six NVMe devices for erasure coding to be enabled.
  • failure domains (per logical node) - The servers that share a network, power supply, or physical location and are therefore affected together when network, power, cooling, or another critical service experiences a problem. Different copies of the data are stored in different FDs to keep the data protected from such failures. To specify the servers in the FD, add the server names. For additional information, see Defining Failure Domains.
  • data_ip (per logical node) - The data IP used to connect to the other servers. Can be IPv4 or IPv6.
  • storageDeviceLayout (per logical node) - Sets the SSD configuration for a node. This includes the number of initial SSD devices, the maximum number of SSDs allowed, whether devices may span NUMA nodes, and memory partitioning and total capacity. For additional information, see Setting the SSD Configuration.
  • allowCrossNumaDevices - Leave this setting as "false" if all of the NVMe drives counted for this instance are in the same NUMA node. Set it to "true" if this instance must cross NUMA nodes to access its NVMe drives.
  • deviceMatchers - Determines which NVMe drives are considered for data and which are ignored. For example, if the OS drive is an NVMe drive, it can be excluded using the name option. The default settings work well for most deployments, counting only NVMe drives larger than 300 GiB that have no partitions.
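
For example, a deviceMatchers list along these lines keeps the default size and partition rules and shows how an NVMe OS drive could be excluded by name. The expression syntax here is a sketch; confirm it against Host Configuration File Variables and Setting the SSD Configuration.

```yaml
deviceMatchers:
# Default-style rules: use only unpartitioned NVMe drives larger than 300 GiB for data.
- partition == false
- size >= gib(300)
# Hypothetical example: skip an NVMe OS drive by its device name.
# - name != "nvme0n1"
```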

Filling Out Drive Information

To fill out the initialDeviceCount for each instance, check the NUMA placement of the NVMe drives. The command below shows which NUMA node each NVMe drive belongs to, and you can update your table with this information. Note that the main example in this installation section assumes that each server has six NVMe drives in NUMA 0. This is configured as initialDeviceCount 6 and allowCrossNumaDevices false (all of the drives are in the same NUMA node, so cross-NUMA access is not required). maxDeviceCount is configured as 12, because this NUMA node has a total of 12 available SSD slots.

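One way to check this is to read each controller's numa_node attribute from sysfs; the exact paths can vary by environment, but a sketch along these lines prints each NVMe controller followed by its NUMA node:

```bash
# Print each NVMe controller followed by "numa_node <id>".
for c in /sys/class/nvme/nvme*; do
  printf '%s numa_node %s\n' "$(basename "$c")" "$(cat "$c/device/numa_node")"
done
```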

The example output shown after the following list is unrelated to the cluster that we are demonstrating for installation. However, it is useful for showing how to interpret the command output and how to apply the configuration in a dual-NUMA situation: it shows six drives in NUMA 0 and six drives in NUMA 1, with the value after numa_node indicating the NUMA ID. Based on this output, the following configurations are possible for this server:

  • One instance could be configured with an initialDeviceCount of 12 and allowCrossNumaDevices set to true (because the instance would need cross-NUMA communication to reach some of its drives). Assuming this server has 24 drive slots, configure maxDeviceCount to 24.
  • Alternatively, configure instance 0 with an initialDeviceCount of 6 and instance 1 with an initialDeviceCount of 6, with allowCrossNumaDevices set to false on both instances. Assuming this server has 24 drive slots evenly split between the two NUMA nodes, configure maxDeviceCount to 12 for both instances.
  • Other configurations are possible as long as they are valid, do not exceed the actual device count, and meet the requirements. See Host Configuration File Variables for additional information.
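
An illustrative output of this shape (hypothetical drive names, six drives in each NUMA node, matching the sysfs sketch above) would be:

```
nvme0 numa_node 0
nvme1 numa_node 0
nvme2 numa_node 0
nvme3 numa_node 0
nvme4 numa_node 0
nvme5 numa_node 0
nvme6 numa_node 1
nvme7 numa_node 1
nvme8 numa_node 1
nvme9 numa_node 1
nvme10 numa_node 1
nvme11 numa_node 1
```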

The cluster details table from your installation planning is useful when filling in these parameters (a sample is shown below).

Ensure that the designated drives do not contain any partitions.
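
For example, lsblk can be used to verify this; partitions appear as TYPE "part" beneath a drive:

```bash
# List the NVMe namespaces; data drives should show only TYPE "disk" entries.
lsblk -o NAME,SIZE,TYPE /dev/nvme*n1
```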

Installation Planning Table Sample

The following is an example for three Lightbits servers in a cluster with a single client. Each Lightbits server comes with an initial set of six NVMe drives and can be expanded to up to 12 drives.

Server Name | Role | Management Network IP | Data NIC Interface Name | Data NIC IP | NVMe Drives
server00 | Lightbits Storage Server 1 | 192.168.16.22 | ens1 | 10.10.10.100 | 6
server01 | Lightbits Storage Server 2 | 192.168.16.92 | ens1 | 10.10.10.101 | 6
server02 | Lightbits Storage Server 3 | 192.168.16.32 | ens1 | 10.10.10.102 | 6
client00 | client | 192.168.16.45 | ens1 | 10.10.10.103 | N/A

The following are three examples for the three host variable files.

server00.yml

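The following sketch shows how the variables from the table above might be combined for server00, using the data IP and drive counts from the planning table. The nodes list wrapper shown here is an assumption; confirm the exact schema against Host Configuration File Variables.

```yaml
# host_vars/server00.yml (sketch; confirm against Host Configuration File Variables)
name: server00
nodes:
- instanceID: 0
  data_ip: 10.10.10.100
  failure_domains:
  - server00
  ec_enabled: true
  storageDeviceLayout:
    initialDeviceCount: 6
    maxDeviceCount: 12
    allowCrossNumaDevices: false
    # deviceMatchers are typically left at the defaults described above.
```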

server01.yml

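Under the same assumptions, server01.yml differs only in the server name, failure domain, and data IP:

```yaml
name: server01
nodes:
- instanceID: 0
  data_ip: 10.10.10.101
  failure_domains:
  - server01
  ec_enabled: true
  storageDeviceLayout:
    initialDeviceCount: 6
    maxDeviceCount: 12
    allowCrossNumaDevices: false
```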

server02.yml

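And server02.yml, again under the same assumptions:

```yaml
name: server02
nodes:
- instanceID: 0
  data_ip: 10.10.10.102
  failure_domains:
  - server02
  ec_enabled: true
  storageDeviceLayout:
    initialDeviceCount: 6
    maxDeviceCount: 12
    allowCrossNumaDevices: false
```
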
  • See Host Configuration File Variables for the entire list of variables available for the host variable files.
  • You can also reference additional host configuration file examples.
  • Typically the servers should already be configured with the data_ip. However, the Ansible playbook can also configure the data NIC IP; for that, add a data_ifaces section with the data interface name, as in the sketch after this list. For additional information, see Configuring the Data Network. The article on Setting the SSD Configuration also shows an example of this configuration.
  • If you need to create a separate partition for etcd data on the boot device, see etcd Partitioning.
  • Based on the placement of SSDs in the server, check if you need to make a change in the client profile to permit cross-NUMA devices.
  • Starting from Version 3.1.1, data IP can be IPv6. For example: data_ip: 2600:80b:210:440:ac0:ebff:fe8b:ebc0
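
A sketch of such a data_ifaces section, assuming the ens1 interface and addressing from the planning table; the key names and prefix length here are assumptions, so confirm the exact format against Configuring the Data Network:

```yaml
# Optional: let the playbook configure the data NIC (sketch only).
data_ifaces:
- ifname: ens1            # data NIC interface name from the planning table
  ip4: 10.10.10.100/24    # data IP for this server (prefix length assumed)
```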