Host Configuration File Variables
Each host configuration file includes some basic configuration variables.
See Host Configuration File Examples to see how these variables are used in a host configuration file.
- data_ifaces
  - bootproto
  - ifname
  - ip4
  - ip6
- instances
  - instanceID
  - data_ip
  - failure_domains
  - ec_enabled
  - storageDeviceLayout
    - initialDeviceCount
    - maxDeviceCount
    - allowCrossNumaDevices
    - deviceMatchers
      - model =~
      - partition ==
      - size >=
- use_lvm_for_etcd
- etcd_lv_name
- etcd_settings_user
- etcd_lv_size
- etcd_vg_name
- auto_reboot
- datapath_config
- listen-client-urls
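As a quick orientation only, the sketch below shows roughly how the per-node variables might nest in a host configuration file. The addresses, device counts, and matcher values are placeholders, not recommendations; the authoritative layout is in Host Configuration File Examples.

```yaml
# Illustrative sketch only; see Host Configuration File Examples for the
# authoritative layout. Addresses, counts, and matcher values are placeholders.
instances:
  - instanceID: 0
    data_ip: 10.10.10.100          # address only, no subnet or prefix
    failure_domains:
      - rack01                     # placeholder failure domain name
    ec_enabled: true
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        - partition == false
        - size >= gib(300)
data_ifaces:
  - bootproto: static
    ifname: eth0
    ip4: "10.10.10.100/24"
```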
Host Configuration File Variable Notes
Variable | Required | Description |
---|---|---|
data_ifaces | No | If provided, the Ansible playbook configures the interface. The configuration is permanent and results in a new ifcfg-<iface-name> file. If this variable is not provided, no action is taken; the playbook assumes that the interfaces are valid and that the link is up and configured. If data_ifaces is used, you must also use the bootproto, conn_name, ifname, and ip4 variables. |
bootproto | No | IP allocation type (dynamic or static). Only static is supported in Lightbits 2. Default value: static. |
ifname | No | The name of the interface, such as eth0 or enp0s2, that is dedicated to the data path and assigned the address in the ip4 variable. |
ip4 | No | The data IPv4 address and subnet to set for the interface mentioned in the ifname field. The format is "IP/subnet". Example: "10.10.10.100/24" |
ip6 | No | The data IPv6 address and prefix to set for the interface mentioned in the ifname field. The format is "IP/prefix". Example: "2001:0db8:0:f101::1/64" |
data_ip | Yes | The data/etcd IP used to connect to other nodes. The subnet or prefix is not required, only the address. |
instances | Yes | A list of instance IDs, one for each logical data-path instance. |
failure_domains | No | A list of servers that share a network, power supply, or physical location and are therefore affected together when the network, power, cooling, or another critical service experiences problems. For more information, see the Defining Failure Domains procedure. Default value: empty list. |
instanceID | Yes | A unique number assigned to this logical node. Only two logical nodes per server are supported in Lightbits, so the value is either 0 or 1. |
storageDeviceLayout | Yes | The storageDeviceLayout key, under the node-specific settings, groups the information required to detect the initial storage configuration of the node. |
initialDeviceCount | Yes | The number of physical drives the system starts with on first startup. If fewer devices are found, the startup sequence fails. If more devices are found, a subset of the specified size is automatically selected and used. If initialDeviceCount is unspecified, the storage inventory will include min(current_discovered_devices, maxDeviceCount). initialDeviceCount cannot be larger than maxDeviceCount. |
maxDeviceCount | Yes | The predetermined maximum number of physical NVMe drives that this node can contain; that is, the number of NVMe drive slots available for this node. |
allowCrossNumaDevices | No | An optional setting that specifies whether this node can use block devices attached to a NUMA node other than the one associated with its instance ID. Default value: false (devices attached to a different NUMA node are not used by this node). To allow cross-NUMA devices, set this to true in the hosts file. |
deviceMatchers | No | This section contains a list of matching conditions used to select the physical drives that the system will use. |
partition | No | Whether or not a device is a partition (true/false). |
model | No | The vendor/model of the device. For example: model =~ "SAMSUNG*" or model == "SAMSUNG 111-222". |
serial | No | The serial identifier of the device. For example: serial == "S3H11112" or serial =~ "S3H1.*" |
size | No | The capacity of the physical device. For example: size >= mib(1000), size == gib(20), size <= tib(50). |
ec_enabled | Yes | Enables Erasure Coding (EC) for protecting against SSD failure within the storage server. Normal operation continues during reconstruction when a drive is removed. At least six NVMe devices must be in the node for erasure coding to be enabled. |
name | Yes | A unique, user-friendly name for the node. |
use_lvm_for_etcd | No | Use a Linux Volume Manager (LVM) partition for etcd data. Default value: false. Note: If this variable is not used in the host configuration file, the system uses the default value. The following etcd variables are only relevant if the use_lvm_for_etcd variable value is true. |
etcd_lv_name | No | Logical volume name for etcd data local volume management. |
etcd_settings_user | No | Key-value map for overriding the etcd service settings. |
etcd_lv_size | No | Logical volume size for etcd data local volume management. |
etcd_vg_name | No | Volume group name for etcd data local volume management. Mandatory if use_lvm_for_etcd is set to true. |
datapath_config | Yes | The path to the system-profile YAML file. |
etcd_settings_user | No | User-defined etcd settings, such as listen-client-urls (see the sketch after this table). |
listen-client-urls | No | The etcd client listen URL. Default value: http://127.0.0.1:2379 |
profile_generator_overrides_dir | No | Directory path containing a <system-profile>.yaml file that overrides the one generated by the profile generator. |
auto_reboot | No | If set to false, the system will not automatically reboot after installation. |
inject_jwt_to_nodes | No | Whether to inject the system-admin JWT into every Lightbits server at /etc/lbcli/lbcli.yaml. |
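As a rough illustration of the cluster-level variables at the bottom of the table, the sketch below shows how they might appear together. The file paths are placeholders, not shipped defaults; the only value taken from the table is the listen-client-urls default.

```yaml
# Illustrative sketch only; the paths below are placeholders, not defaults.
datapath_config: /path/to/system-profile.yml          # placeholder path to the system-profile YAML file
profile_generator_overrides_dir: /path/to/overrides   # placeholder; directory holding a <system-profile>.yaml override
auto_reboot: false            # do not reboot automatically after installation
inject_jwt_to_nodes: true     # write the system-admin JWT to /etc/lbcli/lbcli.yaml on each server
etcd_settings_user:
  listen-client-urls: http://127.0.0.1:2379            # default value from the table above
```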
- The user must provide the etcd volume group name in the etcd_vg_name variable, and confirm that there is enough space on the server to create a new logical volume. The default logical volume name (etcd_lv_name) is "etcd" and the default volume size (etcd_lv_size) is 10GB.
- If there is not enough space on the server, the user must reduce the other logical volume sizes before the cluster software installation to allocate the required space. For more details, see https://www.rootusers.com/lvm-resize-how-to-decrease-an-lvm-partition.
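A minimal sketch of the LVM-related etcd variables, assuming use_lvm_for_etcd is enabled. The volume group name and the exact size syntax are placeholders; only the default name "etcd" and the 10GB default size come from the note above.

```yaml
# Illustrative sketch only; the volume group name and size syntax are placeholders.
use_lvm_for_etcd: true
etcd_vg_name: vg_data      # placeholder; must be an existing volume group with enough free space
etcd_lv_name: etcd         # default logical volume name per the note above
etcd_lv_size: 10G          # placeholder syntax; the default size is 10GB
```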