Host Configuration File Variables

Each host configuration file includes some basic configuration variables.

See Host Configuration File Examples for examples of how these variables are used in a host configuration file.

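For orientation, the following is a minimal, hypothetical sketch of how some of the variables described in the table below might appear together in a host configuration file. The host name, addresses, connection name, file path, and the treatment of data_ifaces as a list are illustrative assumptions; refer to Host Configuration File Examples for the authoritative layout.

```yaml
# Hypothetical host configuration sketch (host name, addresses, and values are
# illustrative; see Host Configuration File Examples for the authoritative layout).
name: server00                  # unique, user-friendly node name
data_ip: 10.10.10.100           # data/etcd IP; address only, no subnet or prefix
failure_domains:                # optional; defaults to an empty list
  - rack01
data_ifaces:                    # optional; omit if the data link is already up and configured
  - bootproto: static           # only static is supported
    conn_name: data0            # assumed connection name
    ifname: eth0                # interface dedicated to the data path
    ip4: 10.10.10.100/24        # "IP/subnet" for the interface named in ifname
datapath_config: /path/to/system-profile.yml  # path to the system-profile yml file (assumed path)
auto_reboot: true               # set to false to prevent automatic reboot after installation
```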

Host Configuration File Variable Notes

Variable | Required | Description
data_ifaces | No | If provided, the Ansible playbook configures the interface. The configuration is permanent and results in a new ifcfg-<iface-name> file. If this variable is not provided, no action is taken; the playbook assumes that the interfaces are valid and that the link is up and configured. If data_ifaces is used, you must also use the bootproto, conn_name, ifname, and ip4 variables.
bootproto | No | IP allocation type (dynamic or static). Only static is supported in Lightbits 2. Default value: static.
ifname | No | The interface name, such as eth0 or enp0s2, that is dedicated to the data path and configured with the address in the ip4 variable.
ip4 | No | The data IPv4 address and subnet to set for the interface named in the ifname field. The format is "IP/subnet". Example: "10.10.10.100/24"
ip6 | No | The data IPv6 address and prefix to set for the interface named in the ifname field. The format is "IP/prefix". Example: "2001:0db8:0:f101::1/64"
data_ip | Yes | The data/etcd IP used to connect to other nodes. Only the address is required, not the subnet or prefix.
instances | Yes | A list of instance IDs, one for each logical data-path instance.
failure_domains | No | A list of the servers that share a network, power supply, or physical location and are therefore affected together when a network, power, cooling, or other critical service experiences problems. For more information, see the Defining Failure Domains procedure. Default value: empty list.
instanceID | Yes | A unique number assigned to this logical node. Only two logical nodes per server are supported in Lightbits, so the value is either "0" or "1".
storageDeviceLayout | Yes | The storageDeviceLayout key, under the node-specific settings, groups the information required to detect the initial storage configuration of the node.
initialDeviceCount | Yes | The initial count of physical drives the system starts with on first startup. If fewer devices are found, the startup sequence fails. If more devices are found, a subset of the specified size is automatically selected and used. If initialDeviceCount is unspecified, the storage inventory includes min(current_discovered_devices, maxDeviceCount). initialDeviceCount cannot be larger than maxDeviceCount.
maxDeviceCount | Yes | The pre-determined maximum number of physical NVMe drives that this node can contain; that is, the number of NVMe drive slots available for this node.
allowCrossNumaDevices | No | An optional setting specifying whether block devices can be used by instances affiliated with a different NUMA ID than the one the block device is attached to. Default value: false; that is, devices attached to a different NUMA node are not used by this node. Change this to true in the hosts file to allow cross-NUMA devices.
deviceMatchers | No | A list of matching conditions used to locate the physical drives to be used by the system (see the sketch following this table).
partition | No | Whether or not a device is a partition (true/false).
model | No | The vendor/model of the device. For example: model =~ "SAMSUNG*" or model == "SAMSUNG 111-222".
serial | No | The serial identifier of the device. For example: serial == "S3H11112" or serial =~ "S3H1.*"
size | No | The capacity of the physical device. For example: size >= mib(1000), size == gib(20), size <= tib(50).
ec_enabled | Yes | Enables Erasure Coding (EC) for protecting against SSD failure within the storage server. Normal operation continues during reconstruction when a drive is removed. At least six NVMe devices must be in the node for erasure coding to be enabled.
name | Yes | A unique, user-friendly name for the node.
use_lvm_for_etcd | No | Use the Linux Volume Manager (LVM) partition for etcd data. Default value: false. Note: If this variable is not used in the host configuration file, the system uses the default value. The following etcd variables are only relevant if the use_lvm_for_etcd variable value is true.
etcd_lv_name | No | Logical volume name for etcd data local volume management.
etcd_settings_user | No | Key-value map for overriding the etcd service settings.
etcd_lv_size | No | Logical volume size for etcd data local volume management.
etcd_vg_name | No | Volume group name for etcd data local volume management. Mandatory if use_lvm_for_etcd is used.
datapath_config | Yes | The path to the system-profile yml file.
etcd_settings_user | No | User etcd settings; see the key-value map described above.
listen-client-urls | No | An etcd setting that can be overridden through etcd_settings_user. Default value: http://127.0.0.1:2379
profile_generator_overrides_dir | No | Directory path containing a <system-profile>.yaml file that overrides the one generated by the profile-generator.
auto_reboot | No | If set to false, the system does not automatically reboot after installation.
inject_jwt_to_nodes | No | Whether to inject the system-admin JWT into every Lightbits server at /etc/lbcli/lbcli.yaml.
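To illustrate how the storageDeviceLayout fields and the deviceMatchers conditions fit together, here is a hedged sketch. The drive counts and the model pattern are assumptions, and the matcher expressions follow the syntax shown in the rows above; the actual layout should be taken from Host Configuration File Examples.

```yaml
# Illustrative storageDeviceLayout sketch (counts and model pattern are assumptions).
storageDeviceLayout:
  initialDeviceCount: 6            # startup fails if fewer than 6 drives are found
  maxDeviceCount: 12               # number of NVMe drive slots in this node
  allowCrossNumaDevices: false     # do not use drives attached to a different NUMA node
  deviceMatchers:
    - partition == false           # whole devices only, not partitions
    - model =~ "SAMSUNG.*"         # assumed vendor/model pattern
    - size >= gib(300)             # only drives of at least 300 GiB
```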
When using LVM for etcd data (use_lvm_for_etcd: true), note the following; see also the sketch after this list:
  • The user must provide the etcd volume group name in the etcd_vg_name variable, and confirm that there is enough server space to create a new logical volume. The default logical volume name (etcd_lv_name) is "etcd" and the default volume size (etcd_lv_size) is 10 GB.
  • If there is not enough space in the server, the user must reduce the other logical volume sizes before the cluster software installation to allocate the required space. For more details, see https://www.rootusers.com/lvm-resize-how-to-decrease-an-lvm-partition.
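For example, assuming these variables sit at the top level of the host configuration file, an LVM-backed etcd volume might be declared as in the sketch below; the volume group name and the size notation are assumptions to be confirmed against the examples page.

```yaml
# Illustrative etcd-on-LVM settings (volume group name and size notation are assumptions).
use_lvm_for_etcd: true     # place etcd data on an LVM logical volume
etcd_vg_name: vg_system    # mandatory when use_lvm_for_etcd is used; assumed VG name
etcd_lv_name: etcd         # default logical volume name
etcd_lv_size: 10G          # default size is 10 GB
```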