Host Configuration File Examples
This section covers various configuration examples and how they affect the host configuration files. Only the first host configuration file, server00.yml, is shown; the host configuration files for the remaining servers follow the same pattern. For more complex examples, two host configuration files are shown to make the configuration easier to understand.
Example 1: Data Network Interface Manually Configured
Host configuration with no data interfaces (data_ifaces) provided. The user configured the interfaces before running the playbook, so the playbook does not need to configure the IP on the data interface.
```yaml
---
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
```
Example 2: Data Network Interface Automatically Configured
Host configuration with a single data interface. The playbook configures the interface; the user did not configure the interface with the data IP beforehand.
```yaml
---
data_ifaces:
  - bootproto: static
    conn_name: ens1
    ifname: ens1
    ip4: 10.10.10.100/24
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
```
The example above shows the format for setting IPv4 addresses using ip4: ip/subnet. IPv6 addresses can be set using ip6: ip/prefix.
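For illustration, a hypothetical IPv6 variant of the data interface above might look like the following (the fd00:: address and /64 prefix are example values, not from the original configuration):

```yaml
data_ifaces:
  - bootproto: static
    conn_name: ens1
    ifname: ens1
    ip6: fd00:10:10:10::100/64   # illustrative ULA address with a /64 prefix
```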
Example 3: Override the Lightbits Configurations
Host configuration with a Lightbits override. The provided value overrides the key listen-client-urls.
```yaml
---
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
etcd_settings_user:
  listen-client-urls: http://127.0.0.1:2379,http://10.16.173.14:2379,http://192.168.16.219:2379
```
Example 4: Provide Custom Datapath Configuration
Host configuration with custom datapath configuration provided.
By default, the playbook inspects the remote machine and determines the directory containing the specific configuration for Duroslight and the backend services (the datapath configuration), excluding the node-manager configuration. The directory name is derived using the following logic:
<system_vendor>-<processor_count>-processor-<processor_cores>-cores
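As an illustration of this naming logic, the resolved directory name can be sketched as follows. The vendor string and counts below are hypothetical values, not taken from any real host; the playbook gathers the real values from the target machine.

```shell
# Illustrative only: compose the datapath configuration directory name
# from hypothetical fact values.
system_vendor="Dell Inc."   # hypothetical system vendor string
processor_count=2           # hypothetical processor (socket) count
processor_cores=24          # hypothetical cores per processor
echo "${system_vendor}-${processor_count}-processor-${processor_cores}-cores"
# prints: Dell Inc.-2-processor-24-cores
```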
```yaml
---
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
datapath_config: custom-datapath-config
```
Example 5: Use the Linux Volume Manager (LVM) Partition for etcd Data
Host configuration with a custom LVM partition for etcd data.
```yaml
---
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
use_lvm_for_etcd: true
etcd_lv_name: etcd
#etcd_settings_user:
etcd_lv_size: 15GiB
etcd_vg_name: Alma
```
Example 6: Profile-Generator Overrides
Profile-generator output can be overridden by providing, for each server, a custom file that profile-generator will use as the system profile.
Each host can be different, so each host can specify its own override file.
```yaml
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    storageDeviceLayout:
      initialDeviceCount: 6
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        - partition == false
        - size >= gib(300)
profile_generator_overrides_dir: /tmp/overrides.d/server00
```
If the cluster is homogeneous and the same override should apply to all nodes, a single setting can be provided in the groups/all.yml file, or via the command line with:
```shell
ansible-playbook -i ansible/inventories/cluster_example/hosts playbooks/deploy-lightos.yml -e profile_generator_overrides_dir=/tmp/overrides.d
```
In the above example, we specify profile_generator_overrides_dir, which is a directory on the Ansible Controller that is copied to the target machine.
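For the homogeneous case, the equivalent single setting in the groups/all.yml file might look like the following sketch (the directory path is illustrative):

```yaml
# groups/all.yml — apply one override directory to every node (illustrative path)
profile_generator_overrides_dir: /tmp/overrides.d
```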
Example 7: Dual Instance Configuration
In a dual-instance configuration, Lightbits runs on both NUMA nodes of the CPU.
Below is an example configuration for server00.yml and server01.yml.
In this configuration, NUMA 0 and NUMA 1 on each server have 12 NVMe drives apiece, for a total of 24 NVMe drives per server.
Each server has a single management IP configured (not shown below) and two data IPs (one for each NUMA node). Each instance has a unique data IP set for it.
server00.yml
```yaml
---
name: server00
nodes:
  - instanceID: 0
    data_ip: 172.16.10.10
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 12
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
        # - name =~ "nvme0n1"
  - instanceID: 1
    data_ip: 172.16.20.10
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 12
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
```
server01.yml
```yaml
---
name: server01
nodes:
  - instanceID: 0
    data_ip: 172.16.10.11
    failure_domains:
      - server01
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 12
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
        # - name =~ "nvme0n1"
  - instanceID: 1
    data_ip: 172.16.20.11
    failure_domains:
      - server01
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 12
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
```
Example 8: Single IP Dual NUMA Configuration
This example is for a Single IP Dual Instance configuration. For more information, see Single-IP-Dual-NUMA Configuration.
Below is an example of a full server00.yml with a physical configuration similar to the example above. However, instead of using two data IPs, it uses only one data IP for both of its instances.
Unlike the dual-instance configuration in the example above, which has a unique data IP per instance, here each instance on a server shares the same IP.
```yaml
---
name: server00
nodes:
  - instanceID: 0
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 12
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
        # - name =~ "nvme0n1"
  - instanceID: 1
    data_ip: 10.10.10.100
    failure_domains:
      - server00
    ec_enabled: true
    lightfieldMode: SW_LF
    storageDeviceLayout:
      initialDeviceCount: 12
      maxDeviceCount: 12
      allowCrossNumaDevices: false
      deviceMatchers:
        # - model =~ ".*"
        - partition == false
        - size >= gib(300)
```