Known Issues in Lightbits 3.6.1
ID | Description |
---|---|
38043 | If encryption was turned on but enabling it failed - resulting in the creation of an 'EnableServerEncryptionFailed' event - the API service will return stale events: any event created from that point onward will not be returned by the "ListEvents" API. As a workaround, check whether this event exists before upgrading to 3.14/3.15.1. Note that a similar issue could also occur when a cluster has a double disk failure on one of its servers (or a single disk failure with no EC) and Lightbits 3.2.x or older was used at the time of the failure. |
37831 | In some cases, silent data corruption on an SSD could cause a node crash instead of attempting to recover the data and reporting an event. This can occur if the SSD returns invalid data rather than an I/O error. |
37205 | Incorrect handling of IO errors from NVMe SSDs during abrupt recovery may cause node recovery to fail. |
36722 | Users can reference the NVMe device by its path name (e.g., /dev/nvme0n1) - as used during the initial system setup - to determine the storage SSD used by servers in the Lightbits storage cluster. However, this could lead to data loss since device names are not persistent across reboots. |
36282 | |
36090 | Due to a rare internal error involving long network disconnections, nodes might lose service and stay in the Inactive state - even though the node should be active. |
35837 | With a single-instance service on a machine that has multiple NUMA nodes with memory, memory stress can occur and the kernel will try to perform memory reclamation. This can lead to startup failures in the duroslight service, with the node staying inactive. |
34970 | api-service could become unresponsive if it loses connectivity to etcd during its startup. Workaround: Restart api-service using systemctl restart api-service. |
33173 | data-layer: under rare conditions, deleting snapshots could lead to rebuilds not starting and volumes being stuck in degraded/read-only protection states. This issue is fixed in 3.9.1 and 3.8.3 and upgrading is strongly recommended. |
32950 | node-manager: there is an extremely rare race in the service's startup flow that, combined with a very rare chain of migrations of the same volume, may in theory lead to incomplete rebuilds. This issue is fixed in 3.9.1 and 3.8.3, and upgrading is strongly recommended. |
32402 | When installing Lightbits versions prior to 3.8.1 on RHEL 9 distros, a required dependency is missing and must be installed manually; e.g., yum install fio-engine-libaio -y. Without this step, the installation could fail. |
32168 | Front-end metrics such as lightbox_fe_nr_read__ and lightbox_fe_nr_write__ may not be collected and exported correctly on servers that have IPv6 support disabled. |
31071 | Installation fails on RHEL 9.x (or similar) distros on systems with NVMe boot device. Workaround: Remove the "load our nvme drive" section from the installation role in Ansible. |
29683 | Systems with Solidigm/Intel D5-P5316 drives may experience higher than expected write latency after several drive write cycles. Contact Lightbits Support if you use Solidigm/Intel D5-P5316 SSDs and are experiencing higher than expected write latency. |
25382 | Under certain conditions, the amount of storage occupied by cold units (each filled with 4096 small objects) is not accounted for and not reported, which may result in reaching a storage full or almost full situation that is not observable in the node storage statistics. When such a situation occurs, the control plane software does not detect that storage capacity has reached the threshold to start proactive rebalancing to free capacity. Furthermore, the system administrator relies on the same storage statistics the control plane exposes, and therefore cannot tell that the system capacity is reaching its limit. |
22582 | A server could remain in "Enabling" state if the enable server command is issued during an upgrade. |
19670 | The compression ratio returned by the get-cluster API will be incorrect when the cluster has snapshots created over volumes. The calculation of the compression ratio at the cluster level uses different logic for physical used capacity and for the amount of uncompressed data written to storage, so the reported value might be higher than the actual value. A correct indication of cluster-level compression can be deduced from a weighted average of the compression ratios at the node level; i.e., compression ratio = sum(node compression ratio * node physical usage) / sum(node physical usage) (see the worked example after this table). |
18966 | "lbcli list events" could fail with "received message larger than max" when there are events that contain a large amount of information. Workaround: Use the --limit and --since arguments to read a smaller amount of data at a time (see the usage example after this table). |
18948 | The node local rebuild progress (due to SSD failure) shows 100% done when there is no storage space left to complete the rebuild. |
18771 | When a Lightbits node is considered to be in a "permanent failure" state, Lightbits considers the cluster to be smaller. This affects the minimum allowed replica count for new volumes. For example, in a three-node cluster with one permanently failed node, users will not be able to create a three-replica volume. |
18522 | When attempting to add a server to a cluster using lbcli 'create server' or a REST POST to '/api/v2/servers', and the operation fails for any reason, 'list servers' could permanently show the new server in the 'Creating' state. |
18214 | Automatic rebalancing features (fail-in-place and proactive-rebalance) should be disabled if enable_iptables is enabled during installation. |
17398 | Metrics scraping in Lightbits may take more than 10 seconds, depending on the number of volumes and snapshots. The default scrape_timeout and scrape_interval should be increased if metrics are not collected by Prometheus (see the configuration example after this table). |
17329 | Lightbits exposes latency information per request size. The time window for latency measurement is not synchronized with the measurement of nr read/write requests. Therefore a weighted average calculation of latency over all request sizes will result in inaccurate latency information. |
17298 | The migration of volumes due to automatic rebalancing could take time, even when volumes are empty. |
15715 | During volume rebuild, Grafana dashboard does not show the write IOs for the recovered data. |
15037 | With the IP Tables feature enabled, adding a new node requires opening the etcd ports for that node using the "lbcli create admin-endpoint" command. |
14995 | A single-server cluster cannot be upgraded using the API. In order to upgrade, manually log into the server, stop the Lightbits services, run a yum update, and reboot (see the sketch after this table). |
14889 | In case of an SSD failure, the system will scan the storage and rebuild the data. The entire raw capacity will be scanned, even when not all of it was utilized. This leads to a longer rebuild time than necessary. |
14863 | Prior to the Lightbits CSI plugin installation, the discovery-client service must be installed and started on all K8S cluster nodes. |
14787 | Lightbits installation will fail on systems with NVDIMMs that do not support auto labels. Workaround: Log into the server and issue the following command: ndctl create-namespace -f -e namespace0.0 --type=pmem --mode=dax --no-autolabel |
14212 | OpenStack: Once a volume attach fails, subsequent attempts to attach will also fail. Workaround: Remove the discovery-client configuration files for the failed volume and restart the discovery-client and Nova services. |
13680 | In a cluster deployed with a minimum of two replicas, when more than one node fails, a three-replica volume may stay in read-only mode after its rebuild completes if another node returns to the active state at the same time. |
13434 | Invoking rmmod nvme-fabrics before stopping the discovery-client service could cause a kernel panic on the client. |
13253 | A local rebuild takes the same amount of time regardless of storage utilization. |
13147 | If another server fails during a "disable server" flow, the "disable server" operation might not complete and the server will become active again. The "disable server" command should be issued again. |
13064 | Following a 'replace node' operation, volumes with a single replica will be created as 'unavailable' in the new node. Note: Single replica volumes are not protected, and data will not move to the new node. Workaround: Delete single replica volumes before replacing the node, or reboot the new server after replacing the node. |
12950 | When nodes are configured with allowCrossNumaDevices=false, adding an nvme-device from a NUMA node other than the logical node's instance ID will not work. To determine which NUMA node the NVMe device is connected to, check the numaNodeID returned by lbcli get nvme-device. To determine the logical node's instance ID, check the suffix of the node's name in the lbcli list nodes output. |
12310 | After a volume becomes unavailable due to the failure of all replicas, it could take more than one replica to recover before the volume can be available again. |
11856 | Volume and node usage metrics might show different values between REST/lbcli and Prometheus when a volume is deleted and a node is disconnected. |
11565 | In some cases, during Lightbits power-up, if an NVMe device is unexpectedly reset, Lightbits may fail to load and crash. A server reboot is required to recover. |
11326 | Volume metrics do not return any value for volumes that are created but do not store any data. |
10763 | The resources list (servers, nodes, volumes, nvme-devices) can only be filtered by a single field (e.g., list nvme-devices by node UUID rather than by both node UUID and server UUID). |
10021 | Commands affecting SSD content (such as blkdiscard, nvme format) should not be executed on the Lightbits server. |
9219 | If the Lightbits server unexpectedly enters an inactive state, a manual reboot is required. |
6734 | Compression should not be enabled on AMD EPYC based systems. |
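
The following sketches expand on items referenced in the table above.

For issue 19670, this is a minimal sketch of the weighted-average calculation, assuming two nodes with illustrative compression ratios and physical-usage figures; real values would be taken from the per-node statistics.

```bash
# Estimate the cluster compression ratio as a weighted average of per-node
# compression ratios, weighted by each node's physical usage.
# The ratios and usage figures below are illustrative placeholders.
awk 'BEGIN {
  ratio[1] = 2.0; usage[1] = 10   # node 1: compression ratio 2.0, 10 TiB used
  ratio[2] = 1.5; usage[2] = 5    # node 2: compression ratio 1.5, 5 TiB used
  for (i in ratio) { num += ratio[i] * usage[i]; den += usage[i] }
  printf "estimated cluster compression ratio: %.2f\n", num / den
}'
# prints: estimated cluster compression ratio: 1.83
```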
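For issue 18966, this is a sketch of the paging workaround. Only the --limit and --since arguments themselves come from the workaround description; the batch size is illustrative and <timestamp> is a placeholder for a point in time.

```bash
# Read events in smaller batches so each response stays under the maximum
# message size. The batch size and timestamp below are placeholders.
lbcli list events --limit=500 --since=<timestamp>
```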
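For issue 17398, this is a Prometheus configuration fragment showing where the scrape_interval and scrape_timeout settings live; the values are examples only, not recommendations.

```yaml
# prometheus.yml (fragment) - raise the defaults if Lightbits scrapes time out.
global:
  scrape_interval: 1m   # how often targets are scraped (Prometheus default: 1m)
  scrape_timeout: 30s   # per-scrape timeout (Prometheus default: 10s); must not exceed scrape_interval
```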
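For issue 14995, this is a rough sketch of the manual upgrade flow described in the table. The issue text does not spell out which services to stop, so a placeholder is used; consult the upgrade documentation for the exact service names.

```bash
# On the single-server cluster, after logging in:
systemctl stop <lightbits-services>   # placeholder - stop the Lightbits services
yum update                            # update the installed Lightbits packages
reboot
```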