Release 3.12.2
Release Date
v3.12.2 was released to the public on Apr 2, 2025.
New in This Release
This release is the latest 3.12.x LTS release. It introduces the following changes since version 3.12.1. Upgrading from earlier versions to 3.12.2 is recommended.
Item | Description | ID |
---|---|---|
1 | data-layer : Fixed incorrect handling of invalid values in control-plane monitoring. In unexpected cases where a control-plane key-value object was mistakenly stored with an invalid value, Lightbits would stop monitoring for specific key/prefix changes, which could cause the control plane to malfunction. | LBM1-35501 |
2 | data-layer : Fixed an issue that could cause a node to hang when etcd becomes unresponsive. | LBM1-35894 |
3 | data-layer : Fixed incorrect error handling when watching for changes in the control plane. Due to a missing retry mechanism, in some rare cases Lightbits could completely stop watching for changes, without raising any significant user event or attempting to restart the watch. This could potentially lead to service loss (see the illustrative watch-retry sketch after this table). Upgrading from 3.10.x, 3.11.x, and 3.12.1 is highly recommended. | LBM1-36090 |
4 | duroslight : Fixed a bug that could make rebuilds slightly longer than necessary: missed synchronization points caused Lightbits to rebuild from an earlier synchronization point. | LBM1-35565 |
5 | lb-support : lb-support now skips kdump information collection if kdump is not used on the system. | LBM1-36260 LBM1-35753 |
6 | node-manager : Fixed an issue where a failure while starting the node-manager's internal services could cause the node to set its status to active, even though it could not handle any change in the protection group. | LBM1-35886 |
7 | node-manager : Improved the node-manager startup flow. Added previously missing events to various failure flows, and added retry mechanisms for transient network issues when loading data-layer objects. | LBM1-35749 |
8 | node-manager : Made TPM encryption handling more robust when servers are reused in different clusters, by no longer relying on existing TPM sub-keys for encryption. | LBM1-36027 |
9 | node-manager : Fixed a bug that caused a node to remain inactive if encryption was enabled using TPM but the TPM was later mistakenly cleared on the node's server and the node-manager service was then restarted. | LBM1-36044 |
10 | node-manager : Fixed an issue where a failure while starting the node-manager's internal services could cause the node to appear active when it was actually inactive. | LBM1-35575 |
11 | node-manager : Increased the default LimitNOFILE that the node-manager sets in the duroslight-*.service files from 65,536 to 300,000. This prevents duroslight process failures due to "Too many open files" with a large number of connected clients. duroslight gracefully rejects new connections once the limit of 2030 clients is reached, and each client can have up to 128 I/O connections (see the illustrative limit-check sketch after this table). | LBM1-36478 |
12 | node-manager : Fixed an issue that could lead to a node becoming inactive under certain circumstances, where TPM access completely jams the Go runtime scheduler and prevents any other goroutines spawned by the running service from running. | LBM1-35340 |
13 | node-manager : Prevented a server that has been deleted from the cluster from rejoining it. | LBM1-35739 LBM1-36018 |
14 | packages/backend : The phase of TRIM object processing is now reflected in the powerup progress. | LBM1-35611 |
15 | profile-generator : Fixed a potential hang on Alma 9 kernels related to single-instance, multi-NUMA deployments. The fix enables the node-manager to reserve memory via huge pages and free that memory once the GFTL completes allocating most of the system RAM, ensuring that every node keeps a minimal amount of free RAM and avoiding hangs due to endless node-reclaim on Alma 9 kernels. The default extraReservedRAM is 50% of the OS reserved memory (currently 4 GB). | LBM1-35837 |
16 | upgrade-manager : Fixed an issue that caused the cluster's last upgrade status to remain "Upgrading" instead of "Failed" when the cluster upgrade process failed. | LBM1-35663 |
17 | userlbe : Resolved a rare issue where SSDs failing to return write completions for an extended period could cause the GFTL to crash, resulting in the node becoming inactive. | LBM1-35598 |
18 | userlbe : Fixed a bug where, under some graceful shutdown and recovery circumstances, Lightbits would use one core fewer than possible. This fix should slightly speed up graceful shutdown/recovery in those circumstances. | LBM1-35733 |
19 | userlbe : Fixed an issue where the sorting algorithm used was not configured properly for large lists sorted during powerup, which caused graceful powerup to take significantly longer than it should. | LBM1-35734 |
20 | userlbe : Fixed an issue where latency metrics were all bunched together in the smallest histogram bucket, making it harder to debug disk latency issues. Metrics are now correctly divided into buckets. | LBM1-35882 |
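The watch-handling fix in item 3 (LBM1-36090) is about re-establishing a control-plane watch instead of silently stopping when it fails. The Go sketch below is a minimal, hypothetical illustration of that general pattern using the public etcd v3 client API; it is not Lightbits code, and the endpoint, key prefix, and retry delay are assumptions made for the example.

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// watchWithRetry keeps a prefix watch alive: if the watch channel closes or
// reports an error, it restarts the watch after a short delay instead of
// silently giving up (the failure mode described in item 3).
func watchWithRetry(ctx context.Context, cli *clientv3.Client, prefix string) {
	const retryDelay = time.Second // illustrative value, not a Lightbits default
	for {
		wch := cli.Watch(ctx, prefix, clientv3.WithPrefix())
		for resp := range wch {
			if err := resp.Err(); err != nil {
				log.Printf("watch error on %q: %v; restarting watch", prefix, err)
				break
			}
			for _, ev := range resp.Events {
				log.Printf("control-plane change: %s %q", ev.Type, ev.Kv.Key)
			}
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(retryDelay):
			// Watch channel ended; loop around and re-establish it.
		}
	}
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // assumed local etcd endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect to etcd: %v", err)
	}
	defer cli.Close()
	watchWithRetry(context.Background(), cli, "/example/prefix/") // hypothetical prefix
}
```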
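Item 11 (LBM1-36478) raises LimitNOFILE because a duroslight process serving up to 2030 clients, each with up to 128 I/O connections, can plausibly need on the order of 2030 × 128 ≈ 260,000 file descriptors, beyond the old default of 65,536. The short Go sketch below is a generic, illustrative way to check the file-descriptor limit a process actually runs with; it is not part of the Lightbits tooling, and the arithmetic is an estimate derived from the figures in the item.

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Read the process's current RLIMIT_NOFILE; for a systemd-managed service
	// this is the limit that LimitNOFILE= in the unit file controls.
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("getrlimit failed:", err)
		return
	}

	// Illustrative estimate based on the figures in item 11: up to 2030
	// connected clients, each with up to 128 I/O connections.
	const estimatedPeak = 2030 * 128 // ≈ 260,000, well above the old 65,536 default

	fmt.Printf("soft limit=%d, hard limit=%d, estimated peak descriptor need ≈ %d\n",
		lim.Cur, lim.Max, estimatedPeak)
}
```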
Installation and Upgradeability
You can upgrade to this release from all previous Lightbits v3.9.x, v3.10.x, and v3.11.x releases, as well as from v3.12.1. Upgrading is recommended.