Release 3.11.1
Release Date
v3.11.1 was released to the public on October 9, 2024.
New in This Release
This release introduces the following changes since version 3.10.x:
Item | Description | ID |
---|---|---|
1 | New feature [tech preview]: Software Encryption at Rest. You can now enable cluster-level encryption for your data stored on drives (encryption at rest), such that if any drive is removed from the cluster, the data on it remains encrypted on the drive and cannot be read as plain text. The data is encrypted using AES-XTS-256 (see the illustrative sketch following this table). Note that this feature is in technology preview, not yet generally available, and offers only limited support for evaluation purposes. It can only be enabled on clusters that have no data (e.g., newly installed clusters), and it cannot be disabled once activated. Use it with caution and do not enable it in your production clusters. For more information on how to enable and use it, see the Lightbits Administration Guide and the lbcli and API documentation. | LBM1-33293 |
2 | New feature [tech preview]: Active Directory Federation Services (ADFS) Integration. You can now enable ADFS support for your clusters, allowing Lightbits to authenticate and authorize API invocations using your organization’s ADFS/OAuth services and single sign-on. Note that this feature is in technology preview, not yet generally available, and offers only limited support for evaluation purposes. Use it with caution and do not enable it in your production clusters. For more information on how to enable and use it, see the Lightbits Administration Guide and the lbcli and API documentation. | LBM1-33555 |
3 | New experimental feature: Added a Generative AI assistant for lbcli, available via the new `lbcli ask` command. This experimental feature requires internet connectivity. | LBM1-34644 |
4 | New feature: Added the ability to add/edit/remove user-defined metadata in the form of key=value labels to volumes. For more information, see the Lightbits Administration Guide and the lbcli and API documentation. | LBM1-33764 |
5 | New feature: All internal etcd clients (e.g., `cluster-manager`, `node-manager`, `api-service`) will now have multiple mTLS endpoints that they can use to connect to etcd. If the local etcd fails, the client can continue to communicate with an etcd service on another server in the cluster, without causing the node to become unavailable (see the illustrative sketch following this table). | LBM1-33549 |
6 | `api-service`: Fixed a rare condition where the `api-service` could return a volume as created without a creation timestamp. | LBM1-34392 |
7 | `api-service`: Blocked the option to set a negative value for a server-specific permanent failure timeout, even when `ForceConfiguration` is configured. | LBM1-34013 |
8 | `cluster-manager`: Fixed a bug that could lead to repeating crashes of the cluster-manager process (process panic), volumes being left in Migrating state, a flood of repeating volume events in the event log, and increased CPU and I/O consumption by the `cluster-manager`. The corresponding process crashes can be identified by a panic message in `journalctl` ("panic: resource ... was not considered as under update, task ID: ..., operation: PgMigrationOperation, state: Running, resource: ..."). | LBM1-34103 |
9 | `cluster-manager`: When a migration task fails or is aborted, the corresponding etcd updates are now performed as one atomic transaction to prevent partial updates. This fix prevents race conditions that could have led to cluster instability, including the inability to perform management operations and even to properly access volumes. | LBM1-33412 |
10 | `cluster-manager`: Fixed incorrect handling when trying to delete a server that has the same IP as an existing server (for example, if a server was replaced using the same IP). | LBM1-30306 |
11 | `data-layer`: Fixed a minor error that caused the repeated generation of an empty key `/registry` in etcd. | LBM1-34209 |
12 | `lb_monitor.sh`: Improved the rebuild percentage statistic by weighting it by the physical usage of each volume, and added a total rebuild percentage for the cluster that shows 100% when all volumes are synced and a rebuild is done (see the illustrative sketch following this table). | LBM1-33545 |
13 | `lbcli`: Added an option to fetch logs from all servers in the cluster. For example, you can run `lbcli fetch logs --path Dir --servers-list=server01,server02` or `lbcli fetch logs --path Dir --servers-list=all`. | LBM1-34072 |
14 | `lbcli`: Added additional debug information for scenarios where `lbcli` fails to open its configuration file. | LBM1-34340 |
15 | `light-app`: Fixed an issue with the logs playbook running the log-collector role. Updated the `log_days` parameter to correctly specify the number of days of logs to collect from each server. | LBM1-34345 |
16 | `lightbits-api`: Encryption at rest can only be enabled if there are no existing volumes or snapshots on the cluster. | LBM1-33605 |
17 | `lightbits-api`: Enabling encryption can take a couple of seconds. During this time, if a user tries to create a volume, the operation will fail with an error returned to the user or an event written to the event log. It is recommended to validate that the cluster encryption state is enabled before creating volumes. | LBM1-33606 |
18 | `lightbox-exporter`: Fixed a server performance dashboard exporter port problem that caused an empty selector and "no data" in some graphs. | LBM1-34171 |
19 | `node-manager`: Increased `node-manager.service` `shutdownCompleteTimeout` and the internal `shutdownCompleteTimeout` and `recoveryCompleteTimeout` to be sufficiently long for systems with large storage size. This prevents timing out during a graceful shutdown (resulting in an unwanted abrupt powerup), or during an abrupt powerup (resulting in failure to power up the node). Note that this change is only applicable to new installations. | LBM1-34417 |
20 | `openstack`: Added volume-level IPACL support to the Lightbits cinder and os-brick drivers in OpenStack, starting from the OpenStack Dalmatian release (see the illustrative sketch following this table). | LBM1-29596 |
21 | `userlbe`: During shutdown, the GFTL will wait for the frontend to close all connections and disconnect before starting cleanup and shutting down. | LBM1-34343 |
22 | `userlbe`: Fixed an issue that caused abrupt recovery to fail with an assertion failure in `md_recovery.c:644`, leaving a server unable to power up after an abrupt shutdown. This can happen following the deletion of more than one snapshot of a volume, where one of the deleted snapshots has a `SnapshotNsidVer` value matching a certain pattern (two values out of each 64, bits [5:1]=0x1f). The snapshots must contain data prior to deletion (non-zero values of `userWritten`/`physicalCapacity`). | LBM1-34586 |
23 | `userlbe`: Implemented hysteresis for metadata (MD) ranges and storage over-provisioned space (OVP) to prevent read-only/read-write flickering. Under certain conditions - e.g., running out of MD shards and/or running out of storage OVP - the system would go into read-only and then immediately return to read-write once usage dropped below the threshold again. This could lead to quick flickering between read-only and read-write states, which is undesirable for filesystems. The hysteresis mechanism increases the likelihood that once the system goes into read-only or back to read-write, it stays in that state for a longer period of time (see the illustrative sketch following this table). | LBM1-34172 |
24 | `userlbe`: Sped up graceful shutdown by removing a no-longer-needed limit of using only four cores. | LBM1-34180 |
25 | Photon - Lightbits’ multi-cluster management UI - was refreshed from the previously released public preview version, with several performance and quality improvements. | LBM1-34285 |
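For background on item 1 (LBM1-33293): with AES-XTS-256, each data unit is encrypted with a 512-bit key and a per-unit tweak, so identical plaintext stored at different positions produces different ciphertext, and a drive removed from the cluster yields only ciphertext. The following is a minimal sketch using Python's cryptography package; the 4 KiB data-unit size and block-number tweak are assumptions for illustration only and do not describe the Lightbits implementation.

```python
# Minimal illustration of AES-XTS-256 data-unit encryption (not Lightbits code).
# Assumption: each 4096-byte data unit is tweaked by its logical block number.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK_SIZE = 4096            # assumed data-unit size for this sketch
key = os.urandom(64)         # AES-XTS-256 uses a 512-bit (2 x 256-bit) key

def xts_encrypt_block(data: bytes, block_number: int) -> bytes:
    """Encrypt one data unit; the tweak binds the ciphertext to its position."""
    tweak = block_number.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(data) + enc.finalize()

def xts_decrypt_block(data: bytes, block_number: int) -> bytes:
    tweak = block_number.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(data) + dec.finalize()

plain = os.urandom(BLOCK_SIZE)
assert xts_decrypt_block(xts_encrypt_block(plain, 7), 7) == plain
```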
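For background on item 5 (LBM1-33549): the benefit of multiple mTLS endpoints is that an internal client is no longer tied to the etcd member running on its own server. The sketch below shows the general client-side failover pattern; the endpoint addresses, certificate paths, and connect() helper are hypothetical assumptions and this is not the Lightbits client code.

```python
# Generic client-side failover across multiple mTLS endpoints (illustrative only;
# the endpoint list, certificate paths, and connect() helper are assumptions).
import socket
import ssl
from typing import Callable

ENDPOINTS = ["10.0.0.1:2379", "10.0.0.2:2379", "10.0.0.3:2379"]  # assumed addresses

def connect(endpoint: str) -> ssl.SSLSocket:
    """Open an mTLS connection to one etcd endpoint."""
    host, port = endpoint.rsplit(":", 1)
    ctx = ssl.create_default_context(cafile="/etc/ssl/etcd/ca.pem")
    ctx.load_cert_chain("/etc/ssl/etcd/client.pem", "/etc/ssl/etcd/client-key.pem")
    raw = socket.create_connection((host, int(port)), timeout=3)
    return ctx.wrap_socket(raw, server_hostname=host)

def with_failover(operation: Callable[[ssl.SSLSocket], bytes]) -> bytes:
    """Try each endpoint in turn; fail only if every cluster member is unreachable."""
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            with connect(endpoint) as conn:
                return operation(conn)
        except OSError as err:      # local member down: move on to the next one
            last_error = err
    raise ConnectionError(f"all etcd endpoints failed: {last_error}")
```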
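For background on item 12 (LBM1-33545): weighting the rebuild statistic by physical usage keeps a small, nearly empty volume from skewing the cluster-wide number. A minimal sketch of such a weighted average, with assumed field names rather than the actual lb_monitor.sh data:

```python
# Weighted rebuild progress: each volume contributes in proportion to its
# physical usage, so the total reads 100% only when every volume is synced.
# (Illustrative only; the field names below are assumptions.)

def cluster_rebuild_percent(volumes: list[dict]) -> float:
    total_physical = sum(v["physical_used_bytes"] for v in volumes)
    if total_physical == 0:
        return 100.0                       # nothing to rebuild
    weighted = sum(
        v["rebuild_percent"] * v["physical_used_bytes"] for v in volumes
    )
    return weighted / total_physical

volumes = [
    {"physical_used_bytes": 900 * 2**30, "rebuild_percent": 50.0},   # large, half done
    {"physical_used_bytes": 100 * 2**30, "rebuild_percent": 100.0},  # small, synced
]
print(f"{cluster_rebuild_percent(volumes):.1f}%")   # 55.0%, not an unweighted 75%
```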
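For background on item 20 (LBM1-29596): a volume-level IP ACL restricts which initiator addresses may connect to a given volume. The sketch below illustrates the general idea with Python's standard ipaddress module; the ACL format and the ALLOW_ANY marker are assumptions for this sketch, not the Cinder/os-brick driver behavior.

```python
# Conceptual volume-level IP ACL check (illustrative only; not driver code).
import ipaddress

def initiator_allowed(initiator_ip: str, acl: list[str]) -> bool:
    """Allow the connection if the initiator falls inside any ACL entry.

    Entries may be single addresses or CIDR networks; an "ALLOW_ANY" entry
    (an assumption for this sketch) disables filtering entirely.
    """
    if "ALLOW_ANY" in acl:
        return True
    addr = ipaddress.ip_address(initiator_ip)
    return any(addr in ipaddress.ip_network(entry, strict=False) for entry in acl)

acl = ["10.20.0.0/24", "10.20.1.7"]
print(initiator_allowed("10.20.0.15", acl))   # True
print(initiator_allowed("10.30.0.15", acl))   # False
```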
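For background on item 23 (LBM1-34172): hysteresis uses a higher threshold for entering read-only than for returning to read-write, so usage hovering around a single limit cannot flip the state back and forth. The thresholds and structure below are illustrative assumptions, not the userlbe implementation.

```python
# Hysteresis between read-only and read-write (illustrative thresholds only).
class CapacityState:
    """Enter read-only above a high-water mark; leave it only after usage drops
    below a lower low-water mark, preventing rapid state flickering."""

    ENTER_READ_ONLY = 0.95   # assumed high-water mark (fraction of OVP/MD used)
    EXIT_READ_ONLY = 0.90    # assumed low-water mark

    def __init__(self) -> None:
        self.read_only = False

    def update(self, used_fraction: float) -> bool:
        if not self.read_only and used_fraction >= self.ENTER_READ_ONLY:
            self.read_only = True
        elif self.read_only and used_fraction < self.EXIT_READ_ONLY:
            self.read_only = False
        return self.read_only

state = CapacityState()
for usage in (0.94, 0.96, 0.94, 0.89):   # hovering near a single old threshold
    print(usage, "read-only" if state.update(usage) else "read-write")
# 0.94 stays read-write, 0.96 enters read-only, 0.94 stays read-only, 0.89 returns
```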
Installation and Upgradeability
You can upgrade to this release from all previous Lightbits v3.8.x, 3.9.x, and 3.10.x releases.