StarlingX OpenStack 12.0¶
ISO image¶
The pre-built ISO (Debian) for StarlingX 12.0 is located at the
StarlingX mirror repo:
OpenStack Helm Charts:
Source Code for StarlingX 12.0¶
The source code for StarlingX 12.0 is available on the r/stx.12.0 branch in the StarlingX repositories.
Deployment¶
To deploy StarlingX 12.0, see Consuming StarlingX.
For detailed installation instructions, see StarlingX 12.0 Installation Guides.
New Features / Enhancements / Limitations¶
OpenStack Release 12.0 is Supported on StarlingX Release 12.0¶
OpenStack Release 12.0 is supported on StarlingX Cloud Release 12.0 and includes the following updates:
Updates driven by StarlingX Kernel and Operating System upgrades.
Helm chart and Kubernetes resource updates to support new Kubernetes versions.
Kubernetes versions supported:
1.32.2
1.33.0
1.34.1
The supported default version for new installs is:
1.34.1
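A quick way to confirm the available and active Kubernetes versions before installing or updating OpenStack (an informal check, not part of the official procedure):
$ system kube-version-list   # lists available Kubernetes versions and their states
$ kubectl version            # reports the running server version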
System application updates required by StarlingX System Application Framework changes.
StarlingX OpenStack Support on Intel Sapphire Rapids Server¶
StarlingX OpenStack 12.0 is supported on Intel Sapphire Rapids servers.
StarlingX OpenStack Support on Intel Granite Rapids-D Server¶
Note
Hosts in StarlingX Release 12.0 based on Granite Rapids use, for example, Intel E825-C and E830 NICs, which are currently not supported in OpenStack 12.0 for OVS-DPDK deployments.
Update from StarlingX OpenStack Caracal to StarlingX OpenStack Epoxy (SLURP)¶
OpenStack Release 12.0 has been upgraded from OpenStack Caracal to OpenStack Epoxy under the Skip Level Upgrade Release Process (SLURP).
Epoxy is the OpenStack release codename for OpenStack 12.0.
Epoxy is designated as a SLURP release, which introduces a new upgrade model in OpenStack.
OpenStack Epoxy is supported for new installations of OpenStack 12.0, delivered as part of the SLURP release.
OpenStack Epoxy is supported on Kubernetes v1.34.1.
OpenStack Epoxy initial release date: April 2, 2025.
This update is delivered as an in-service application update, with the following characteristics:
OpenStack configuration is preserved across the application update.
Minimal elapsed time for the application update.
Minimal OpenStack control plane service disruption.
Hosted virtual machines remain in service throughout the update.
NetApp External Storage Support¶
StarlingX OpenStack now supports NetApp as an external storage backend for the OpenStack services Cinder, Glance, and Nova, enabling deployments without requiring internal Ceph or Ceph-Rook storage:
Volume and backup storage in Cinder
Image storage in Glance
Ephemeral volume storage in Nova
Supported storage protocols include Fibre Channel, iSCSI, and NFS. The following backend configurations are supported across Cinder, Glance, and Nova:
Internal Ceph storage backend only
External NetApp storage backend only
Concurrent use of both Ceph and NetApp backends
Note
External NetApp deployments do not require Ceph or Ceph-Rook for Cinder, Glance, or Nova.
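As an illustration only, a Cinder volume type can be mapped to a configured NetApp backend and used for volume creation; the backend name netapp-backend below is a placeholder that must match the name configured for the NetApp Cinder backend in your deployment:
$ openstack volume type create netapp-volumes
$ openstack volume type set netapp-volumes --property volume_backend_name=netapp-backend
$ openstack volume create --type netapp-volumes --size 10 netapp-test-volume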
The following OpenStack NetApp capabilities are supported:
Fibre Channel (FC) support for NetApp storage integration.
Full integration with OpenStack Cinder drivers for persistent VM storage.
First external storage integration available for StarlingX OpenStack, enabling enterprise-grade shared storage architectures.
Leverages NetApp external SAN for disaster recovery, capacity burst, and overflow workloads.
Enables seamless migration from VMware vSAN to StarlingX OpenStack with NetApp-backed storage.
Supports non-disruptive operations, including rolling upgrades and hardware refresh without downtime.
Delivers synchronous metro-cluster replication with zero-RPO disaster recovery.
Provides instant, space-efficient snapshots, cloning, and rapid rollback at scale.
Ensures predictable performance and tenant isolation through NetApp QoS policies.
Enables boot-from-SAN deployments and stateless compute architectures.
Allows independent scaling of compute and storage resources for operational flexibility.
Helps meet regulatory, certification, and vendor support compliance requirements.
See:
Role Based Access Control (RBAC) policy configuration¶
StarlingX OpenStack Release 12.0 enables group-based RBAC for Dex-federated users by leveraging OIDC group claims. Keystone processes these claims from the identity provider and maps them directly to OpenStack groups, allowing roles to be automatically assigned according to group membership. This feature includes:
Full support for all OpenStack Keystone roles and policies.
Dynamic creation of customer-defined roles and associated policies.
Flexible role criteria for multiple user types (master, leader, viewer, tester, etc.).
Integration of RBAC policies with SSO authentication mechanisms such as Active Directory.
See: RBAC for Dex Users
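As a minimal illustration of the resulting group-based assignment (the group, project, and role names are examples, not defaults), a Keystone group can be created and granted a role so that federated users mapped into it inherit the role:
$ openstack group create --domain default cloud-operators   # illustrative group name
$ openstack role add --group cloud-operators --project admin member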
Remote Centralized Authentication (LDAP/Active Directory)¶
StarlingX OpenStack now supports Single Sign-On (SSO) remote centralized authentication using LDAP / AD (Active Directory), with integration with multiple enterprise authentication technologies, enabling centralized and secure access control.
For StarlingX OpenStack Release 12.0, the scope of this feature is limited to Active Directory authentication using LDAP.
Support for additional SSO mechanisms, such as OAuth, SAML (Security Assertion Markup Language), Okta, and other identity providers, is not included in this release and is planned for consideration in future releases.
StarlingX OpenStack Bond/AE interfaces support - OVS only¶
StarlingX OpenStack supports the use of bonded (Aggregated Ethernet, AE) network interfaces, which combine two or more physical NICs into a single logical interface. Bonding provides high availability through failover, increased bandwidth, and load balancing. If a physical NIC fails, traffic continues seamlessly over the remaining NICs.
Bonded interfaces are supported across management (MGMT), out-of-band management (OAM), storage, and data/provider networks, including VLAN subinterfaces configured on bonded interfaces, enabling resilient and scalable networking configurations.
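A minimal sketch of creating a bonded data interface with the StarlingX CLI (the host name, bond mode, and port names are illustrative; see the installation guides for the authoritative procedure):
$ system host-lock worker-0
$ system host-if-add -c data -a active_standby worker-0 bond0 ae enp24s0f0 enp24s0f1
$ system host-unlock worker-0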
Known Limitations and Procedural Changes for StarlingX 12.0 OpenStack¶
Enabling DEX for StarlingX OpenStack in TLS active systems¶
An issue has been identified in DEX (OpenID Connect (OIDC)) Single Sign-On (SSO) that prevents automated enablement of DEX SSO on systems where TLS is enabled. During the automated DEX enablement process, the DEX origin URL is configured as HTTP instead of HTTPS. As a result, although user authentication via DEX succeeds, the subsequent redirection back to the OpenStack interface fails with a 404 error and returns users to the OpenStack login page, indicating that no backend endpoint is available for redirection.
Procedural Changes: To activate DEX on TLS enabled systems, follow the steps below.
Procedure
Log in to the active controller as an administrator.
Create a Helm override file to secure Horizon by running:
cat << 'EOF' > secure-horizon.yaml
conf:
  horizon:
    local_settings:
      config:
        raw:
          SECURE_PROXY_SSL_HEADER:
            - HTTP_X_FORWARDED_PROTO
            - https
EOF
Apply the override to the Horizon Helm chart.
$ system helm-override-update stx-openstack horizon openstack --reuse-values --values secure-horizon.yaml
Reapply the OpenStack application:
$ system application-apply <prefix>-openstack
After the application deployment completes, DEX will redirect authenticated users back to the OpenStack Horizon home page.
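To spot-check the result, the Horizon endpoint can be queried and the redirect location inspected for an HTTPS URL (the FQDN is a placeholder):
$ curl -sI https://horizon.<your-fqdn> | grep -iE '^HTTP/|^location'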
VMs backed by NetApp iSCSI and FC volumes are in ERROR state after StarlingX OpenStack Backup and Restore¶
OpenStack virtual machines (VMs) backed by NetApp SAN storage backends (iSCSI and FC) may complete the StarlingX OpenStack Backup and Restore process in an ERROR state. Manual intervention is required to transition the affected VMs back to an ACTIVE state.
Procedural Changes: NetApp iSCSI and FC backed VMs that remain in an ERROR state after the StarlingX OpenStack backup and restore procedure can be manually recovered by refreshing their resources using the OpenStack Server Shelve and Unshelve actions.
$ nova reset-state --active <VM uuid>
$ openstack server stop <VM uuid>
$ openstack server shelve <VM uuid>
$ openstack server unshelve <VM uuid>
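A quick check to confirm that the VM returned to service after the unshelve completes:
$ openstack server show <VM uuid> -c status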
For more details on procedural changes, refer to the StarlingX OpenStack Backup and Restore documentation at OpenStack Backup Considerations.
NetApp PVCs are not Automatically Backed up and Restored by StarlingX OpenStack Ansible playbooks¶
Glance, Cinder, and Nova can be configured to use NetApp PVCs for data persistence; however, these PVCs are not automatically backed up or restored by the StarlingX OpenStack backup and restore (B&R) playbooks. As a result, the following issues may occur after the B&R procedure completes:
Glance images previously stored in the glance-images PVC continue to appear in the output of openstack image list but are no longer usable because the underlying image data is missing (for example, volume creation from image fails).
Cinder volume backups previously stored in the cinder-backup PVC continue to appear in the output of openstack volume backup list but are no longer usable because the backup data is unavailable (for example, volume creation from backup fails).
Nova ephemeral volumes previously stored in the nova-instances PVC are no longer available, causing the affected ephemeral instances to enter an ERROR state.
Procedural Changes: Manually back up and restore NetApp PVCs using Kubernetes Volume Snapshots.
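A minimal sketch of snapshotting one of the affected PVCs with the Kubernetes VolumeSnapshot API; the VolumeSnapshotClass name is a placeholder and must match the snapshot class provided by the NetApp CSI driver in your deployment:
cat << 'EOF' > glance-images-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: glance-images-snapshot
  namespace: openstack
spec:
  volumeSnapshotClassName: <netapp-snapshot-class>
  source:
    persistentVolumeClaimName: glance-images
EOF
$ kubectl apply -f glance-images-snapshot.yaml
$ kubectl -n openstack get volumesnapshot glance-images-snapshot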
Note
Deployments and configurations for StarlingX OpenStack that rely on NetApp-backed PVCs are not recommended. Instead, use the following storage options:
NFS for Nova ephemeral volumes and Cinder backups
Cinder volumes for storing Glance images
For more details on limitations and procedural changes, refer to the StarlingX OpenStack Backup and Restore documentation at OpenStack Backup Considerations.
StarlingX OpenStack Backup and Restore Fails to Reapply the Application¶
When StarlingX OpenStack is deployed with TLS enabled (recommended for production environments), a configuration-escaping issue may cause PostgreSQL to fail while processing the SQL file during execution of the restore-openstack playbook. As a result, application configuration overrides are not fully restored, and the StarlingX OpenStack restore operation fails during the application reapply phase.
Procedural Changes: Execute the restore-openstack playbook with TLS disabled (HTTP mode) to restore StarlingX OpenStack, then reapply the application with the required configuration overrides to enable TLS after the restore completes.
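A sketch of the restore invocation under this workaround; the playbook path and variable names follow the usual StarlingX Ansible layout but may differ by release, so verify them against the B&R documentation before use:
$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_openstack.yml \
    -e 'initial_backup_dir=/opt/backups backup_filename=<openstack_backup>.tgz'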
For more details on procedural changes, refer to the StarlingX OpenStack Backup and Restore documentation at OpenStack Backup Considerations.
StarlingX OpenStack Reapply Fails to Deploy Nova-Compute Pods¶
When stx-openstack is removed (system application-remove stx-openstack) while an orphaned libvirt domain (a guest virtual machine not owned by the Nova service) is still running on one of the compute nodes, subsequent application re-applies (system application-apply stx-openstack) may fail to deploy nova-compute pods, even though the application status is displayed as applied at the end of the apply operation.
During the stx-openstack removal procedure, the application lifecycle plugin blocks the removal operation if any virtual machines are present in the Nova database. However, in rare cases, a virtual machine may become orphaned from the Nova service and exist only as a libvirt domain. In such cases, the lifecycle plugin does not detect the orphaned instance and allows the application removal to proceed.
Procedural Changes: When the application fails to re-apply due to orphaned VMs from a previous apply, before re-applying the StarlingX OpenStack application, ssh into the affected compute node (the node where the orphaned VM was left behind) and manually delete the orphaned VMs using the virsh CLI.
$ ssh ${COMPUTE_NODE_NAME}
$ sudo virsh list --all # Identify and copy the name of the orphaned VMs
$ sudo virsh destroy <instance-name> # Turn off the VM
# Flag --nvram required for UEFI-booted VM with NVRAM variables
$ sudo virsh undefine <instance-name> [--nvram] # Remove the VM
Note
To remove the StarlingX OpenStack application, first remove all StarlingX OpenStack resources (including VMs) and then remove the application. For more details, see Uninstall OpenStack.
Cinder Volume Creation Fails when a QoS Policy Is Applied to a NetApp NFS Volume Type¶
NetApp QoS support and REST client dependency: Volume creation with QoS using the NetApp NFS backend is not supported in environments running NetApp ONTAP versions earlier than 9.11.1.
When using the default legacy ZAPI client (netapp_use_legacy_client = true), the Cinder NetApp driver relies on APIs such as qos-policy-group-get-iter, which are not fully supported or behave differently in ONTAP 9.9.1. This prevents proper QoS policy handling and results in volume provisioning failures.
An attempt to switch to the REST client (netapp_use_legacy_client = false) does not resolve the issue in these environments. After resolving configuration-related issues (such as NFS share formatting and backend configuration), the driver fails to initialize due to the following version requirement:
Note
REST Client can be used only with ONTAP 9.11.1 or higher.
Impacts due to this limitation:
QoS-based volume creation is limited when using the legacy ZAPI client on ONTAP 9.9.1, and cannot be validated using the REST client due to its requirement for ONTAP 9.11.1 or higher.
NetApp backend initialization may fail when attempting to use the REST client.
No functional configuration allows QoS usage in these environments.
Procedural Changes: N/A.
stx-openstack Application does not Clean Up Extra Ceph Pools¶
After the stx-openstack application is removed, the associated Ceph pools remain in the Ceph cluster.
Warning
Deleting the additional Ceph pools created by StarlingX OpenStack results in permanent data loss. After these Ceph pools are removed, the associated OpenStack data cannot be recovered. As a consequence, OpenStack resources such as Glance images, Cinder volumes, and Cinder backups stored in those pools will not be restored by the StarlingX OpenStack Backup and Restore procedure.
$ system application-list
+--------------------------+----------+-------------------------------------------+------------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+--------------------------+----------+-------------------------------------------+------------------+----------+-----------+
| cert-manager | 26.03-83 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed |
| dell-storage | 26.03-17 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| deployment-manager | 26.03-30 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed |
| ipsec-policy-operator | 26.03-42 | ipsec-policy-operator-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| nginx-ingress-controller | 26.03-76 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed |
| oidc-auth-apps | 26.03-76 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| platform-integ-apps | 26.03-5 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
| rook-ceph | 26.03-5 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| stx-openstack | 26.03-0 | stx-openstack-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
+--------------------------+----------+-------------------------------------------+------------------+----------+-----------+
$ ceph osd pool ls
kube-rbd
kube-cephfs-data
kube-cephfs-metadata
images
cinder.backups
cinder-volumes
Procedural Changes:
Procedure
Verify that the pools exist before trying to remove them.
$ ceph osd pool ls
Unlock the monitors.
$ ceph tell mon.* injectargs --mon-allow-pool-delete=true
Remove the pools.
$ ceph osd pool delete images images --yes-i-really-really-mean-it
$ ceph osd pool delete cinder-volumes cinder-volumes --yes-i-really-really-mean-it
$ ceph osd pool delete cinder.backups cinder.backups --yes-i-really-really-mean-it
Lock the monitors.
$ ceph tell mon.* injectargs --mon-allow-pool-delete=false
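List the pools again to confirm that the OpenStack pools were removed:
$ ceph osd pool ls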
stx-openstack Application is not Always Re-applied when New Worker Nodes are Added¶
The stx-openstack application is not automatically re-applied to newly added compute nodes, whether the nodes are added individually or in batches, especially when scaling to higher node counts.
Procedural Changes: After adding OpenStack worker nodes, the OpenStack application must be manually re-applied.
Note
When the total number of compute nodes exceeds 30, limit onboarding to only two compute nodes at a time.
Procedure
Disable Kubernetes application audit.
$ system service-parameter-modify platform config k8s_application_audit=disabled
Disable automatic re-apply after runtime manifest apply.
$ system service-parameter-modify platform config autoreapply_apps_after_apply_runtime_manifest=disabled
Provision compute nodes (batch onboarding).
Re-enable Kubernetes application audit.
$ system service-parameter-modify platform config k8s_application_audit=enabled
Re-enable automatic re-apply after runtime manifest apply.
$ system service-parameter-modify platform config autoreapply_apps_after_apply_runtime_manifest=enabled
Manually re-apply the OpenStack application (as required).
$ system application-apply stx-openstack
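After the re-apply completes, one way to confirm that OpenStack pods were scheduled on the newly added workers (the node name is a placeholder):
$ kubectl get pods -n openstack -o wide | grep <new-worker-name>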
OpenStack Restore Overwrites Glance Helm User Overrides¶
During the OpenStack restore procedure, the Ansible playbook executes the helm-override-update command on the Glance chart without the --reuse-values flag. This causes any user overrides previously applied to the Glance chart to be completely overwritten by the restore process.
Procedural Changes: Before running the restore procedure, execute the following command to add the --reuse-values flag to all helm-override-update commands in the restore-openstack Ansible files.
sudo find / -path "*/restore-openstack*" -type f -exec grep -l "system helm-override-update" {} \; 2>/dev/null | while read f; do
  tmp=$(mktemp)
  sudo sed -e '/--reuse-values/!s/system helm-override-update/system helm-override-update --reuse-values/g' \
    -e 's/show_multiple_locations=True/show_multiple_locations=False/g' "$f" > "$tmp"
  chmod 644 "$tmp"
  sudo mount --bind "$tmp" "$f"
done
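After the restore completes, user overrides on the Glance chart can be spot-checked to confirm they were preserved:
$ system helm-override-show stx-openstack glance openstack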
Manual TLS Certificate Rotation¶
When StarlingX OpenStack is deployed with FQDN and TLS configured, certificates do not refresh automatically. Kubernetes subPath mounts bind the certificate at pod creation and do not update when the secret changes. Rotation must be done manually by patching the TLS secrets and reapplying the application.
Note
Certificate expiration does not trigger pod failures. Pods remain Running
and appear healthy in kubectl get pods. Errors are raised only during a
subsequent TLS handshake (for example, CERTIFICATE_VERIFY_FAILED from
openstack volume list). Ensure certificate expiry is tracked independently
rather than relying on pod health.
Procedural Changes: Follow the procedure below:
Prerequisites
Verify if the OpenStack endpoint domain is configured:
$ system service-parameter-list | grep endpoint_domain
Verify if Helm overrides point to the certificate file paths:
$ system helm-override-show stx-openstack clients openstack
openstackCertificateFile: /var/opt/openstack/certs/openstack-cert.crt
openstackCertificateKeyFile: /var/opt/openstack/certs/openstack-cert.key
openstackCertificateCAFile: /var/opt/openstack/certs/openstack-ca-cert.crt
Follow the manual TLS certificate rotation steps below:
Procedure
Place the new certificate files on the controller.
In most cases only the server cert and key need to be rotated. Replace the CA cert only if it has also changed (e.g. full chain rotation or CA re-issuance).
Server cert and key (required):
$ sudo cp openstack-cert.crt /var/opt/openstack/certs/openstack-cert.crt
$ sudo cp openstack-cert.key /var/opt/openstack/certs/openstack-cert.key
CA cert (only if the CA has changed):
$ sudo cp openstack-ca-cert.crt /var/opt/openstack/certs/openstack-ca-cert.crt
Verify the chain and the cert/key are valid before proceeding:
$ openssl verify -CAfile /var/opt/openstack/certs/openstack-ca-cert.crt \
    /var/opt/openstack/certs/openstack-cert.crt
Patch each TLS secret in the openstack namespace.
Pods hold the old cert via subPath mounts and will not reload until reapply. Patch each secret to update the CA, certificate, and key values:
CA=$(base64 -w0 /var/opt/openstack/certs/openstack-ca-cert.crt)
CRT=$(base64 -w0 /var/opt/openstack/certs/openstack-cert.crt)
KEY=$(base64 -w0 /var/opt/openstack/certs/openstack-cert.key)
for SECRET in $(kubectl -n openstack get secrets -o name | grep tls-public); do
  kubectl patch $SECRET -n openstack --type=json -p="[
    {\"op\":\"replace\",\"path\":\"/data/ca.crt\",\"value\":\"$CA\"},
    {\"op\":\"replace\",\"path\":\"/data/tls.crt\",\"value\":\"$CRT\"},
    {\"op\":\"replace\",\"path\":\"/data/tls.key\",\"value\":\"$KEY\"}
  ]"
done
Reapply the application to initiate a pod rollout and ensure the updated secrets are applied.
$ system application-apply stx-openstack
Monitor the status with:
$ system application-show stx-openstack
Wait until status: applied and progress: completed is displayed.
Confirm the certificate rotation by verifying that the new certificate is present on the wire:
FQDN=$(system service-parameter-list | awk '/endpoint_domain/{print $8}')
echo | openssl s_client -connect keystone-${FQDN}:443 \
  -servername keystone-${FQDN} 2>/dev/null | \
  openssl x509 -noout -dates -subject -issuer
Confirm that certificate rotation is complete by comparing the dates and issuer against the new certificate.
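The rotated certificate can also be inspected directly in one of the patched secrets; the secret name below is illustrative, so use any name returned by the grep tls-public filter above:
$ kubectl -n openstack get secret keystone-tls-public \
    -o jsonpath='{.data.tls\.crt}' | base64 -d | \
    openssl x509 -noout -enddate -subject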
Manual Creation of LDAP Groups on the Central Controller¶
The LDAP logic added during the OpenStack client containerization requires the “OpenStack” user group to exist on the Central Controller when deploying DC for OpenStack on subclouds. If the group does not exist, the application upload will fail with an error indicating that the group cannot be found.
Procedural Changes: Before uploading StarlingX OpenStack to the subcloud, manually create the “OpenStack” user group on the Central Controller with a fixed GID to maintain consistent group ownership across subclouds.
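A sketch of the group creation, assuming the ldapscripts tooling available on StarlingX controllers; the GID value is illustrative and must be kept identical across subclouds:
$ sudo ldapaddgroup openstack 1001   # illustrative GID; keep consistent across subclouds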
Unexpected Controller Shutdown Results in StarlingX OpenStack Service Disruption¶
On AIO-DX systems, an abrupt controller node shutdown (for example, caused by a power outage) can bring down the StarlingX OpenStack MariaDB service, resulting in OpenStack services becoming unavailable. In this state, standard OpenStack CLI commands, for example openstack server list --all, are expected to fail.
Procedural Changes:
To recover OpenStack after an abrupt controller node shutdown on AIO-DX systems, use either of the two supported methods:
Option 1
Wait for Node Recovery. Once the controller node is back online and available, the platform automatically triggers an OpenStack reapply. This restores the MariaDB service and all associated OpenStack services. No manual intervention is required.
Option 2
Manual recovery using Kubernetes taint and reapply. If immediate recovery is needed or the node remains unavailable, use the following steps:
Taint the affected node to mark it as out-of-service (the out-of-service taint is supported since Kubernetes v1.28):
$ kubectl taint nodes <node> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
Reapply the OpenStack application:
$ system application-apply stx-openstack
Note
During OpenStack reapply, if any OpenStack pods remain in a “Terminating” state, force delete them using the following command:
$ kubectl delete pod <pod-name> --force -n openstack
Postrequisites
If Option 2 is used, the following steps must be executed to get OpenStack redeployed on both controllers when the node is back online:
Remove the taint from the node.
$ kubectl taint nodes <node> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
Reapply the OpenStack application.
$ system application-apply stx-openstack
If the apply stalls because RabbitMQ pods are in CrashLoopBackOff, run the following command to reconfigure the pods and continue the apply.
$ kubectl delete pods -n openstack -l application=rabbitmq
Note
During OpenStack reapply, if any OpenStack pods remain in a “Terminating” state, force delete them using the following command:
$ kubectl delete pod <pod-name> --force -n openstack
VMs with Large Local Ephemeral Disks are Unable to Evacuate from a Compute Node that is Down¶
The NFV VIM limits the local ephemeral disk size of VMs that can be evacuated.
VMs that have a flavor with a local ephemeral disk larger than 60 GB are not evacuated from a compute node that is down and may intermittently be left in an inaccessible / ERROR state.
VMs that have a flavor with a local ephemeral disk smaller than the limit set by NFV, or that use volumes backed by a remote storage backend (e.g. rook ceph), will be evacuated upon node failure.
Procedural Changes: The VM intermittently enters an error state following migration triggered by a host reset or shutdown. If this happens, the affected VMs must be recreated.
To allow evacuation of servers with larger disk sizes, do the following:
Set max_evacuate_local_image_disk_gb: <size in GB> in the [instance-configuration] section of the /etc/nfv/vim/config.ini config file.
Restart the VIM service.
sudo sm-dump | awk '$1 ~ "vim" && $2 ~ "enabled" {print $1}' | sudo xargs -rn1 sm-restart-safe service
Recover and Update Operations Incorrectly Reported as ‘Completed’¶
Sometimes the status of recover or update operations is reported as ‘completed’ by the system application-show <prefix>-openstack command while the operation is still running.
Procedural Changes: Monitor the progress of recover and update operations using kubectl get helmreleases -n openstack. The update operation can be considered ‘completed’ only when all the Helm releases displayed by kubectl get helmreleases -n openstack report True in the READY column.
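One way to list any Helm releases that are not yet ready, assuming READY is the third column as in the check above:
$ kubectl get helmreleases -n openstack | awk 'NR>1 && $3!="True" {print $1, "not ready"}'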
Users can Pause up to 2 VMs Concurrently¶
If more than 2 VMs are paused, suspended, or stopped concurrently, an HTTP ERROR 500 is displayed.
Procedural Changes: Pause / Suspend / Stop only up to 2 VMs concurrently.
Ceph Volume Size does not Decrease¶
Ceph volume size stays the same after the volumes and VMs are deleted, delaying space reclamation and impacting resource availability.
Procedural Changes: Increase the size for the affected volumes and VMs to mitigate the impact of this delay.
OVS-DPDK Huge Page limitation¶
If Huge Pages are not present in the guest OS (small page size is used or no page size is set), the network interface will appear but will not work as expected.
Procedural Changes: VM instances must be configured to utilize large (1G)
Huge Pages through the custom flavor attribute hw:mem_page_size. Use the
following command to add Huge Page support on flavors, enabling AVS/DPDK to
work as expected.
$ openstack flavor set <flavor> --property hw:mem_page_size=1GB
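To confirm that the property was applied to the flavor:
$ openstack flavor show <flavor> -c properties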
Hardware Updates¶
See:
Bug status¶
Fixed bugs¶
This release provides fixes for a number of defects. Refer to the StarlingX bug database to review the Release 12.0 Fixed Bugs.