
OSISM 10

Instructions for the upgrade can be found in the Upgrade Guide.

Release          Release Date
10.0.0-rc.1      8. December 2025
10.0.0-rc.2      30. January 2026

Upgrade notes

RabbitMQ 3 to RabbitMQ 4 migration

OSISM 10 supports only RabbitMQ 4. This makes the switch to quorum queues mandatory if it has not already been made.

If you were already using quorum queues with RabbitMQ 3, the migration from RabbitMQ 3 to RabbitMQ 4 is straightforward: run osism apply -a upgrade rabbitmq. Most of the old classic queues are removed automatically when the individual OpenStack services are upgraded afterwards. After completing all upgrades, run osism migrate rabbitmq3to4 delete to remove the remaining classic queues.
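As a compact sketch of this path (nova stands in here for each OpenStack service you upgrade; all commands are the ones named above):

osism apply -a upgrade rabbitmq
# upgrade the individual OpenStack services as usual, for example:
osism apply -a upgrade nova
# after all services have been upgraded, remove the remaining classic queues:
osism migrate rabbitmq3to4 delete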

If you are unsure whether you are already using quorum queues, first upgrade the Manager service. Then run osism migrate rabbitmq3to4 check.

$ osism migrate rabbitmq3to4 check
2025-12-03 21:04:33 | INFO | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2025-12-03 21:04:33 | INFO | Found 210 classic queue(s)
2025-12-03 21:04:33 | INFO | Found 0 quorum queue(s)
2025-12-03 21:04:33 | INFO | - 210 classic queue(s) in vhost /
2025-12-03 21:04:33 | INFO | Migration is REQUIRED: Only classic queues found, no quorum queues

If you have not used quorum queues before, here is our recommended procedure. It creates a new RabbitMQ vhost openstack that uses quorum queues by default and then moves all queues there as the services are upgraded.

  1. If not already done, upgrade the Manager service as usual.

  2. Remove the om_enable_rabbitmq_quorum_queues parameter from environments/kolla/configuration.yml.

  3. Add the om_rpc_vhost: openstack parameter in environments/kolla/configuration.yml.

  4. Add the om_notify_vhost: openstack parameter in environments/kolla/configuration.yml. A sketch of the resulting configuration follows after this list.

  5. Upgrade RabbitMQ with osism apply -a upgrade rabbitmq.

  6. Prepare a new RabbitMQ vhost that uses quorum queues by default with osism migrate rabbitmq3to4 prepare.

  7. Upgrade the services that use RabbitMQ and delete the old queues afterwards. For aodh, for example, first run the upgrade with osism apply -a upgrade aodh and then remove the classic queues.

    $ osism migrate rabbitmq3to4 delete aodh
    2025-12-02 20:55:27 | INFO | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
    2025-12-02 20:55:27 | INFO | Found 2 classic queue(s) for service 'aodh' in vhost '/'
    2025-12-02 20:55:27 | INFO | Deleted queue: alarm.all.sample
    2025-12-02 20:55:27 | INFO | Deleted queue: alarming.sample
    2025-12-02 20:55:27 | INFO | Successfully deleted 2 queue(s) for service 'aodh' in vhost '/'

    Before upgrading Nova, two additional preparation steps are required. Afterwards, you can upgrade Nova as usual with osism apply -a upgrade nova.

    osism apply -a config nova -l 'nova-conductor[0]'
    osism apply nova-update-cell-mappings

    After upgrading all services, you can also delete all remaining classic queues at once using osism migrate rabbitmq3to4 delete.

    These services use RabbitMQ:

    • aodh
    • barbican
    • ceilometer
    • cinder
    • designate
    • magnum
    • manila
    • neutron
    • nova
    • octavia
  8. Once everything has been upgraded, the old notification queues can also be deleted with osism migrate rabbitmq3to4 delete notifications.

  9. Old exchanges can be removed with osism migrate rabbitmq3to4 delete-exchanges.
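
After steps 2 to 4, the relevant part of environments/kolla/configuration.yml looks like the following sketch: om_enable_rabbitmq_quorum_queues is gone and the two vhost parameters are new.

environments/kolla/configuration.yml
om_rpc_vhost: openstack
om_notify_vhost: openstack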

When the Manager's listener service is used (enable_listener in environments/manager/configuration.yml), add the new openstack RabbitMQ vhost to the manager_listener_broker_uri parameter. Then update the Manager with osism update manager and delete the old queues with osism migrate rabbitmq3to4 delete manager.
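As an illustrative sketch of that parameter (host, port, and credentials are placeholders; the relevant change is the trailing openstack vhost):

environments/manager/configuration.yml
manager_listener_broker_uri: "amqp://openstack:PASSWORD@192.168.16.10:5672/openstack"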

Finally, you can re-run the check command. There should now be no more classic queues.

$ osism migrate rabbitmq3to4 check
2025-12-04 08:38:58 | INFO | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2025-12-04 08:38:58 | INFO | Found 0 classic queue(s)
2025-12-04 08:38:58 | INFO | Found 216 quorum queue(s)
2025-12-04 08:38:58 | INFO | - 216 quorum queue(s) in vhost openstack
2025-12-04 08:38:58 | INFO | Migration is NOT required: Only quorum queues found

New namespace for Kolla images

To make it easier to identify which OpenStack version is being used, the OpenStack version is now included in the Kolla image namespace. An existing docker_namespace parameter must be adjusted accordingly; for OSISM 10 this looks as follows. In the future, it will be possible to use different OpenStack versions with a specific OSISM release.

environments/kolla/configuration.yml
docker_namespace: kolla/release/2025.1

New container registry

Container images are no longer pushed to Quay.io and are now made available only on our own container registry. During the transition phase, the new container registry must be made known in the configuration repository. In the future, these parameters can be removed again.

environments/manager/configuration.yml
docker_registry: index.docker.io
docker_registry_ansible: registry.osism.tech
docker_registry_netbox: registry.osism.tech
inventory/group_vars/all/registries.yml
ceph_docker_registry: registry.osism.tech
dnsmasq_docker_registry: registry.osism.tech
docker_registry_ansible: registry.osism.tech
docker_registry_cephclient: registry.osism.tech
docker_registry_cgit: registry.osism.tech
docker_registry_dnsdist: registry.osism.tech
docker_registry_homer: registry.osism.tech
docker_registry_kolla: registry.osism.tech
docker_registry_netbox: registry.osism.tech
docker_registry_nexus: registry.osism.tech
docker_registry_openstackclient: registry.osism.tech

New service names for RadosGW in Ceph Reef

The naming scheme for the Ceph RadosGW service was changed from

rgw.$HOSTNAME.$INSTANCE

to

rgw.$ZONE.$HOSTNAME.$INSTANCE

Please adapt any client entries in ceph_conf_overrides in environments/ceph/configuration.yml accordingly. For example, if you previously had

environments/ceph/configuration.yml
ceph_conf_overrides:
"client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0":

change it to

environments/ceph/configuration.yml
ceph_conf_overrides:
"client.rgw.{{ rgw_zone }}.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0":

Removal of the community.general.yaml Ansible plugin

If community.general.yaml has been set for stdout_callback in ansible.cfg, this entry must be removed and replaced with result_format=yaml.

ERROR! [DEPRECATED]: community.general.yaml has been removed. The plugin
has been superseded by the option result_format=yaml in callback plugin
ansible.builtin.default from ansible-core 2.13 onwards. This feature was
removed from community.general in version 12.0.0. Please update your
playbooks.
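
A minimal ansible.cfg sketch of this change; the ini key for the default callback's result_format option is callback_result_format:

ansible.cfg
[defaults]
# previously: stdout_callback = community.general.yaml
callback_result_format = yaml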

TLS for ProxySQL is now enabled by default

If you are already using ProxySQL but without TLS, set the following parameter in environments/kolla/configuration.yml.

environments/kolla/configuration.yml
database_enable_tls_internal: "no"

Removal of the Apache2 Shibboleth module from the Keystone image

Due to repeated problems with the Apache2 Shibboleth module in conjunction with the Apache2 OIDC module in the Keystone container image, the Apache2 Shibboleth module has been removed. An overlay image is now available as osism/keystone-shib, which contains only the Apache2 Shibboleth module and can be used as needed.
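A hypothetical sketch of how the overlay image could be pulled in, assuming the kolla-ansible keystone_image override parameter; verify the exact image path and tag handling for your environment:

environments/kolla/configuration.yml
keystone_image: "registry.osism.tech/osism/keystone-shib"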

New parameters

  • Generate a password with pwgen 32 and add it as prometheus_haproxy_password to environments/kolla/secrets.yml
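
A short sketch of this step (pwgen 32 1 prints a single 32-character password; the value in secrets.yml below is a placeholder):

pwgen 32 1

environments/kolla/secrets.yml
prometheus_haproxy_password: "<generated password>"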

Ceph RGW Multisite support

Support for Ceph RGW Multisite deployments is available through a dedicated ceph-ansible-rgw-multisite container image. This image is provided for Ceph Quincy, Reef, and Squid releases and includes the necessary functionality for deploying and managing RGW Multisite configurations.

To use the RGW Multisite image, set the following parameter in environments/ceph/configuration.yml:

environments/ceph/configuration.yml
ceph_ansible_container_image: "registry.osism.tech/osism/ceph-ansible-rgw-multisite:CEPH_RELEASE"

Replace CEPH_RELEASE with your target Ceph version (quincy, reef, or squid).
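
For example, for a Ceph Reef deployment:

environments/ceph/configuration.yml
ceph_ansible_container_image: "registry.osism.tech/osism/ceph-ansible-rgw-multisite:reef"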

Removals

Kubernetes

  • ingress-nginx: The Kubernetes project has announced the retirement of ingress-nginx. The project will receive best-effort maintenance until March 2026, after which no new releases, bug fixes, or security updates will be provided. Users should migrate to the Gateway API or an alternative ingress controller.

  • Kubernetes Dashboard: The Kubernetes Dashboard has been archived by SIG UI and is no longer actively maintained. The recommended successor is Headlamp, which provides a modern web interface with plugin support and proper RBAC integration.

Deprecations

Deprecation of ceph-ansible

The deployment tool ceph-ansible is deprecated as of OSISM 10 and will not be supported in upcoming OSISM releases. While ceph-ansible is still maintained upstream, development activity has slowed significantly. The official recommendation is to migrate to cephadm.

Existing Ceph clusters deployed with ceph-ansible will continue to be fully usable in OSISM 10. This deprecation affects only the manageability of Ceph clusters via ceph-ansible; the day-to-day functionality of the clusters themselves is not impacted. However, upgrades, expansions, and other lifecycle operations on Ceph clusters via ceph-ansible will not be possible in future OSISM releases. A migration to another deployment tool such as cephadm will be required to perform such operations going forward.

We are actively preparing migration paths from ceph-ansible to cephadm. As each environment is unique, the exact migration approach will depend on the specific deployment scenario. OSISM customers will receive dedicated support for their migration. If you are planning to migrate, please contact us so we can assist you with your specific requirements. In the meantime, we recommend familiarizing yourself with the cephadm documentation.

References

Ceph 18.2 (Reef)

Ceph 18.2 release notes: https://docs.ceph.com/en/latest/releases/reef/

OpenStack 2025.1 (Epoxy)

OpenStack 2025.1 release notes: https://releases.openstack.org/epoxy/index.html

Release notes for each OpenStack service: