

With the OSISM Testbed, it is possible to run a full Sovereign Cloud Stack deployment on an existing OpenStack environment such as Cleura.

OSISM is the reference implementation for the Infrastructure as a Service (IaaS) layer in the Sovereign Cloud Stack (SCS) project. The OSISM Testbed is therefore used in the SCS project to test and work on the Infrastructure as a Service layer.

The OSISM Testbed is intended as a playground. Further services and integrations will be added over time. An increasing number of best practices and experiences from production deployments will be included here in the future, so the testbed will become more production-like over time. However, at no point does it claim to represent a production setup exactly.


Cloud access

The usual prerequisite is an account with one of the supported OpenStack cloud providers. Since the OSISM Testbed itself virtualizes systems, the OpenStack cluster should support nested virtualization.

Registration with the individual cloud providers is not covered by this guide. Please contact the respective cloud provider for this.

| Product | Provider | Profile name | Note |
| --- | --- | --- | --- |
| Fuga Cloud | FUGA | fuga | |
| pluscloud open | plusserver | pluscloudopen | |
| pluscloud SCS Test | plusserver | gx-scs | |
| REGIO.cloud | OSISM | regiocloud | fast boot from NVMe SSD backed volumes |
| Wavestack | noris network | wavestack | |

For each cloud provider listed in the table, a predefined profile is available in the terraform/environments directory. This profile contains the name of the public network, which flavors to use, etc.

Here is an example from one of the profiles:

flavor_manager            = "SCS-4V-16-50"
flavor_node               = "SCS-8V-32-50"
volume_type               = "ssd"
image                     = "Ubuntu 22.04"
image_node                = "Ubuntu 22.04"
public                    = "public"
availability_zone         = "nova"
volume_availability_zone  = "nova"
network_availability_zone = "nova"

Cloud resources

The OSISM Testbed requires at least the following project quota when using the default flavors:

| Amount | Resource | Note |
| --- | --- | --- |
| 4 | Instances | 28 vCPUs + 112 GByte RAM (3 nodes, 1 manager) |
| 9 | Volumes | 90 GByte volume storage |
| 1 | Floating IP | |
| 3 | Security groups | |
| 16 | Security group rules | |
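The instance numbers can be derived from the default flavors: SCS flavor names encode vCPUs, RAM (GByte) and root disk (GByte), so SCS-8V-32-50 means 8 vCPUs and 32 GByte RAM. A quick check:

```shell
# 3 nodes (SCS-8V-32-50: 8 vCPUs, 32 GByte RAM) + 1 manager (SCS-4V-16-50: 4 vCPUs, 16 GByte RAM)
echo "vCPUs: $((3 * 8 + 4))"          # 28
echo "RAM:   $((3 * 32 + 16)) GByte"  # 112
```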


  • make must be installed on the system
  • WireGuard or sshuttle must be installed on your system for VPN access
  • Python must be installed; the Python version must be at least 3.10, otherwise the current Ansible release cannot be used (details in the Ansible support matrix)
  • python3-venv must be installed for managing Python dependencies like Ansible
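The prerequisites above can be checked with a small script. This is a hypothetical helper, not part of the repository:

```shell
# Check the local prerequisites for the OSISM Testbed (hypothetical helper).
check() { command -v "$1" >/dev/null 2>&1 && echo "$1: ok" || echo "$1: MISSING"; }
check make
check git
# Either the WireGuard tools or sshuttle are needed for VPN access.
check wg
check sshuttle
# Ansible requires at least Python 3.10.
if command -v python3 >/dev/null 2>&1; then
    python3 -c 'import sys; print("python3 >= 3.10:", sys.version_info >= (3, 10))'
fi
```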


This section describes step by step how to deploy the OSISM Testbed.

  1. Request access from the administrator of the respective cloud or get access to an OpenStack cloud.

  2. Clone the osism/testbed repository.

    The repository can also be cloned to any other location.

    mkdir -p ~/src/
    git clone https://github.com/osism/testbed ~/src/testbed
    cd ~/src/testbed
  3. Configure your cloud access profile

    The access data for the cloud provider used is stored in terraform/clouds.yaml and (optionally) in terraform/secure.yaml (same structure, if you want to store credentials in a separate place).

    In the file terraform/clouds.yaml.sample you will find examples of typical setups. Settings that are identical for all users of a cloud can be defined centrally via the profiles in the file terraform/clouds-public.yaml. You can reference these settings by using the profile parameter in the cloud-specific definition in terraform/clouds.yaml.

    The user-specific settings of the clouds.yaml file are provided by the cloud provider; check the documentation of the cloud provider you are using, or their support, for details. REGIO.cloud is used as an example here: the cloud name in clouds.yaml and the environment name (the value of ENVIRONMENT) are both regiocloud in this case. It is important that the name of the cloud in clouds.yaml matches the name of the environment to be used; the names must be identical. It is currently not possible to name the cloud regiocloud-123 in clouds.yaml if the environment is regiocloud.

    If another cloud is used, replace regiocloud with the respective profile name from the table above.

    The use of application credentials is preferred. This way, details such as the username and project name, and sensitive information such as the password, do not have to be stored in the clouds.yaml file.

    The application credentials can be created in Horizon under Identity → Application Credentials. Use OSISM Testbed as the name and click Create Application Credential.

    profile: regiocloud
    application_credential_id: ID
    application_credential_secret: SECRET
    auth_type: "v3applicationcredential"

    If you want to make use of terraform/secure.yaml add your application credential secret there instead of terraform/clouds.yaml.

    application_credential_secret: SECRET
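    Putting the pieces together, a complete terraform/clouds.yaml could look like the following. This is a sketch: the ID and SECRET values are placeholders, and the nesting under auth follows the usual clouds.yaml layout of the openstacksdk:

```yaml
# terraform/clouds.yaml (sketch -- ID and SECRET are placeholders)
clouds:
  regiocloud:                # must match the ENVIRONMENT name
    profile: regiocloud      # references a profile in terraform/clouds-public.yaml
    auth_type: "v3applicationcredential"
    auth:
      application_credential_id: ID
      application_credential_secret: SECRET
```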
  4. Prepare the deployment.

    The versions of Ansible and OpenTofu are managed automatically and necessary dependencies are cloned.

    make prepare

    If any error occurs during preparation and you want to run the preparation again, it is important to run make wipe-local-install first. Otherwise the preparation will not be redone completely and necessary parts will be missing later on.

  5. Create the infrastructure with OpenTofu.

    make ENVIRONMENT=regiocloud create
  6. Deploy the OSISM manager and bootstrap all nodes.

    make ENVIRONMENT=regiocloud manager
  7. After the bootstrap, you can log in to the manager via SSH.

    make ENVIRONMENT=regiocloud login

    You can log in to the nodes of the cluster via the manager.

    osism console testbed-node-0
  8. Deploy all services.

    It is also possible to deploy the services step by step on the manager. To do this, first log in to the manager with make ENVIRONMENT=regiocloud login and then execute the deploy scripts one after the other. It is recommended to do this within a screen session.

    Deploying the services takes some time, depending on the available bandwidth, how the instances are sized, and so on. 90 to 120 minutes is not unusual when Ceph and OpenStack are fully deployed.

    To speed up the Ansible playbooks, ARA can be disabled with a script in /opt/configuration/scripts/. Run it before the deployment scripts; afterwards, no logs are available in the ARA web interface. To re-enable ARA, use the corresponding script in /opt/configuration/scripts/.

    There is also the option of pre-populating images with a script in /opt/configuration/scripts/ so that deployments are less lengthy. Run this script before the deployment scripts.


    Prepare OpenStack resources such as the public network, flavors and images by running the corresponding script in /opt/configuration/scripts/. Run it after the deployment scripts.


    If you only want to deploy the monitoring services with /opt/configuration/scripts/deploy/, a few dependencies must be deployed first. You can then use the monitoring services without having to install a complete OpenStack & Ceph environment.

    osism apply common
    osism apply loadbalancer
    osism apply opensearch
    osism apply mariadb
  9. If you want to verify the deployment with refstack, run /opt/configuration/scripts/. This step takes some time and is optional.

  10. The machine images required for the use of Kubernetes Cluster API and the amphora driver of OpenStack Octavia service are not provided by default to save resources on the OSISM Testbed and improve deployment time. These can be provisioned if required.

  11. If you want, you can create a test project with a test user after logging in. This also creates an instance with an attached volume, connected to a network with a router. This step is optional.

    osism apply --environment openstack test
  12. When the OSISM Testbed is no longer needed, it can be deleted.

    make ENVIRONMENT=regiocloud clean


The deployment is complete at this point.

Custom CA

The OSISM Testbed deployment currently uses hostnames in a real domain for which we provide DNS records matching the addresses used in the OSISM Testbed. Once you connect to your testbed via a direct link or WireGuard, you can therefore access hosts and services by their hostnames (e.g. via ssh).

We also provide a wildcard TLS certificate for this domain, signed by a custom CA. The same CA is used for every testbed; it is not regenerated, and there are no plans to change this for the next 10 years.

For these certificates to be recognized as valid locally, the CA certificate environments/kolla/certificates/ca/testbed.crt must be imported locally.
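On Debian/Ubuntu, importing the CA into the system trust store could look like this. This is a sketch: the target directory differs on other distributions, and browsers may use their own certificate store:

```shell
# Import the testbed CA into the system trust store (Debian/Ubuntu sketch).
CA=environments/kolla/certificates/ca/testbed.crt
if [ -f "$CA" ]; then
    sudo cp "$CA" /usr/local/share/ca-certificates/osism-testbed.crt
    sudo update-ca-certificates
else
    echo "$CA not found -- run this from the testbed repository root"
fi
```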

VPN access


Install WireGuard on your workstation if you have not done so already. For instructions, have a look at the documentation of your distribution; the WireGuard documentation can be found here.

Start the WireGuard tunnel. The tunnel stays up while the command runs in the foreground; press CTRL+C to close it. The make target also launches a browser tab with references to all services.

make vpn-wireguard ENVIRONMENT=regiocloud

If you want to connect to the OSISM Testbed from multiple clients, change the client IP address in the downloaded configuration file to be different on each client.

If you only want to download the Wireguard configuration, you can use the vpn-wireguard-config target. The configuration is then available in the file wg-testbed-regiocloud.conf, for example.

make vpn-wireguard-config ENVIRONMENT=regiocloud
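When several clients share one testbed, each downloaded configuration needs its own Address line. A hedged sketch (the address is a placeholder; check the Address line in your own file):

```shell
# Give a second client its own tunnel address (sketch; address is a placeholder).
conf=wg-testbed-regiocloud.conf
if [ -f "$conf" ]; then
    sed -i 's|^Address = .*|Address = 192.168.48.3/20|' "$conf"
else
    echo "$conf not found -- run 'make vpn-wireguard-config' first"
fi
```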


If you do not want to use Wireguard you can also work with sshuttle.

make vpn-sshuttle ENVIRONMENT=regiocloud
killall sshuttle

Static entries in /etc/hosts

If you are unable to access the following domains, you can add the following static entries to your local /etc/hosts. This may be necessary, for example, if you use Pi-hole and all answers from a public DNS that resolve to a non-public IP address are filtered.

# OSISM Testbed hosts
ara
cgit
flower
homer
netbox
testbed-manager
nexus
phpmyadmin
api-int
testbed-node-0
testbed-node-1
testbed-node-2
testbed-node-3
testbed-node-4
testbed-node-5
testbed-node-6
testbed-node-7
testbed-node-8
testbed-node-9
keycloak
api


All SSL-enabled services within the OSISM Testbed use certificates signed by the self-signed OSISM Testbed CA. Download the CA file and import it into your browser as a certification authority.

If you want to access the services, choose the URL from the following table.

| Service | URL | Username | Password | Note |
| --- | --- | --- | --- | --- |
| HAProxy (testbed-node-0) | | | | |
| HAProxy (testbed-node-1) | | | | |
| HAProxy (testbed-node-2) | | | | |
| Horizon (via Keycloak) | https://api.testbed.osism.xyz | alice | password | |
| Horizon (via Keystone) | https://api.testbed.osism.xyz | admin | password | domain: default |
| Horizon (via Keystone) | https://api.testbed.osism.xyz | test | test | domain: test |
| OpenSearch Dashboards | | | | |

Authentication with OIDC

Authentication with OpenID Connect (OIDC) is possible via Keycloak, which is automatically configured for the OIDC mechanism.

OpenStack web dashboard (Horizon) login via OIDC

To log in via OIDC, open the OpenStack Dashboard login page in your browser and select Authenticate via Keycloak. After being redirected to the Keycloak login page, log in with the credentials alice and password. You will then be redirected back to the Horizon dashboard, logged in as the user alice.

OpenStack web dashboard (Horizon) logout

Keep in mind that clicking Sign Out on the Horizon dashboard currently does not revoke your OIDC token, so any subsequent attempt to Authenticate via Keycloak will succeed without asking for credentials.

The expiration time of the Single Sign On tokens can be controlled on multiple levels in Keycloak.

  1. On the realm level under Realm Settings > Tokens. Assuming the keycloak_realm Ansible variable is set to the default osism, the configuration form is available in the Keycloak admin console of the testbed.

    Detailed information is available in the Keycloak Server Administrator Documentation Session and Token Timeouts section.

  2. On the client level within a realm: select the client (keystone) and go to Settings > Advanced Settings.

    It is recommended to keep the Access Token Lifespan at a relatively low value, given the trend towards blocking third-party cookies. For further information, see the Browsers with Blocked Third-Party Cookies section of the Keycloak documentation.

Usage of the OpenStack CLI

The environments/openstack folder contains the files needed for the openstack client:

cd environments/openstack
export OS_CLOUD=<the cloud environment> # e.g. admin
openstack floating ip list

OpenStack CLI operations with OpenID Connect password

Using the OpenStack CLI is also possible via OIDC. Assuming you provisioned the user alice with password password, you can perform a simple project list operation like this:

See the chapter "Usage of the OpenStack CLI" for basic openstack usage.

openstack \
--os-cacert /etc/ssl/certs/ca-certificates.crt \
--os-auth-url \
--os-auth-type v3oidcpassword \
--os-client-id keystone \
--os-client-secret 0056b89c-030f-486b-a6ad-f0fa398fa4ad \
--os-username alice \
--os-password password \
--os-identity-provider keycloak \
--os-protocol openid \
--os-identity-api-version 3 \
--os-discovery-endpoint \
project list
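Instead of passing all of these options on every call, the same settings can be placed in a clouds.yaml entry. This is a sketch: the option names follow keystoneauth's v3oidcpassword plugin, and AUTH_URL and DISCOVERY_ENDPOINT are placeholders for the testbed endpoints used in the command above:

```yaml
# clouds.yaml entry roughly equivalent to the command above (sketch).
clouds:
  testbed-oidc:
    auth_type: v3oidcpassword
    identity_api_version: 3
    auth:
      auth_url: AUTH_URL
      discovery_endpoint: DISCOVERY_ENDPOINT
      identity_provider: keycloak
      protocol: openid
      client_id: keystone
      client_secret: 0056b89c-030f-486b-a6ad-f0fa398fa4ad
      username: alice
      password: password
```

With this entry, OS_CLOUD=testbed-oidc openstack project list should behave like the long command above.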

OpenStack CLI token issue with OpenID Connect

It is also possible to exchange your username/password for a token, for further use with the CLI. The token issue subcommand normally prints a table in which the id row contains the token; with -c id -f value, only the token value is printed:

See the chapter "Usage of the OpenStack CLI" for basic openstack usage.

openstack \
--os-cacert /etc/ssl/certs/ca-certificates.crt \
--os-auth-url \
--os-auth-type v3oidcpassword \
--os-client-id keystone \
--os-client-secret 0056b89c-030f-486b-a6ad-f0fa398fa4ad \
--os-username alice \
--os-password password \
--os-identity-provider keycloak \
--os-protocol openid \
--os-identity-api-version 3 \
--os-discovery-endpoint \
--os-openid-scope "openid profile email" \
token issue \
-c id \
-f value


  • TODO: OpenStack CLI operations with token
  • TODO: OpenStack CLI token revoke

Advanced Usage

External API

It is possible to provide the OpenStack APIs and the OpenStack Dashboard via the manager's public IP address. This is not enabled by default, with the exception of the OTC profile. To provide the OpenStack APIs and the OpenStack dashboard via the public IP address of the manager, the following changes are necessary in the terraform/environments/regiocloud.tfvars file. If a different cloud is used, change the profile of that cloud accordingly.

  1. Add the customisation external_api. This customisation makes sure that the required security group rules are created for the various OpenStack APIs and the OpenStack dashboard.

    # customisation:external_api
  2. Set parameter external_api to true. This makes sure that all necessary changes are made in the configuration repository when the Manager service is deployed. It is correct that this is added as a comment.

    external_api = true
  3. After the deployment of the manager service and the OpenStack services, the OpenStack APIs and the OpenStack dashboard can be reached via a DNS name. A wildcard DNS service is used for the DNS record. Run the following two commands on the manager node to get the DNS record.

    $ source /opt/
    $ echo "api-${MANAGER_PUBLIC_IP_ADDRESS//./-}"
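The shell substitution in the echo command simply replaces the dots of the manager's public IP address with dashes. For example (203.0.113.10 is a documentation example address):

```shell
# Bash parameter expansion: replace every "." with "-" in the address.
MANAGER_PUBLIC_IP_ADDRESS=203.0.113.10
echo "api-${MANAGER_PUBLIC_IP_ADDRESS//./-}"
# prints: api-203-0-113-10
```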

Change versions

  1. Go to /opt/configuration on testbed-manager
  2. Run ./scripts/ 2023.2 to set the OpenStack version to 2023.2
  3. Run ./scripts/ reef to set the Ceph version to reef
  4. Run osism update manager to update the Manager service

Deploy services

| Script | Note |
| --- | --- |
| /opt/configuration/scripts/deploy/100-ceph-services-basic.sh | alternative to ceph |

Upgrade services

| Script | Note |
| --- | --- |
| /opt/configuration/scripts/upgrade/100-rook-services.sh | alternative to ceph |

Ceph via Rook (technical preview)

Please have a look at Deploy Guide - Services - Rook and Configuration Guide - Rook for details on how to configure Rook.

To deploy this in the testbed, you can use an environment variable in your make target.

make CEPH_STACK=rook manager
make CEPH_STACK=rook ceph

This will make sure that CEPH_STACK=rook is set in /opt/ and later used by the deployment scripts.



Ansible errors

Ansible errors that have something to do with undefined variables (e.g. AnsibleUndefined) are most likely due to cached facts that are no longer valid. The facts can be updated by running osism apply facts.

Unsupported locale setting

$ make prepare
ansible-playbook -i localhost, ansible/check-local-versions.yml
ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
make: *** [prepare] Error 1

To solve the problem, add the following as the first line of the Makefile.

export LC_ALL=en_US.UTF-8

To find out which locale settings are used on the system, printenv can be used.

$ printenv | grep -iE 'lang|locale'
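As an alternative to editing the Makefile, a UTF-8 locale can be exported in the current shell before running make. This is a sketch assuming the C.UTF-8 locale is available, which it is on most modern Linux distributions:

```shell
# Export a UTF-8 locale for the current shell session only.
export LC_ALL=C.UTF-8
echo "LC_ALL=$LC_ALL"
# prints: LC_ALL=C.UTF-8
```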



This section describes how to configure and customise the OSISM Testbed.


The following defaults are set for the OpenTofu variables:

| Parameter | Default | Note |
| --- | --- | --- |
| dns_nameservers | ["", ""] | |
| image | Ubuntu 22.04 | Only Ubuntu 22.04 is currently supported |
| image_node | Ubuntu 22.04 | Only Ubuntu 22.04 is currently supported |






  • The configuration is intentionally kept quite static. Please do not create PRs to make the configuration more flexible or dynamic.
  • The OSISM documentation uses hostnames, examples, addresses etc. from the OSISM Testbed.
  • The third volume (/dev/sdd) is not enabled for Ceph by default. This allows the scaling of Ceph to be tested.
  • The manager is used as a pull-through cache for Docker images and Ubuntu packages. This reduces the amount of traffic consumed.

Supported releases

The following stable Ceph and OpenStack releases are supported.

The deployment of Ceph is based on ceph-ansible.

  • Ceph Quincy (default)
  • Ceph Reef

The deployment of OpenStack is based on kolla-ansible.

  • OpenStack 2023.1
  • OpenStack 2023.2 (default)
  • OpenStack 2024.1

The deployment of Kubernetes is based on k3s-ansible.

  • Kubernetes v1.29 (default)

Included services

The following services can currently be used with the OSISM Testbed without further adjustments.


  • Ceph
  • Cluster API Management Cluster
  • Fluentd
  • Gnocchi
  • Grafana
  • Haproxy
  • Influxdb
  • Keepalived
  • Keycloak
  • Kubernetes
  • Mariadb
  • Memcached
  • Netbox
  • Netdata
  • Opensearch
  • Openvswitch
  • Patchman
  • Prometheus exporters
  • Rabbitmq
  • Redis


  • Barbican
  • Ceilometer
  • Cinder
  • Designate
  • Glance
  • Heat
  • Horizon
  • Ironic
  • Keystone
  • Magnum
  • Manila
  • Neutron
  • Nova (with Libvirt/KVM)
  • Octavia
  • Senlin
  • Skyline

Makefile reference

$ make help

make <target>
help Display this help.
clean Destroy infrastructure with OpenTofu.
wipe-local-install Wipe the software dependencies in `venv`.
create Create required infrastructure with OpenTofu.
login Log in on the manager.
vpn-wireguard Establish a wireguard vpn tunnel.
vpn-sshuttle Establish a sshuttle vpn tunnel.
bootstrap Bootstrap everything.
manager Deploy only the manager service.
identity Deploy only identity services.
ceph Deploy only ceph services.
deploy Deploy everything and then check it.
prepare Run local preparations.
deps Install software preconditions to `venv`.

$ make <TAB> <TAB>

CI jobs

You can inspect the results of the daily Zuul jobs.