
Cloud in a Box - CiaB

💡 Cloud in a Box (CiaB) is a minimalistic installation of the latest stable OSISM release with only the services needed to make it work with Kubernetes. It is intended for use as a development system on bare metal or for use in edge environments.


At the moment, the secrets are stored in plain text in the osism/cloud-in-a-box repository and are not secure. Do not use it for publicly accessible systems. In the future, the secrets will be generated automatically.


Requirements

The system to be used as Cloud in a Box must fulfill these minimum requirements.

| Type of resource | Amount | Note |
| --- | --- | --- |
| CPU | at least 1 socket with 4 cores | More is better here. This is the minimum, where you can't use much payload (LBaaS, VMs). The use of Kubernetes with Cluster API is not possible with this minimum size. |
| RAM | at least 32 GByte | More is better here. In principle, it also works with 8 GByte, but then no payload (LBaaS, VMs) can be used. Kubernetes with Cluster API cannot be used then. |
| Storage | at least 1 TByte | Has to be available as /dev/sda or /dev/nvme0n1. Less than 1 TByte is also possible; the smaller the disk, the less storage is available for use in Ceph. |
| Network | at least 1 network interface (DHCP and internet access) | An optional 2nd network interface can be used for external connectivity. |
| USB stick | at least 2 GByte | Installation media for Cloud in a Box bootstrapping |


Types

There are two types of Cloud in a Box.

  1. The sandbox type is intended for developers and demonstrations. It is a full OSISM installation, which also includes Ceph and OpenSearch, for example. In the course of the installation, necessary images, networks, etc. are also created.

  2. The edge type is intended to be deployed as an appliance to provide an edge cloud on a single node. Compared to the sandbox, certain services are not provided there or are implemented differently. For example, OpenSearch is not deployed because the logs are delivered to a central location. The storage backend will also be implemented differently there in the future, instead of using Ceph.

General notes and limitations

  • Load balancing in Octavia is only possible via OVN. The Amphora driver is not supported. This is due to the limited resources we have. With the Amphora driver, a dedicated instance is started for each load balancer, each of which consumes 1 GByte of memory. This represents a very high consumption in relation to the usual sizes of the Cloud in a Box.
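The note above can be backed up with some quick arithmetic: with the Amphora driver, every load balancer costs one dedicated 1 GByte instance. This sketch uses a hypothetical count of 10 load balancers.

```shell
# Back-of-the-envelope numbers for the note above: with the Amphora driver,
# every load balancer would cost a dedicated 1 GByte amphora instance.
RAM_GB=32        # minimum recommended CiaB memory (see requirements)
AMPHORA_GB=1     # memory per amphora instance, per the note above
LBS=10           # hypothetical number of load balancers
echo "$((LBS * AMPHORA_GB)) of ${RAM_GB} GByte consumed by amphorae alone"
```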


Installation

The images currently download and install the latest state of the installation scripts; therefore, it is mandatory to update the installation media at least when the underlying Ubuntu operating system release changes. The installation of older releases is currently not supported.

  1. Download one of the Cloud in a Box images of type sandbox

  2. Use a tool like balenaEtcher or dd to create a bootable USB stick with the Cloud in a Box image.

  3. Boot from the USB stick. Make sure that the boot from USB is activated in the BIOS.


    When booting from this USB stick, all data on the hard disks will be destroyed without confirmation.

  4. The installation of the operating system (Ubuntu 22.04) will start and take a few minutes. After that the system will shutdown.

  5. The first start of the system

    • Remove the USB storage device (The USB stick is only needed again if the Cloud in a Box system is to be fully reinstalled.)
    • Connect the first network interface to an ethernet interface that provides access to the internet via DHCP configuration
    • Boot the system from the internal hard disk device
  6. The deployment will start. It takes roughly an hour, possibly longer depending on the hardware and internet connection, and the system will shut down when the deployment is finished.

  7. Start the system again. The system is now ready for use; by default, DHCP is tried on the first network device.

  8. Login via SSH. Use the user dragon with the password password. (You can obtain the IP address by inspecting the logs of your DHCP server or from the issue text on the virtual consoles of the system.)

    ssh dragon@IP_FROM_YOUR_SERVER

    CiaB Issue Text
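Step 2's image writing with dd can be sketched as follows. This is a safe stand-in: IMAGE and TARGET are dummy files, so the example can be run anywhere; on real hardware, TARGET would be the USB device (e.g. /dev/sdX) and the dd would need sudo.

```shell
# Hypothetical sketch of writing the installation image. IMAGE and TARGET are
# dummy files here; on real hardware TARGET would be the USB device, e.g. /dev/sdX.
IMAGE=ciab.img
TARGET=ciab-usb.img

# create a small dummy image standing in for the downloaded CiaB image
dd if=/dev/urandom of="$IMAGE" bs=1M count=4 2>/dev/null

# write the image to the target; conv=fsync flushes caches before dd returns
dd if="$IMAGE" of="$TARGET" bs=4M conv=fsync 2>/dev/null

# verify the write by comparing source and target byte for byte
cmp -s "$IMAGE" "$TARGET" && echo "write verified"
```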

Manual installation

  1. Follow the provisioning guide, skip the part about disk layout and do it this way:

    Disk layout

    1. Create a 1 GByte ext4 partition mounted at /boot
    2. Create an 8 GByte swap partition
    3. Create a 120 GByte unformatted partition
    4. Use the Create volume group (LVM) option to create a volume group called system with a size of 120 GByte on partition 4, which you just created
    5. Create a logical volume by selecting the Free Space option under the system LVM. This volume should be mounted at / and have a size of 100 GByte
    6. Create a partition with the size of the rest of the drive's space
    7. Create a new LVM volume group called osd-vg on partition 5 (it will be used for Ceph)
  2. After the Ubuntu installation, the system will be rebooted

  3. Log into the machine via console to get its IP address and then use SSH to connect to the machine

  4. Clone the osism/cloud-in-a-box repository into /opt/cloud-in-a-box

    sudo git clone https://github.com/osism/cloud-in-a-box /opt/cloud-in-a-box
  5. Disable conflicting services from the default Ubuntu installation

    sudo /opt/cloud-in-a-box/
  6. Install upgrades

    sudo apt update
    sudo apt upgrade
  7. Run the script with the required type (use of sandbox is recommended)

    sudo /opt/cloud-in-a-box/ sandbox
  8. Run the script with the same type as in the previous step to deploy services like Ceph and OpenStack

    sudo /opt/cloud-in-a-box/ sandbox
  9. Shutdown the system

    sudo shutdown -h now
  10. Start the system again. The system is now ready for use; by default, DHCP is tried on the first network device.

  11. Login via SSH. Use the user dragon with the password password. (You can obtain the IP address by inspecting the logs of your DHCP server or from the issue text on the virtual consoles of the system.)

    ssh dragon@IP_FROM_YOUR_SERVER

    CiaB Issue Text
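The disk layout from step 1 can be sanity-checked with some quick arithmetic. This sketch assumes a 1000 GByte drive and ignores partition and LVM overhead.

```shell
# Quick arithmetic for the disk layout in step 1 of the manual installation,
# assuming a 1000 GByte drive; partition/LVM overhead is ignored.
DISK_GB=1000
BOOT_GB=1        # ext4 partition mounted at /boot
SWAP_GB=8        # swap partition
SYSTEM_GB=120    # unformatted partition backing the "system" volume group
ROOT_LV_GB=100   # logical volume mounted at /

OSD_GB=$((DISK_GB - BOOT_GB - SWAP_GB - SYSTEM_GB))  # rest of the drive -> osd-vg (Ceph)
FREE_IN_SYSTEM_GB=$((SYSTEM_GB - ROOT_LV_GB))        # left free in the system VG

echo "osd-vg for Ceph: ${OSD_GB} GByte"
echo "free in system VG: ${FREE_IN_SYSTEM_GB} GByte"
```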


The scripts are not idempotent yet. If anything fails during the installation, you have to start over with a fresh installation.


Wireguard VPN service access

Copy the /home/dragon/wireguard-client.conf file from Cloud in a Box to your workstation. This is necessary for using the web endpoints on your workstation. Rename the wireguard config file to something like cloud-in-a-box.conf.

If you want to connect to the Cloud in a Box system from multiple clients, change the client IP address in the config file to be different on each client.

scp dragon@IP_FROM_YOUR_SERVER:/home/dragon/wireguard-client.conf $HOME/cloud-in-a-box.conf
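Adjusting the client IP for a second workstation can be sketched like this. The Address value shown is an assumption for illustration; check the actual value in your generated wireguard-client.conf.

```shell
# Hypothetical sketch of giving a second workstation its own tunnel address.
# The Address value 192.168.44.x is a made-up example, not the real default.
cat > cloud-in-a-box.conf <<'EOF'
[Interface]
Address = 192.168.44.2/32
PrivateKey = REDACTED
EOF

# second client: change the host part of the tunnel address
sed -i 's|^Address = 192.168.44.2/32|Address = 192.168.44.3/32|' cloud-in-a-box.conf
grep '^Address' cloud-in-a-box.conf
```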

Install WireGuard on your workstation if you have not done so before. For instructions, have a look at the documentation of the distribution you use; general information is available in the official WireGuard documentation.

Start the wireguard tunnel.

sudo wg-quick up $HOME/cloud-in-a-box.conf


If you want to access the services, choose the URL from the following list:

  • Horizon - admin project default
  • Horizon - test project test
  • OpenSearch Dashboards
  • PhpMyAdmin (with OSISM 7, root_shard_0 is used as the user name)
  • Skyline - admin project
  • Skyline - test project

Command-line interfaces

Login to Cloud in a Box as described in step 8 of the installation chapter.

  • Select one of the preconfigured environments:
    • system
    • admin
    • test
  • Set the environment by exporting the environment variable OS_CLOUD:
    export OS_CLOUD=admin
  • Use OpenStack CLI via the command openstack.
    openstack availability zone list
    openstack image list
    openstack server list # After installation there are no servers

Import of additional images

The OpenStack Image Manager is used to manage images. In the example, the Garden Linux image is imported.

osism manage images --cloud=admin --filter 'Garden Linux'

All available images can be found in the osism/openstack-image-manager repository.


Update

It is best to execute the commands within a screen session, as this takes some time. Please note that you cannot update the Ceph deployment at the moment. This will be enabled in the future.

osism apply configuration
docker system prune -a


Use of 2nd NIC for external network

In the default configuration, the Cloud in a Box is built in such a way that an internal VLAN101 is used as a simulated external network, which is made usable via the 1st network interface using masquerading. This makes it possible for instances running on the Cloud in a Box to reach the internet. The disadvantage is that the instances themselves can only be reached via floating IP addresses from the Cloud in a Box system itself or via the WireGuard tunnel. Especially in edge environments, however, one would usually like to have this differently: the instances should be directly accessible via the local network.

To make this work, first identify the name of a 2nd network card to be used.

dragon@manager:~$ sudo lshw -class network -short
H/W path Device Class Description
/0/100/2.2/0 eno7 network Ethernet Connection X552 10 GbE SFP+
/0/100/2.2/0.1 eno8 network Ethernet Connection X552 10 GbE SFP+
/0/100/1c/0 eno1 network I210 Gigabit Network Connection
/0/100/1c.1/0 eno2 network I210 Gigabit Network Connection
/0/100/1c.4/0 eno3 network I350 Gigabit Network Connection
/0/100/1c.4/0.1 eno4 network I350 Gigabit Network Connection
/0/100/1c.4/0.2 eno5 network I350 Gigabit Network Connection
/0/100/1c.4/0.3 eno6 network I350 Gigabit Network Connection

In the following, we use eno7. Activate the device manually with sudo ip link set up dev eno7, then check that a link is actually present.

dragon@manager:~$ ethtool eno7
Settings for eno7:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Link detected: yes
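The interface state can also be read from sysfs, which is handy when ethtool is not installed. In this sketch, IFACE is set to lo so the example runs anywhere; on the CiaB node, it would be the 2nd network interface, e.g. eno7.

```shell
# Sketch: read the interface state from sysfs instead of ethtool.
# IFACE=lo is a stand-in; on the CiaB node this would be e.g. eno7.
IFACE=lo
STATE=$(cat /sys/class/net/$IFACE/operstate)
echo "$IFACE: $STATE"
```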

Now this device is made permanently known in the network configuration. Select the MTU accordingly: for 1 GBit, rather 1500 than 9100. The 2nd network interface should be configured without an IP configuration (neither static nor DHCP).

  • /opt/configuration/inventory/group_vars/generic/network.yml
  • /opt/configuration/environments/manager/group_vars/manager.yml
dhcp4: true
mtu: 9100
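As a sketch, a plain netplan fragment for such a second interface without IP configuration could look like this. The interface name eno7 and the exact option layout are assumptions; the actual CiaB configuration files listed above may use different variables.

```yaml
# Hypothetical netplan fragment for a 2nd NIC without IP configuration
network:
  version: 2
  ethernets:
    eno7:
      dhcp4: false
      dhcp6: false
      mtu: 9100
```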

Then, this change is deployed and applied.

osism apply network
sudo netplan apply

Now the configuration for Neutron and OVN is prepared. network_workload_interface is extended with the 2nd network interface. The order is not random: first vlan101, then eno7. neutron_bridge_name is added.

  • /opt/configuration/inventory/group_vars/generic/network.yml
  • /opt/configuration/environments/manager/group_vars/manager.yml
network_workload_interface: "vlan101,eno7"
neutron_bridge_name: "br-ex,br-add"

Then, this change is deployed.

osism reconciler sync
osism apply openvswitch
osism apply ovn
osism apply neutron

Now segments and/or subnets can be configured. In this case, eno7 is configured as an untagged port on the remote side.

  • /opt/configuration/environments/openstack/playbook-additional-public-network.yml
- name: Create additional public network
  hosts: localhost
  connection: local

  tasks:
    - name: Create additional public network
      openstack.cloud.network:
        cloud: admin
        state: present
        name: public-add
        external: true
        provider_network_type: flat
        provider_physical_network: physnet2

    - name: Create additional public subnet
      openstack.cloud.subnet:
        cloud: admin
        state: present
        name: subnet-public-add
        network_name: public-add
        enable_dhcp: false

The additional public network can now be made known with osism apply -e openstack additional-public-network.

There is now a 2nd floating IP address pool with the name public-add available for use. If instances are to be started directly in this network, enable_dhcp: true must be set. In this case, it should be clarified in advance with the provider of the external network whether the use of DHCP is permitted there.

Running on a Virtual Machine

The Cloud in a Box has been tested to run on a virtual machine. However, the Cloud in a Box is mainly made for running on bare metal; the automated installation does not work, and other things may not work either.

Nested virtualization

You likely want to run virtual machines on top of your Cloud in a Box. The host machine has to support nested virtualization, and it must be enabled.

To enable nested virtualization, the CPU configuration of the VM has to be host-passthrough or host-model.

The linked guide can be used in other distributions as well.
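Whether nested virtualization is enabled on the host can be checked via sysfs, as in this sketch for Intel CPUs; for AMD, the parameter lives under /sys/module/kvm_amd instead.

```shell
# Sketch: check whether nested virtualization is enabled on an Intel host.
# For AMD the parameter is /sys/module/kvm_amd/parameters/nested.
NESTED_FILE=/sys/module/kvm_intel/parameters/nested
if [ -r "$NESTED_FILE" ]; then
    RESULT=$(cat "$NESTED_FILE")   # "Y" or "1" means nested virtualization is on
else
    RESULT="kvm_intel module not loaded"
fi
echo "$RESULT"
```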

Disk space saving

When using Cloud in a Box in a VM, you can utilize the qcow2 disk image or similar technology to save space. In that case, the base installation requires just around 70 GB instead of a full 1 TB. (The drive still needs to be made with a capacity of at least 1TB; however, the actual disk space usage is lower.)
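The space saving works because the disk image is thin-provisioned. The same effect can be demonstrated with a plain sparse file, which advertises 1 TByte but occupies almost no real space; qemu-img create -f qcow2 behaves similarly.

```shell
# Demonstration of thin provisioning with a sparse file: it advertises
# 1 TByte but occupies almost no real disk space.
truncate -s 1T ciab-disk.img
APPARENT=$(stat -c %s ciab-disk.img)   # apparent size in bytes (1 TByte)
USED_KB=$(du ciab-disk.img | cut -f1)  # blocks actually used (close to 0)
echo "apparent: $APPARENT bytes, used: $USED_KB KByte"
```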

Also, in case you want to experiment a bit more and "hack around" using the manual installation, you can take disk snapshots while the system is shut down after the Ubuntu installation, to speed up your progress.

If you use QEMU, you can use the following command to take snapshots.

sudo virsh snapshot-create-as --domain cib bootstrap "run of" --disk-only --diskspec sda,snapshot=external,file=/var/lib/libvirt/images/ub2022_cib_boostrap.qcow2 --atomic

QEMU guest agent

When running inside QEMU, it may be worth it to install the QEMU guest agent.

sudo apt -y install qemu-guest-agent
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent


Troubleshooting

Broken disk setup

If the installation fails with a disk setup error, your disk setup is broken. Use cfdisk and delete all partitions on the disk on which you want to install the Cloud in a Box image.

With lsblk you can verify that the partitions have been removed.


Internals

For the further development of the scripts and the mechanisms of the Cloud in a Box, you need to know the following.

  • The operating system is brought onto the node via an automatic Ubuntu installation that uses cloud-init
  • The installation starts a script which performs an initial clone of the osism/cloud-in-a-box repository and a checkout of the main branch. It also executes further helper scripts.
  • The installation persists the kernel parameters of the initial boot to the file /etc/.initial-kernel-commandline
  • The status and activities of the deployment are logged in /var/log/install-cloud-in-a-box.log. For proper colors use less -r. Search for OVERALL STATUS to find the result of the specific installation steps.
  • The branch and location of the osism/cloud-in-a-box repository can be overridden by setting the kernel parameters ciab_repo_url (a public repository address without authentication) and ciab_branch (the name of a branch; use only ASCII characters, -, and _).
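Parsing such kernel parameters can be sketched as follows. CMDLINE stands in for the contents of /proc/cmdline, and the repository URL and branch name are made-up example values.

```shell
# Sketch of how ciab_repo_url and ciab_branch could be read from the kernel
# command line. CMDLINE stands in for /proc/cmdline; the values are examples.
CMDLINE='BOOT_IMAGE=/vmlinuz ro ciab_repo_url=https://example.org/ciab.git ciab_branch=dev'
for word in $CMDLINE; do
    case "$word" in
        ciab_repo_url=*) repo="${word#ciab_repo_url=}" ;;
        ciab_branch=*)   branch="${word#ciab_branch=}" ;;
    esac
done
echo "repo=$repo branch=$branch"
```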