

Image tags

Sometimes it is necessary to specify the image tag to be used for a specific service or a specific image of a service. All available images and tags are listed in the 002-images-kolla.yml file.

The image tags can be set in the environments/kolla/images.yml file.

  • Use a specific tag for all images of a service:

    barbican_tag: "2023.1"
  • Use a specific tag for a specific image of a service:

    barbican_worker_tag: "2023.1"


Public endpoints

The public endpoints used for the individual OpenStack services can be configured via the public_endpoint parameters. These are defined as follows.

Parameter                          Default value
aodh_public_endpoint               aodh_external_fqdn | kolla_url(public_protocol, aodh_api_public_port)
blazar_public_endpoint             blazar_external_fqdn | kolla_url(public_protocol, blazar_api_public_port, '/v1')
ceph_rgw_public_endpoint           ceph_rgw_external_fqdn | kolla_url(public_protocol, ceph_rgw_public_port, ceph_rgw_endpoint_path)
cinder_v3_public_endpoint          {{ cinder_public_base_endpoint }}/v3/%(tenant_id)s
cloudkitty_public_endpoint         cloudkitty_external_fqdn | kolla_url(public_protocol, cloudkitty_api_public_port)
cyborg_public_endpoint             cyborg_external_fqdn | kolla_url(public_protocol, cyborg_api_port, '/v2')
gnocchi_public_endpoint            gnocchi_external_fqdn | kolla_url(public_protocol, gnocchi_api_public_port)
heat_cfn_public_endpoint           {{ heat_cfn_public_base_endpoint }}/v1
heat_public_endpoint               heat_external_fqdn | kolla_url(public_protocol, heat_api_public_port, '/v1/%(tenant_id)s')
ironic_inspector_public_endpoint   ironic_inspector_external_fqdn | kolla_url(public_protocol, ironic_inspector_public_port)
magnum_public_endpoint             magnum_external_fqdn | kolla_url(public_protocol, magnum_api_public_port, '/v1')
manila_public_endpoint             {{ manila_public_base_endpoint }}/v1/%(tenant_id)s
manila_v2_public_endpoint          {{ manila_public_base_endpoint }}/v2
masakari_public_endpoint           masakari_external_fqdn | kolla_url(public_protocol, masakari_api_public_port)
mistral_public_endpoint            mistral_external_fqdn | kolla_url(public_protocol, mistral_api_public_port, '/v2')
nova_legacy_public_endpoint        {{ nova_public_base_endpoint }}/v2/%(tenant_id)s
nova_public_endpoint               {{ nova_public_base_endpoint }}/v2.1
placement_public_endpoint          placement_external_fqdn | kolla_url(public_protocol, placement_api_public_port)
tacker_public_endpoint             tacker_external_fqdn | kolla_url(public_protocol, tacker_server_public_port)
trove_public_endpoint              trove_external_fqdn | kolla_url(public_protocol, trove_api_public_port, '/v1.0/%(tenant_id)s')
venus_public_endpoint              venus_external_fqdn | kolla_url(public_protocol, venus_api_port)
watcher_public_endpoint            watcher_external_fqdn | kolla_url(public_protocol, watcher_api_public_port)
zun_public_endpoint                zun_external_fqdn | kolla_url(public_protocol, zun_api_public_port, '/v1/')

Some of the previous default values refer to a public_base_endpoint parameter. These are defined as follows.

Parameter                                Default value
cinder_public_base_endpoint              cinder_external_fqdn | kolla_url(public_protocol, cinder_api_public_port)
heat_cfn_public_base_endpoint            heat_cfn_external_fqdn | kolla_url(public_protocol, heat_api_cfn_public_port)
manila_public_base_endpoint              manila_external_fqdn | kolla_url(public_protocol, manila_api_public_port)
nova_public_base_endpoint                nova_external_fqdn | kolla_url(public_protocol, nova_api_public_port)
skyline_apiserver_public_base_endpoint   skyline_apiserver_external_fqdn | kolla_url(public_protocol, skyline_apiserver_public_port)
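
For orientation, here is a sketch of how such a default renders, using purely hypothetical values (api.example.com, https, port 8042) that are not taken from the documentation:

# Hypothetical values, for illustration only:
#   aodh_external_fqdn: api.example.com
#   public_protocol: https
#   aodh_api_public_port: 8042
# The kolla_url filter then renders the default to:
aodh_public_endpoint: https://api.example.com:8042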

Example for the use of name-based endpoints

DNS records pointing to the kolla_external_vip_address are created in advance.

Additional configuration parameters that overwrite the public endpoints are added to the environments/kolla/configuration.yml file. Services that are not used are removed; additional services that are used are added (see the tables above).
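
As a minimal sketch, assuming purely hypothetical hostnames that resolve to the kolla_external_vip_address, the overrides in environments/kolla/configuration.yml could look like this (only parameters from the tables above are used):

# Hypothetical name-based public endpoints; adjust to your own DNS records.
placement_public_endpoint: "https://placement.example.com"
nova_public_base_endpoint: "https://nova.example.com"
cinder_public_base_endpoint: "https://cinder.example.com"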


Since we bind the name_based_external_front frontend to the same ports as the horizon_external_front, the external Horizon frontend must be disabled. This is only possible as of OSISM 7.0.6.

haproxy_enable_horizon_external: false

Additional HAProxy configuration in haproxy/services.d/haproxy.cfg is required to map the DNS records to the correct backends. Here too, unused services are removed or additional services are added.

frontend name_based_external_front
mode http
http-request del-header X-Forwarded-Proto
option httplog
option forwardfor
http-request set-header X-Forwarded-Proto https if { ssl_fc }
bind {{ kolla_external_vip_address }}:80
bind {{ kolla_external_vip_address }}:443 ssl crt /etc/haproxy/certificates/haproxy.pem
default_backend horizon_back

acl hdr(host) -i
use_backend keystone_external_back if

acl hdr(host) -i
use_backend glance_api_external_back if

acl hdr(host) -i
use_backend neutron_server_external_back if

acl hdr(host) -i
use_backend placement_api_external_back if

acl hdr(host) -i
use_backend nova_api_external_back if

acl hdr(host) -i
use_backend nova_novncproxy_external_back if

acl hdr(host) -i
use_backend designate_api_external_back if

acl hdr(host) -i
use_backend cinder_api_external_back if

acl hdr(host) -i
use_backend octavia_api_external_back if

acl hdr(host) -i
use_backend swift_api_external_back if

acl hdr(host) -i
use_backend ironic_api_external_back if

Additional Nova configuration in nova.conf is required to use the URL for the NoVNC service.

novncproxy_base_url =

Network interfaces

Parameter                     Default value
neutron_external_interface    {{ network_interface }}
kolla_external_vip_interface  {{ network_interface }}
api_interface                 {{ network_interface }}
migration_interface           {{ api_interface }}
tunnel_interface              {{ network_interface }}
octavia_network_interface     {{ 'o-hm0' if octavia_network_type == 'tenant' else api_interface }}
dns_interface                 {{ network_interface }}
dpdk_tunnel_interface         {{ neutron_external_interface }}
ironic_http_interface         {{ api_interface }}
ironic_tftp_interface         {{ api_interface }}
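
A minimal sketch of overriding these defaults in environments/kolla/configuration.yml; the interface names are hypothetical and depend on the actual host hardware:

# Hypothetical NIC names, for illustration only.
network_interface: eth0
neutron_external_interface: eth1
tunnel_interface: eth2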

Customization of the service configurations


The following content is based on the kolla-ansible upstream documentation.

OSISM will generally look for files in environments/kolla/files/overlays/CONFIGFILE, environments/kolla/files/overlays/SERVICENAME/CONFIGFILE or environments/kolla/files/overlays/SERVICENAME/HOSTNAME/CONFIGFILE in the configuration repository. These locations sometimes vary and you should check the config task in the appropriate Ansible role for a full list of supported locations. For example, in the case of nova.conf the following locations are supported, assuming that you have services using nova.conf running on hosts called ctl1, ctl2 and ctl3:

  • environments/kolla/files/overlays/nova.conf
  • environments/kolla/files/overlays/nova/ctl1/nova.conf
  • environments/kolla/files/overlays/nova/ctl2/nova.conf
  • environments/kolla/files/overlays/nova/ctl3/nova.conf
  • environments/kolla/files/overlays/nova/nova-scheduler.conf

Using this mechanism, overrides can be configured per-project (Nova), per-project-service (Nova scheduler service) or per-project-service-on-specified-host (Nova services on ctl1).

Overriding an option is as simple as setting the option under the relevant section. For example, to override scheduler_max_attempts in the Nova scheduler service, the operator could create environments/kolla/files/overlays/nova/nova-scheduler.conf in the configuration repository with this content:

[DEFAULT]
scheduler_max_attempts = 100

If the operator wants to configure the initial disk, CPU and RAM allocation ratios on the compute node com1, the operator needs to create the file environments/kolla/files/overlays/nova/com1/nova.conf with this content:

[DEFAULT]
initial_cpu_allocation_ratio = 3.0
initial_ram_allocation_ratio = 1.0
initial_disk_allocation_ratio = 1.0

Note that the values shown here, with an initial_cpu_allocation_ratio of 3.0, match the requirements of the SCS-nV-* (moderate oversubscription) flavors. If you do not use SMT/hyperthreading, SCS would allow 5.0 here (for the V flavors).

This method of merging configuration sections is supported for all services using oslo.config, which includes the vast majority of OpenStack services, and in some cases for services using YAML configuration. Since the INI format is an informal standard, not all INI files can be merged in this way. In these cases OSISM supports overriding the entire config file.

Additional flexibility can be introduced by using Jinja conditionals in the config files. For example, you may create Nova cells which are homogeneous with respect to the hypervisor model. In each cell, you may wish to configure the hypervisors differently. The following override shows one way of setting the bandwidth_poll_interval option as a function of the cell:

[DEFAULT]
{% if 'cell0001' in group_names %}
bandwidth_poll_interval = 100
{% elif 'cell0002' in group_names %}
bandwidth_poll_interval = -1
{% else %}
bandwidth_poll_interval = 300
{% endif %}

An alternative to Jinja conditionals would be to define a variable for the bandwidth_poll_interval and set it according to your requirements in the inventory group or host vars:

[DEFAULT]
bandwidth_poll_interval = {{ bandwidth_poll_interval }}
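
A minimal sketch of the corresponding inventory variable, assuming a hypothetical group_vars file for the cell0001 group (path and value are illustrative only):

# e.g. inventory/group_vars/cell0001.yml (hypothetical location)
bandwidth_poll_interval: 100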

OSISM allows the operator to override configuration globally for all services. It will look for a file called environments/kolla/files/overlays/global.conf in the configuration repository.

For example, to modify the database connection pool size for all services, the operator needs to create environments/kolla/files/overlays/global.conf in the configuration repository with this content:

[database]
max_pool_size = 100

How does the configuration get into services?

Using the OpenSearch service as an example, the following steps show how the configuration for OpenSearch is created and gets into the container.

  • The task Copying over opensearch service config file merges the individual sources of the files.

    Copying over opensearch service config file task
    - name: Copying over opensearch service config file
      merge_yaml:
        sources:
          - "{{ role_path }}/templates/opensearch.yml.j2"
          - "{{ node_custom_config }}/opensearch.yml"
          - "{{ node_custom_config }}/opensearch/opensearch.yml"
          - "{{ node_custom_config }}/opensearch/{{ inventory_hostname }}/opensearch.yml"
        dest: "{{ node_config_directory }}/opensearch/opensearch.yml"
        mode: "0660"
      become: true
      when:
        - inventory_hostname in groups['opensearch']
        - opensearch_services['opensearch'].enabled | bool
      notify:
        - Restart opensearch container
  • As a basis a template opensearch.yml.j2 is used which is part of the OpenSearch service role.

    opensearch.yml.j2 template
    {% set num_nodes = groups['opensearch'] | length %}
    {% set recover_after_nodes = (num_nodes * 2 / 3) | round(0, 'floor') | int if num_nodes > 1 else 1 %}
    plugins.security.disabled: "true"

    network.host: "{{ 'api' | kolla_address | put_address_in_context('url') }}"
    network.publish_host: "{{ 'api' | kolla_address | put_address_in_context('url') }}"

    cluster.name: "{{ opensearch_cluster_name }}"
    cluster.initial_master_nodes: [{% for host in groups['opensearch'] %}"{{ 'api' | kolla_address(host) }}"{% if not loop.last %},{% endif %}{% endfor %}]
    node.master: true
    node.data: true
    discovery.seed_hosts: [{% for host in groups['opensearch'] %}"{{ 'api' | kolla_address(host) | put_address_in_context('url') }}"{% if not loop.last %},{% endif %}{% endfor %}]

    http.port: {{ opensearch_port }}
    gateway.expected_nodes: {{ num_nodes }}
    gateway.recover_after_time: "5m"
    gateway.recover_after_nodes: {{ recover_after_nodes }}
    path.data: "/var/lib/opensearch/data"
    path.logs: "/var/log/kolla/opensearch"
    indices.fielddata.cache.size: 40%
    action.auto_create_index: "true"
  • For OpenSearch, overlay files can additionally be stored in 3 places in the configuration repository.

    • environments/kolla/files/overlays/opensearch.yml
    • environments/kolla/files/overlays/opensearch/opensearch.yml
    • environments/kolla/files/overlays/opensearch/{{ inventory_hostname }}/opensearch.yml

    When merging the files, the last file found has the highest priority. For example, if the service role template opensearch.yml.j2 of the OpenSearch service sets node.master: true and you set node.master: false in environments/kolla/files/overlays/opensearch.yml, then node.master: false ends up in the finished opensearch.yml (see the sketch after this list).

  • After the merge the task Copying over opensearch service config file copies the content into the configuration directory /etc/kolla/opensearch of the service.

    Resulting opensearch.yml in /etc/kolla/opensearch (excerpt)
    action.auto_create_index: 'true'
    cluster.name: kolla_logging
    gateway.expected_nodes: 1
    gateway.recover_after_nodes: 1
    gateway.recover_after_time: 5m
    http.port: 9200
    indices.fielddata.cache.size: 40%
    node.data: true
    node.master: true
    path.data: /var/lib/opensearch/data
    path.logs: /var/log/kolla/opensearch
    plugins.security.disabled: 'true'
  • The configuration directory /etc/kolla/opensearch is mounted in each container of the OpenSearch service to /var/lib/kolla/config_files.

    Output of docker inspect opensearch
    "Mounts": [
        {
            "Type": "bind",
            "Source": "/etc/kolla/opensearch",
            "Destination": "/var/lib/kolla/config_files",
            "Mode": "rw",
            "RW": true,
            "Propagation": "rprivate"
        }
    ]
  • The entrypoint of a service container is always kolla_start. This script calls kolla_set_configs, which takes care of copying files from /var/lib/kolla/config_files to the right place inside the container. For this purpose, the container has a config.json in which the individual actions are configured.

    The file /var/lib/kolla/config_files/opensearch.yml is copied to /etc/opensearch/opensearch.yml.

    The permissions of /var/lib/opensearch and /var/log/kolla/opensearch are set accordingly.

    "command": "/usr/share/opensearch/bin/opensearch",
    "config_files": [
    "source": "/var/lib/kolla/config_files/opensearch.yml",
    "dest": "/etc/opensearch/opensearch.yml",
    "owner": "opensearch",
    "perm": "0600"
    "permissions": [
    "path": "/var/lib/opensearch",
    "owner": "opensearch:opensearch",
    "recurse": true
    "path": "/var/log/kolla/opensearch",
    "owner": "opensearch:opensearch",
    "recurse": true
  • The config.json of the service also defines the command that is executed once the preparations are finished. In the case of OpenSearch this is /usr/share/opensearch/bin/opensearch.

    "command": "/usr/share/opensearch/bin/opensearch",
    "config_files": [
    "source": "/var/lib/kolla/config_files/opensearch.yml",
    "dest": "/etc/opensearch/opensearch.yml",
    "owner": "opensearch",
    "perm": "0600"
    "permissions": [
    "path": "/var/lib/opensearch",
    "owner": "opensearch:opensearch",
    "recurse": true
    "path": "/var/log/kolla/opensearch",
    "owner": "opensearch:opensearch",
    "recurse": true

Number of service workers

The number of workers used for the individual services can generally be configured using two parameters.

openstack_service_workers: "{{ [ansible_facts.processor_vcpus, 5] | min }}"
openstack_service_rpc_workers: "{{ [ansible_facts.processor_vcpus, 3] | min }}"

The default for openstack_service_workers is set to 5 when using the cookiecutter for the initial creation of the configuration.

This value can be overwritten for individual services. The default for all parameters in the following table is {{ openstack_service_workers }}. For example, the parameter aodh_api_workers can be used to explicitly set the number of workers for the AODH API. After a change, the affected services must be reconfigured, in this example with osism apply -a reconfigure aodh.

These parameters are all set in environments/kolla/configuration.yml.
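
A minimal sketch of such overrides in environments/kolla/configuration.yml; the values are illustrative only:

# Illustrative worker settings; reconfigure the affected services afterwards,
# e.g. osism apply -a reconfigure aodh.
openstack_service_workers: 5
aodh_api_workers: 2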


Back-end TLS configuration