Configuration Repository

The configuration required for an OSISM managed cluster is stored in a single Git monorepo, the configuration repository.

Creating a new configuration repository

The initial content for this configuration repository is generated using the Cookiecutter.

Cookiecutter generates a simple initial configuration for your new cluster by prompting you for the basic details of the new cluster.

The configuration repository is not created on the future Manager node. It is created on a local workstation. If the local workstation cannot be used for this purpose, a dedicated virtual system can be used. For more information on this topic, refer to the Seed Deploy Guide.

Step 1: Preparation

First, decide where to store your Git repository. The content generated by the Cookiecutter in the output/configuration directory is committed to a new Git repository. By default, the configuration repository is assumed to be on GitHub, but GitLab or an internal Git service can be used as well.

The host and path of the Git repository are specified by the git_ parameters, which are requested in step 2. The git_ parameters do not specify the location of the Cookiecutter itself.

  [8/19] git_host (github.com):
  [9/19] git_port (22):
  [10/19] git_repository (YOUR_ORG/YOUR_NEW_CONFIGURATION_REPOSITORY): regiocloud/configuration
  [11/19] git_username (git):
  [12/19] git_version (main):

In this case, the generated configuration in the output/configuration directory is stored on GitHub in the regiocloud/configuration repository.
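Put together, the git_ parameters describe a single SSH remote. With the example values above, the resulting clone URL would look roughly like this (a sketch; the exact URL form depends on your Git service):

git clone ssh://git@github.com:22/regiocloud/configuration.git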

See the parameter reference for more details. The parameters listed there will be queried during the execution of Cookiecutter.

Step 2: Run Cookiecutter

  1. Create the directory output; it is used as the output volume. It is sufficient to create an empty directory here.

    mkdir output
  2. The Cookiecutter runs inside a container. Docker must be available on the system where the Cookiecutter is run; Podman should work as well.

    docker run \
    -e TARGET_UID="$(id -u)" \
    -e TARGET_GID="$(id -g)" \
    -v "$(pwd)/output:/output" \
    --rm -it quay.io/osism/cookiecutter
  3. A few parameters are requested. The parameters are documented in detail in the parameter reference.

    If you want to use the latest OSISM version, set the manager_version parameter accordingly. By default, it is set to the latest stable release.

    manager_version [7.0.4]: latest

    If the manager_version parameter is set to latest, it is also possible to set openstack_version and ceph_version explicitly.

    [1/19] with_ceph (1):
    [2/19] with_keycloak (0):
    [3/19] ceph_network (192.168.16.0/20):
    [4/19] ceph_version (quincy):
    [5/19] domain (osism.xyz):
    [6/19] fqdn_external (api.osism.xyz):
    [7/19] fqdn_internal (api-int.osism.xyz):
    [8/19] git_host (github.com):
    [9/19] git_port (22):
    [10/19] git_repository (YOUR_ORG/YOUR_NEW_CONFIGURATION_REPOSITORY):
    [11/19] git_username (git):
    [12/19] git_version (main):
    [13/19] ip_external (192.168.16.254):
    [14/19] ip_internal (192.168.16.9):
    [15/19] manager_version (7.0.4):
    [16/19] name_server (149.112.112.112):
    [17/19] ntp_server (de.pool.ntp.org):
    [18/19] openstack_version (2023.2):
    [19/19] project_name (configuration):
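After the container exits, the generated configuration can be inspected before it is uploaded. A quick plausibility check (a sketch; the exact listing depends on the chosen parameters):

ls -a output/configuration
# Expect the layout described in the section "Configuration repository layout"
# below, e.g. environments/, inventory/, Makefile, requirements.txt and a .gitignore.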

Step 3: Upload the new configuration to the remote git repository

Add the initial configuration state to the repository. How to add a deploy key on GitHub is documented in Managing deploy keys. Read permissions are sufficient.

$ git clone git@github.com:YOUR_ORG/YOUR_NEW_CONFIGURATION_REPOSITORY.git YOUR_NEW_CONFIGURATION_REPOSITORY
$ cp -r output/configuration/{*,.gitignore} YOUR_NEW_CONFIGURATION_REPOSITORY
$ cd YOUR_NEW_CONFIGURATION_REPOSITORY
$ git add -A .
$ git commit -m "Initial commit after bootstrap"
$ git push

The content is now committed to the Git repository that was created earlier in the process.

warning

The secrets directory is not stored in the Git repository. Its contents must therefore be kept in a trusted location.

The secrets directory contains an SSH key pair which is used as the deploy key to make the configuration repository available on the manager node later. Write access is not required. The public SSH key is stored in the secrets/id_rsa.configuration.pub file.
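On GitHub, the public key can also be added as a deploy key from the command line with the GitHub CLI (a sketch, assuming gh is installed and authenticated; deploy keys are read-only unless --allow-write is given):

gh repo deploy-key add secrets/id_rsa.configuration.pub \
  --repo YOUR_ORG/YOUR_NEW_CONFIGURATION_REPOSITORY \
  --title "OSISM configuration deploy key"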

Step 4: Post-processing of the generated configuration

The configuration repository that is initially created with the Cookiecutter is not immediately usable. For example, the inventory needs to be built. All other information can be found in the Configuration Guide. Use git to version all your configuration changes.

The following six areas must be adjusted after the initial creation of the configuration repository.

  1. Secrets
  2. Manager inventory
  3. Global inventory
  4. DNS servers
  5. NTP servers
  6. Certificates

Secrets

The password for Ansible Vault encrypted files is stored in secrets/vaultpass. Since the secrets directory is not added to the configuration repository, it is important to store this password in a password vault of your choice.

The password of the generated KeePass file is password. Change it if you use the KeePass file. If possible, use an existing password vault instead.
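The vault password can also be used directly with ansible-vault, independently of the make targets described later (a sketch; environments/secrets.yml is one of the encrypted files):

ansible-vault view --vault-password-file secrets/vaultpass environments/secrets.yml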

Manager inventory

The information required to perform the initial bootstrap of the manager node and the initial deployment of the manager service from the seed node is provided in the inventory of the manager environment.

In the Cookiecutter, a node node01 is defined as an example in the manager inventory as well as in the global inventory. The name of this node must be changed to match the name of the node used as manager in your own cluster.

Roles

  • Manager role

    The name of the node on which the manager service is to be deployed is added to the inventory group manager in the file environments/manager/hosts.

    Only the manager inventory group exists in environments/manager/hosts; there are no other groups.

    environments/manager/hosts
    [manager]
    node01

Host vars

  • Ansible section

    The IP address where the node can be reached via SSH from the manager node. If DHCP is used after the initial provisioning to assign an initial IP address to the nodes, the address assigned via DHCP is initially used here and later changed to the static IP address.

    environments/manager/host_vars/node01.yml
    ansible_host: 192.168.16.10
  • Generic section

    The network interface on which the internal communication of the cluster will take place. If the internal interface does not yet exist at the time the configuration is created, e.g. because it is a bond or VLAN interface that is only created by the static network configuration, it can already be used here.

    environments/manager/host_vars/node01.yml
    internal_interface: eno1
  • Network section

    The static and complete network configuration of the node. Further details on creating the network configuration can be found in the network configuration guide.

    environments/manager/host_vars/node01.yml
    network_ethernets:
      eno1:
        addresses:
          - "192.168.16.10/20"
        gateway4: "192.168.16.1"
        mtu: 1500
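Taken together, a minimal environments/manager/host_vars/node01.yml combining the three sections above might look like this (the values are the examples from above; adapt names and addresses to your environment):

ansible_host: 192.168.16.10

internal_interface: eno1

network_ethernets:
  eno1:
    addresses:
      - "192.168.16.10/20"
    gateway4: "192.168.16.1"
    mtu: 1500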

Global inventory

In the Cookiecutter, a node node01 is defined as an example in the manager inventory as well as in the global inventory. The name of this node must be changed to match the name of the node used as manager in your own cluster.

Roles

  • Generic role

    inventory/20-roles
    # The "all" group is not used in OSISM. Therefore it is important
    # that all nodes are explicitly listed here.
    [generic]
    node01
  • Manager role

    inventory/20-roles
    # Nodes that act as manager (sometimes called deployment node)
    # are included in this group.
    [manager]
    node01
  • Monitoring role

    inventory/20-roles
    # Nodes which are intended for monitoring services belong to
    # this group.
    [monitoring]
  • Control role

    inventory/20-roles
    # Nodes that serve as controllers of the environment, so things
    # like scheduler, API or database run there.
    [control]
  • Compute role

    inventory/20-roles
    # Virtual systems managed by OpenStack Nova are placed on
    # nodes in this group.
    [compute]
  • Network role

    inventory/20-roles
    # Network resources managed by OpenStack Neutron, such as
    # L3 routers, are placed on these nodes. This group has nothing
    # to do with the general network configuration.
    [network]
  • Ceph control role

    inventory/20-roles
    # Nodes that serve as controllers for Ceph, so things like the
    # Ceph Monitor service run here.
    [ceph-control]
  • Ceph resource role

    inventory/20-roles
    # The storage available in these systems is provided in the
    # form of OSDs for Ceph.
    [ceph-resource]
  • Ceph rgw role

    inventory/20-roles
    [ceph-rgw:children]
    ceph-control
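Taken together, for a small test environment in which a single node carries every role, the group assignments in inventory/20-roles might look like this (an illustrative sketch, not a recommended production layout):

[generic]
node01

[manager]
node01

[monitoring]
node01

[control]
node01

[compute]
node01

[network]
node01

[ceph-control]
node01

[ceph-resource]
node01

[ceph-rgw:children]
ceph-control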

Host vars

  • Ansible section

    inventory/host_vars/node01.yml
    # NOTE: Address where the node can be reached via SSH.
    ansible_host: 192.168.16.10
  • Generic section

    inventory/host_vars/node01.yml
    internal_interface: eno1

    # NOTE: The address of the internal interface.
    internal_address: 192.168.16.10
  • Netdata section

    inventory/host_vars/node01.yml
    netdata_host_type: client

    # NOTE: Uncomment this when this node should be a Netdata server.
    # netdata_host_type: server
  • Network section

    inventory/host_vars/node01.yml
    # NOTE: This is the initial management interface. Further interfaces can be added.
    # DOCS: https://osism.tech/docs/guides/configuration-guide/network

    network_ethernets:
      eno1:
        addresses:
          - "192.168.16.10/20"
        gateway4: "192.168.16.1"
        mtu: 1500
  • Kolla section

    inventory/host_vars/node01.yml
    network_interface: eno1

    # api_interface:
    # bifrost_network_interface:
    # dns_interface:
    # kolla_external_vip_interface:
    # migration_interface:
    # neutron_external_interface:
    # octavia_network_interface:
    # storage_interface:
    # tunnel_interface:
  • Ceph section

    inventory/host_vars/node01.yml
    # NOTE: Uncomment this when this node is a part of the Ceph cluster.
    # monitor_address:
    # radosgw_address:
    inventory/host_vars/node01.yml
    # NOTE: Uncomment this when this node should be an OSD node.
    # DOCS: https://osism.tech/docs/guides/configuration-guide/ceph#lvm-devices

    # ceph_osd_devices:
    #   sdb:
    #   sdc:
    #   sdd:
    #   sde:

DNS servers

environments/configuration.yml
resolvconf_nameserver:
  - 8.8.8.8
  - 9.9.9.9

NTP servers

environments/configuration.yml
chrony_servers:
  - 1.de.pool.ntp.org
  - 2.de.pool.ntp.org
  - 3.de.pool.ntp.org
  - 4.de.pool.ntp.org
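After changing the DNS or NTP servers, the corresponding plays can be applied again from the manager node once it is deployed (a sketch; resolvconf and chrony are assumed to be the play names for these roles):

osism apply resolvconf
osism apply chrony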

Certificates

The certificates must be created and added to the configuration repository in the files environments/kolla/certificates/haproxy.pem and environments/kolla/certificates/haproxy-internal.pem. Further information can be found in the Loadbalancer Configuration Guide.
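A haproxy.pem file is typically the concatenation of the server certificate, any intermediate certificates and the private key (a sketch with placeholder file names; the Loadbalancer Configuration Guide is authoritative):

cat server.crt intermediate.crt server.key > environments/kolla/certificates/haproxy.pem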

If no certificates are to be used, encryption must be deactivated. This is not recommended.

environments/kolla/configuration.yml
kolla_enable_tls_external: "no"
kolla_enable_tls_internal: "no"

Using latest

If you want to use the latest OSISM version, set the manager_version parameter accordingly. By default, it is set to the latest stable release.

manager_version [7.0.4]: latest

If the manager_version parameter is set to latest, it is also possible to set openstack_version and ceph_version explicitly.

Parameter reference

| Parameter | Description | Default |
|---|---|---|
| ceph_network | Address range for Ceph's network | 192.168.16.0/20 |
| ceph_version | The version of Ceph. When using a stable OSISM release (manager_version != latest), this value is ignored | quincy |
| domain | The domain used by hostnames | osism.xyz |
| fqdn_external | External API FQDN | api.osism.xyz |
| fqdn_internal | Internal API FQDN | api-int.osism.xyz |
| git_host | Address of the used Git server | github.com |
| git_port | Port of the used Git server | 22 |
| git_repository | Path to the Git configuration repository | YOUR_ORG/YOUR_CONFIGURATION_REPOSITORY |
| git_username | Username of the Git repository | git |
| git_version | Git branch name | main |
| ip_external | The external IP address of the API (resolves to fqdn_external) | 192.168.16.254 |
| ip_internal | The internal IP address of the API (resolves to fqdn_internal) | 192.168.16.9 |
| manager_version | The version of OSISM. An overview of available OSISM releases can be found here | 7.0.4 |
| name_server | Nameserver. Only one nameserver is set here because the query of multiple values in Cookiecutter is weird. Add more nameservers afterwards | 149.112.112.112 |
| ntp_server | NTP server. Only one NTP server is set here because the query of multiple values in Cookiecutter is weird. Add more NTP servers afterwards | de.pool.ntp.org |
| openstack_version | The version of OpenStack. When using a stable OSISM release (manager_version != latest), this value is ignored | 2023.2 |
| project_name | Name of the configuration repository directory | configuration |
| with_ceph | 1 to use Ceph, 0 to not use Ceph | 1 |
| with_keycloak | 1 to prepare the Keycloak integration, 0 to not prepare it | 0 |

Configuration repository layout

A configuration repository always has the same layout. This section describes the content of a configuration repository. The creation of a new configuration repository is documented in the section Creating a new configuration repository.

| Directory/File | Description |
|---|---|
| environments | |
| inventory | |
| netbox | Optional |
| requirements.txt | Lists the dependencies required to run Gilt |
| gilt.yml | Gilt is a Git layering tool. We use Gilt to maintain the image versions, Ansible configuration and scripts within the environments/manager directory |
| Makefile | |
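A sketch of how these files interact, assuming a local Python environment: the dependencies listed in requirements.txt provide the gilt command, and gilt overlay applies the layering defined in gilt.yml.

pip install -r requirements.txt
gilt overlay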

Synchronising the configuration repository

Once the manager has been deployed and the configuration repository has been initially transferred to the manager node, the configuration repository can be updated using osism apply configuration.

If local changes were made directly in the configuration repository on the manager node, these are overwritten.
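For example, after pushing changes to the Git remote, the copy on the manager node is brought up to date like this:

$ osism apply configuration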

Locks

It is possible to lock parts of the configuration repository or the complete configuration repository. Plays assigned to a locked part can then no longer be executed, which makes it possible to prevent the execution of plays in specific areas.

To lock an environment, a .lock file is created in the corresponding directory of the environment. For example, the file environments/kolla/.lock locks the Kolla environment.
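Locking and unlocking an environment is therefore an ordinary change to the configuration repository (a sketch; commit and push so that the lock reaches the manager node via the synchronisation described above):

touch environments/kolla/.lock
git add environments/kolla/.lock
git commit -m "Lock the kolla environment"
git push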

If you try to execute a play in the Kolla environment, an error message is displayed.

$ osism apply common
2024-06-02 10:52:44 | INFO | Task 2f25f55f-96ae-4a6c-aeb4-c1c01e716d91 (common) was prepared for execution.
2024-06-02 10:52:44 | INFO | It takes a moment until task 2f25f55f-96ae-4a6c-aeb4-c1c01e716d91 (common) has been started and output is visible here.
ERROR: The environment kolla is locked via the configuration repository.

To lock the complete configuration repository, the file environments/.lock is created.

If you try to execute a play, an error message is displayed.

$ osism apply facts
2024-06-02 10:53:08 | INFO | Task 6ac9a526-f88d-4756-bf46-2179636dfb42 (facts) was prepared for execution.
2024-06-02 10:53:08 | INFO | It takes a moment until task 6ac9a526-f88d-4756-bf46-2179636dfb42 (facts) has been started and output is visible here.
ERROR: The configuration repository is locked.

Working with encrypted files

To make it easier to work with encrypted files, the configuration repository provides several make targets for viewing and editing them.

  • Show secrets in all encrypted files.

    This opens a pager, e.g. less, and you can search with / for specific files, keys and passwords.

    make ansible_vault_show
  • Change or add secrets in an encrypted file with the editor set in $EDITOR.

    make ansible_vault_edit FILE=environments/secrets.yml EDITOR=nano
  • Re-encrypt all encrypted files with a new key.

    This creates a new secrets/vaultpass and a backup of the old one at secrets/vaultpass_backup_<timestamp>.

    make ansible_vault_rekey
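The make targets wrap plain ansible-vault calls. Doing the same by hand looks roughly like this (a sketch, using the vault password file secrets/vaultpass and the encrypted file environments/secrets.yml from above):

EDITOR=nano ansible-vault edit --vault-password-file secrets/vaultpass environments/secrets.yml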