...

Placement

  1. Add the following key to hiera:
    1. placement::policy::purge_config: true
  2. Install the first node, either by reinstalling it with the xena modules/tags, or by following these steps:
    1. Run puppet with xena modules/tags
    2. Run systemctl stop puppet apache2
    3. Run apt-get purge placement-api placement-common python3-placement && apt-get autoremove && apt-get dist-upgrade
    4. Run puppet again
  3. Run placement-manage db sync; placement-manage db online_data_migrations on the new node.
  4. Upgrade the rest of the nodes, skipping step 3 (the database migrations only need to run once).
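
Put together, steps 1 to 3 for the first placement node might look roughly like the sketch below. The hiera change is shown as a comment since the exact file depends on the hierarchy, and puppet agent -t is just an assumption for however puppet is normally triggered here.

    # In hiera, before touching the node:
    #   placement::policy::purge_config: true
    puppet agent -t                     # with the xena modules/tags
    systemctl stop puppet apache2
    apt-get purge placement-api placement-common python3-placement
    apt-get autoremove && apt-get dist-upgrade
    puppet agent -t
    # Database migrations, on this first node only:
    placement-manage db sync
    placement-manage db online_data_migrations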

Nova

To upgrade nova without any downtime, follow this procedure:

...

  1. Run puppet with the xena modules/tags
  2. Run apt dist-upgrade && apt-get autoremove
  3. Run puppet and restart services
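
On each nova controller node this boils down to something like the following sketch. The nova service names in the last line are an assumption and depend on the node's role, and puppet agent -t stands in for however puppet is normally run here.

    puppet agent -t                     # with the xena modules/tags
    apt dist-upgrade && apt-get autoremove
    puppet agent -t
    # Restart the nova services on this node, e.g. (adjust to the node's role):
    systemctl restart nova-api nova-scheduler nova-conductor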

Heat

The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost. I want to avoid that, so these are the cold upgrade steps instead.

  1. Set heat::api::enabled: false and heat::engine::enabled: false and heat::api_cfn::enabled: false in hiera to stop all services
  2. Do one of:
    1. Run puppet with the xena modules/tags, then run apt-get update && apt-get dist-upgrade && apt-get autoremove
    2. Reinstall the nodes with xena modules/tags
  3. Run heat-manage db_sync on one of the api-nodes.
  4. Remove the hiera keys that disabled the services and re-run puppet
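
A condensed sketch of the cold heat upgrade, with the hiera changes shown as comments since the exact files depend on the hierarchy, and puppet agent -t as an assumption for how puppet is run:

    # In hiera, then run puppet so the services stop:
    #   heat::api::enabled: false
    #   heat::api_cfn::enabled: false
    #   heat::engine::enabled: false
    # Upgrade in place (or reinstall the nodes with the xena modules/tags instead):
    puppet agent -t                     # with the xena modules/tags
    apt-get update && apt-get dist-upgrade && apt-get autoremove
    # On one of the api-nodes only:
    heat-manage db_sync
    # Finally, remove the hiera keys again and re-run puppet to start the services.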

Barbican

Barbican must be stopped during the upgrade, so the upgrade can be performed on all barbican hosts at the same time. It might be a good idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all barbican-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the barbican hosts:
    1. barbican::worker::enabled: false

    2. apache::service_ensure: 'stopped'

  2. Run puppet with the xena modules/tags

  3. Run apt dist-upgrade && apt-get autoremove

  4. Run barbican-db-manage upgrade

  5. Re-start barbican services by removing the keys added in step 1 and re-run puppet.
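
The same procedure as a sketch for one barbican host. The node-specific hiera keys are shown as comments, and puppet agent -t is an assumption for however puppet is normally run:

    # Node-specific hiera, then run puppet so the services stop:
    #   barbican::worker::enabled: false
    #   apache::service_ensure: 'stopped'
    puppet agent -t                     # with the xena modules/tags
    apt dist-upgrade && apt-get autoremove
    barbican-db-manage upgrade
    # Remove the hiera keys again and re-run puppet to start the services.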

Magnum

Magnum must be stopped during the upgrade, so the upgrade can be performed on all magnum-hosts at the same time. It might be a good idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all magnum-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the magnum hosts:
    1. magnum::conductor::enabled: false

    2. apache::service_ensure: 'stopped'

  2. Run puppet with the xena modules/tags

  3. Run dnf upgrade

  4. Run su -s /bin/sh -c "magnum-db-manage upgrade" magnum

  5. Re-start magnum services by removing the keys added in step 1 and re-run puppet.

  6. Check if a new Fedora CoreOS image is required, and if new public cluster templates should be deployed, e.g. to support a newer k8s version
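
And the same for a magnum host, as a sketch (hiera keys shown as comments, puppet agent -t assumed):

    # Node-specific hiera, then run puppet so the services stop:
    #   magnum::conductor::enabled: false
    #   apache::service_ensure: 'stopped'
    puppet agent -t                     # with the xena modules/tags
    dnf upgrade
    su -s /bin/sh -c "magnum-db-manage upgrade" magnum
    # Remove the hiera keys again, re-run puppet to start the services,
    # then check whether a new Fedora CoreOS image or new cluster templates are needed.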

Octavia

Octavia must be stopped during the upgrade, so the upgrade can be performed on all octavia-hosts at the same time. It might be a good idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all octavia-services by adding the following keys to hiera, and then make sure to run puppet on the octavia hosts:
    1. octavia::housekeeping::enabled: false

    2. octavia::health_manager::enabled: false

    3. octavia::api::enabled: false

    4. octavia::worker::enabled: false

  2. Do one of:

    1. Reinstall the node with xena modules/tags
    2. Run puppet with the xena modules/tags, run apt-get dist-upgrade && apt-get autoremove, then run puppet again

  3. Run octavia-db-manage upgrade head

  4. Re-start octavia services by removing the keys added in step 1 and re-run puppet.

  5. Build a xena-based octavia image, upload it to glance, and tag it so that octavia starts replacing the amphorae.
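
A sketch of the in-place variant for an octavia host (hiera keys shown as comments, puppet agent -t assumed):

    # In hiera, then run puppet so the services stop:
    #   octavia::housekeeping::enabled: false
    #   octavia::health_manager::enabled: false
    #   octavia::api::enabled: false
    #   octavia::worker::enabled: false
    puppet agent -t                     # with the xena modules/tags
    apt-get dist-upgrade && apt-get autoremove
    puppet agent -t
    octavia-db-manage upgrade head
    # Remove the hiera keys again and re-run puppet to start the services,
    # then build, upload and tag the xena-based amphora image.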

Horizon

  1. Run puppet with the xena modules/tags
  2. Run dnf upgrade
  3. Yes, this is weird: log in to all memcached servers and run systemctl restart memcached
    1. This is only necessary when upgrading the first horizon server
  4. Run puppet again
  5. Restart httpd
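
For a horizon server, the whole sequence is roughly the sketch below; puppet agent -t is an assumption for how puppet is triggered:

    puppet agent -t                     # with the xena modules/tags
    dnf upgrade
    # Only when upgrading the first horizon server: on every memcached server, run
    #   systemctl restart memcached
    puppet agent -t
    systemctl restart httpd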

Compute-nodes

When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:

  1. Do one of:
    1. Reinstall the node with xena modules/tags
    2. Run puppet with the xena modules/tags, then run apt dist-upgrade && apt-get autoremove
  2. Reboot the compute-node
    1. When it comes back up, check that the storage interface is up. If it isn't, run puppet manually to fix it.
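
The in-place variant for a compute node, as a sketch (puppet agent -t assumed):

    puppet agent -t                     # with the xena modules/tags
    apt dist-upgrade && apt-get autoremove
    reboot
    # After the reboot, check that the storage interface is up;
    # if it is not, a manual puppet run should fix it:
    puppet agent -t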

GPU-nodes

  1. Copy the vgpu-mapping key in hiera:
    1. Copy: nova::compute::vgpu::vgpu_types_device_addresses_mapping
    2. To: nova::compute::mdev::mdev_types_device_addresses_mapping
  2. Run puppet with the xena modules/tags
  3. Run apt dist-upgrade && apt autoremove
  4. Run puppet again
  5. Restart openstack services and openvswitch-services
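
As a sketch for a GPU node; the service names in the last step are an assumption and depend on what runs on the node, and puppet agent -t stands in for however puppet is normally run:

    # In hiera, copy the mapping to the new key name (same value as before):
    #   nova::compute::vgpu::vgpu_types_device_addresses_mapping: ...
    #   nova::compute::mdev::mdev_types_device_addresses_mapping: ...
    puppet agent -t                     # with the xena modules/tags
    apt dist-upgrade && apt autoremove
    puppet agent -t
    # Restart the openstack and openvswitch services, e.g.:
    systemctl restart nova-compute openvswitch-switch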

Finalizing

  • Run nova-manage db online_data_migrations on a nova API node. Ensure that it reports that nothing more needs to be done.
  • Rotate octavia images.
  • Update hiera with nova::upgrade_level_compute: '6.0'
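
The finalizing steps on the nova side, as a sketch:

    # On a nova API node; re-run until it reports that nothing more needs to be done:
    nova-manage db online_data_migrations
    # And in hiera, followed by a puppet run to roll it out:
    #   nova::upgrade_level_compute: '6.0'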