
This article summarizes the steps required to upgrade from the Yoga release to the Zed release of OpenStack.

Prerequisites:

  • This document expects that your cloud is deployed with a recent yoga tag of the ntnuopenstack repository.
  • You have a recent mysql backup in case things go south.
  • If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet run to apply it:
    • nova::upgrade_level_compute: '6.0'
    • When the upgrade is finished, the key should be updated to '6.1'
      • These version numbers can be correlated to release names in the file /usr/lib/python3/dist-packages/nova/compute/rpcapi.py
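
A quick way to check this mapping on any nova host (a sketch; it assumes the aliases live in the VERSION_ALIASES dict, as in recent nova releases):

  # print the compute RPC version aliases for the two releases involved
  grep -E "'(yoga|zed)':" /usr/lib/python3/dist-packages/nova/compute/rpcapi.py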

The recommended order to upgrade the services is listed below:

Keystone

This is the zero-downtime approach.

Before you begin

  • Log in to a mysql node, start the mysql CLI, and run set global log_bin_trust_function_creators=1;
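
Non-interactively, this could look like the following (a sketch; adjust the user and credentials to your environment):

  mysql -u root -p -e "SET GLOBAL log_bin_trust_function_creators=1;"
  # verify that the setting took effect:
  mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'log_bin_trust_function_creators';"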

Upgrade-steps (start with a single node):

  1. Set apache::service_ensure: 'stopped' in hiera for the node that you are upgrading
  2. Run puppet with the zed modules/tags, run apt-get dist-upgrade, and run puppet again
  3. Run keystone-manage doctor and ensure nothing is wrong
  4. Run keystone-manage db_sync --expand
    1. Returns nothing
  5. At this point, you may restart apache2 on this node
    1. Remove the apache::service_ensure: 'stopped' key previously set in hiera.
  6. Upgrade keystone on the other nodes, one at a time
    1. Basically, run steps 1, 2 and 5 on the other nodes
  7. When all nodes are upgraded, perform the final DB sync
    1. keystone-manage db_sync --contract
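
Condensed, the whole sequence looks roughly like this (a sketch; it assumes puppet agent --test is how a puppet run is triggered in your environment):

  # on the first node, with apache::service_ensure: 'stopped' set in hiera:
  puppet agent --test                # with the zed modules/tags
  apt-get dist-upgrade
  puppet agent --test
  keystone-manage doctor             # should report no issues
  keystone-manage db_sync --expand   # prints nothing on success
  # remove the hiera key and re-run puppet to start apache2 again;
  # repeat the above on the remaining nodes, one at a time, and finally:
  keystone-manage db_sync --contract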

Glance

To upgrade glance without any downtime, follow this procedure:

  1. Select which glance-server to upgrade first.
    1. In the node-specific hiera for this host you should set: 
      1. glance::api::enabled: false
      2. apache::service_ensure: 'stopped'
      3. apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
      4. apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  2. Run puppet with the zed modules/tags, run apt-get dist-upgrade, and run puppet again
  3. Remove the glance::api::enabled: false and apache::service_ensure: 'stopped' keys from the node-specific hiera, and run puppet again. This re-starts the glance API server on this host.
    1. Test that this api-server works.
  4. Upgrade the rest of the glance hosts (i.e. step 2 for each of the remaining glance hosts)
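
Once a host is upgraded, a quick way to test it (a sketch; <host> is a placeholder for the glance node, and openstack image list assumes admin credentials are sourced):

  # unauthenticated version-document check directly against the node:
  curl -s http://<host>:9292/ | python3 -m json.tool
  # end-to-end check through the service catalog:
  openstack image list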

Cinder

To upgrade cinder without any downtime, follow this procedure:

  1. Add the following three lines to the node-file of the first node you would like to upgrade:
    1. apache::service_ensure: 'stopped'
    2. cinder::scheduler::enabled: false
    3. cinder::volume::enabled: false
  2. Run puppet with the zed modules/tags, run apt-get dist-upgrade, and run puppet again
  3. Run cinder-manage db sync && cinder-manage db online_data_migrations
  4. Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
  5. Perform step 2 for the rest of the cinder nodes
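
After each node (and especially after the last one), verify that all services check in as healthy (a sketch, assuming admin credentials are sourced):

  # every cinder-scheduler and cinder-volume should report State 'up'
  openstack volume service list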

Neutron

API-nodes

  1. Add the following to the node-specific hiera-file for neutronapi-hosts:
    1. apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
    2. apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  2. Pick the first node, and add the following to the node's hiera-file:
    1. apache::service_ensure: 'stopped'
    2. neutron::server::enabled: false
  3. Run puppet with the zed modules/tags, then run apt-get autoremove && apt-get dist-upgrade
  4. Run neutron-db-manage upgrade --expand
  5. Remove the lines stopping neutron-server.service and apache2 in the hiera node-file, and re-run puppet
  6. Upgrade the rest of the API-nodes (repeat step 3 and then reboot)
  7. Stop all neutron-server and apache processes for a moment, and run:
    1. neutron-db-manage upgrade --contract
  8. Re-start the neutron-server and apache processes
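
Steps 7 and 8 condensed (a sketch; it assumes neutron-server.service is the unit name, as referenced in step 5):

  # on every API node:
  systemctl stop neutron-server apache2
  # on one node, once everything is stopped:
  neutron-db-manage upgrade --contract
  # then on every API node:
  systemctl start neutron-server apache2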

BGP-agents

Either simply reinstall the node with zed modules/tags, or follow this list:

  1. Run puppet with the zed modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service
    1. systemctl restart neutron-bgp-dragent.service
    2. or simply reboot
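
To confirm the agent came back after the restart (a sketch; <bgp-node> is a placeholder, and admin credentials are assumed):

  # the BGP dynamic routing agent should report Alive
  openstack network agent list --host <bgp-node>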

Network-nodes

Either simply reinstall the node with zed modules/tags, or follow this list:

  1. Run puppet with the zed modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service (or simply reboot the host).
    1. systemctl restart ovsdb-server
    2. systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
  4. Verify that routers on the node actually work.
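
For step 4, something along these lines works as a spot-check (a sketch; <network-node>, the router ID and the target IP are placeholders):

  # all agents on the node should report alive
  openstack network agent list --host <network-node>
  # the qrouter-/qdhcp- namespaces should still exist
  ip netns
  # ping something through one of the router namespaces
  ip netns exec qrouter-<router-id> ping -c 3 <external-ip>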

Placement

  1. Install one node at a time, either by reinstalling it using the zed modules/tags or by following this list:
    1. Run puppet with zed modules/tags
    2. Run systemctl stop puppet apache2
    3. Run apt-get purge placement-api placement-common python3-placement && apt-get autoremove && apt-get dist-upgrade
    4. Run puppet again
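
After the upgrade, placement ships a self-check that is worth running (placement-status has been part of placement since the Stein release):

  # all checks should report Success
  placement-status upgrade check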

Heat

The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost. I don't want that. These are therefore the cold upgrade steps.

  1. Set apache::service_ensure: 'stopped', heat::api::enabled: false, heat::engine::enabled: false and heat::api_cfn::enabled: false in hiera to stop all services (see the sketch after this list)
  2. Add the following to the node-specific hiera file for heat-api nodes:
    1. apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
    2. apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  3. Do one of:
    1. Run puppet with zed modules/tags, then run apt-get update && apt-get dist-upgrade && apt-get autoremove
    2. Reinstall the nodes with zed modules/tags
  4. Run heat-manage db_sync on one of the api-nodes.
  5. Remove the hiera keys that disabled the services and re-run puppet
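
Collected in one place, the node-specific hiera from steps 1 and 2 would look something like this (a sketch using the values listed above):

  apache::service_ensure: 'stopped'
  heat::api::enabled: false
  heat::engine::enabled: false
  heat::api_cfn::enabled: false
  apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
  apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'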

Barbican

Barbican must be stopped while upgrading, so the upgrade can be performed on all barbican hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all barbican-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the barbican hosts:
    1. barbican::worker::enabled: false
    2. apache::service_ensure: 'stopped'
  2. Run puppet with the zed modules/tags
  3. Run apt dist-upgrade && apt-get autoremove
  4. Run barbican-db-manage upgrade
  5. Re-start barbican services by removing the keys added in step 1 and re-running puppet.
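
When the services are back, a minimal functional test (a sketch, assuming admin credentials are sourced and the barbican CLI plugin is installed):

  # should return (an empty list is fine) without errors
  openstack secret list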

Magnum

Magnum must be stopped while upgrading, so the upgrade can be performed on all magnum-hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all magnum-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the magnum hosts:
    1. magnum::conductor::enabled: false
    2. apache::service_ensure: 'stopped'
  2. Run puppet with the zed modules/tags
  3. Run apt dist-upgrade && apt autoremove
  4. Run su -s /bin/sh -c "magnum-db-manage upgrade" magnum
  5. Re-start magnum services by removing the keys added in step 1 and re-running puppet.
  6. Check if a new Fedora CoreOS image is required, and if new public cluster templates should be deployed, e.g. to support a newer k8s version
    1. The official documentation provides a nice bit of help with this.
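
As a starting point for step 6 (a sketch; the grep pattern assumes the images are named after Fedora CoreOS):

  # which Fedora CoreOS images are already registered
  openstack image list | grep -i fedora-coreos
  # which cluster templates currently exist
  openstack coe cluster template list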

Horizon

Reinstall the horizon servers on Ubuntu 22.04 if not already done.

  1. Run puppet with the zed modules/tags
  2. Run apt dist-upgrade && apt autoremove
  3. Run puppet again
  4. Restart apache2
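
A minimal smoke test afterwards (the URL is a hypothetical example for your dashboard endpoint):

  # expect a 200 or a redirect to the login page
  curl -kIs https://horizon.example.com/ | head -n 1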