This article summarizes the steps required to upgrade from the rocky release to the stein release of openstack.

Prerequisites:

  • This document expects that your cloud is deployed with the latest rocky tag (vR.n.n) of the ntnuopenstack repository.
  • Your cloud is designed with one of the following architectures:
    • Each openstack project have their own VM(s) for their services
  • You have a recent mysql backup in case things go south.
  • If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it:
    • nova::upgrade_level_compute: 'auto'
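
For example, in a hiera layer that applies to all nova hosts (the exact file placement is deployment-specific), the key looks like this:

  # lets nova pick compute RPC versions that both rocky and stein understand
  nova::upgrade_level_compute: 'auto'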

The recommended order to upgrade the services is listed below:

Keystone

This is the zero-downtime approach.

Before you begin

  • Set apache::service_ensure: 'stopped' in hiera for the node that you plan to run the rolling upgrade from
  • Log in to a mysql node, start the mysql CLI, and run set global log_bin_trust_function_creators=1;
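
A minimal sketch of that mysql session (assuming you have CLI access as a privileged user):

  $ mysql
  mysql> SET GLOBAL log_bin_trust_function_creators = 1;

This is typically needed because the keystone db_sync phases create triggers, which MySQL refuses from non-SUPER users while binary logging is enabled.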

Upgrade-steps (start with a single node):

  1. Add the following lines to the node-specific hiera:
    • apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
    • apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  2. Run puppet with the stein modules/tags
  3. Purge the keystone and apache2 packages
  4. Run apt dist-upgrade && apt-get autoremove
  5. Run puppet again
    1. This will re-install keystone (ensure that apache2 does not start - this should be handled by puppet as long as the apache::service_ensure: 'stopped' key is still set in hiera)
  6. Run keystone-manage doctor and ensure nothing is wrong
  7. Run keystone-manage db_sync --expand
    1. Returns nothing
  8. Run keystone-manage db_sync --migrate
    1. Returns nothing
  9. At this point, you may restart apache2 on this node
    1. Remove the apache::service_ensure: 'stopped' key previously set in hiera.
  10. Upgrade keystone on the other nodes, one at a time
    1. Basically, run steps 1-5 on the other nodes
  11. When all nodes are upgraded, perform the final DB sync
    1. keystone-manage db_sync --contract
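
Condensed, the whole keystone sequence looks roughly like this (the puppet invocation is an assumption; run puppet however you normally do):

  # On the first node, after setting the hiera keys from step 1:
  apt-get purge keystone apache2            # step 3
  apt dist-upgrade && apt-get autoremove    # step 4
  puppet agent -t                           # step 5: re-installs keystone; apache2 stays stopped
  keystone-manage doctor                    # step 6: should report no problems
  keystone-manage db_sync --expand          # step 7
  keystone-manage db_sync --migrate         # step 8
  systemctl start apache2                   # step 9

  # When every node runs stein (step 11):
  keystone-manage db_sync --contract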

Glance

To upgrade glance without any downtime, follow this procedure:

  1. Select which glance-server to upgrade first.
    1. In the node-specific hiera for this host, set glance::api::enabled: false, followed by a puppet-run. This stops the glance-api service on the host.
  2. Run puppet on the first host with the stein modules/tags
  3. Run apt dist-upgrade && apt-get autoremove
  4. Run glance-manage db expand
  5. Run glance-manage db migrate
  6. Remove the glance::api::enabled: false key from the node-specific hiera, and run puppet again. This re-starts the glance api-server on this host.
    1. Test that this api-server works.
  7. Upgrade the rest of the glance hosts (i.e. steps 2 and 3 for each of the remaining glance hosts)
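
Condensed, the commands on the first glance host look roughly like this (the puppet invocation is again an assumption):

  apt dist-upgrade && apt-get autoremove    # step 3
  glance-manage db expand                   # step 4
  glance-manage db migrate                  # step 5
  puppet agent -t                           # step 6, after removing the hiera key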

Cinder

To upgrade cinder without any downtime, follow this procedure:

  1. Add the following three lines to the node-specific hiera-file of the first node you would like to upgrade:
    1. apache::service_ensure: 'stopped'
    2. cinder::scheduler::enabled: false
    3. cinder::volume::enabled: false
  2. Add the following two lines to the node-specific hiera-file for the node you are upgrading
    1. apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
    2. apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  3. Run puppet on the first host with stein modules/tags
  4. Run apt dist-upgrade && apt-get autoremove
  5. Run cinder-manage db sync
  6. Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
  7. Perform steps 2-4 for the rest of the cinder nodes
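
For reference, the node-specific hiera for the node being upgraded contains all five keys from steps 1 and 2 at once:

  apache::service_ensure: 'stopped'
  cinder::scheduler::enabled: false
  cinder::volume::enabled: false
  apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
  apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'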

Neutron

To upgrade neutron with minimal downtime, follow this procedure:

API-nodes

  1. Pick the first node, and run puppet with the stein modules/tags
  2. Run apt dist-upgrade && apt-get autoremove
  3. Run neutron-db-manage upgrade --expand
  4. Run neutron-db-manage --subproject neutron-fwaas upgrade head
  5. Restart neutron-server.service and rerun puppet
  6. Upgrade the rest of the API-nodes (repeating steps 1, 2 and 5)
  7. Stop all neutron-server processes for a moment, and run:
    1. neutron-db-manage upgrade --contract
  8. Re-start the neutron-server processes
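
Condensed, the API-node sequence looks roughly like this (service name as packaged on Ubuntu):

  apt dist-upgrade && apt-get autoremove                       # step 2
  neutron-db-manage upgrade --expand                           # step 3
  neutron-db-manage --subproject neutron-fwaas upgrade head    # step 4
  systemctl restart neutron-server                             # step 5

  # When all API-nodes run stein, stop every neutron-server briefly, then:
  neutron-db-manage upgrade --contract                         # step 7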

Network-nodes

WARNING: Upgrading directly from queens→stein does not work automatically. If this is your upgrade-path, expect to need something like 'apt-get purge neutron-* && apt-get autoremove' followed by a re-run of puppet. Alternatively, simply reinstall the network-nodes.

  1. Run puppet with the stein modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service
    1. systemctl restart ovsdb-server
    2. systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-lbaasv2-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service

Nova

To upgrade nova without any downtime, follow this procedure:

Preparations

Before the upgrade can be started, it is important that all data from previous nova releases is migrated to the rocky format. This is done like so:

  • Run nova-manage db online_data_migrations on an API node. Ensure that it reports that nothing more needs to be done.
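
This command is safe to run multiple times; a minimal sketch:

  # on an API node; re-run until the output shows nothing left to migrate
  nova-manage db online_data_migrations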

Nova API

  1. In the node-specific hiera, disable the services on the first node you would like to upgrade with the following key:
    1. apache::service_ensure: 'stopped'
  2. Add the following two lines to the node-specific hiera-file for the node you are upgrading
    1. apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
    2. apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  3. Run puppet with the stein modules/tags
  4. Run apt dist-upgrade && apt-get autoremove
  5. Run nova-manage api_db sync
  6. Run nova-manage db sync
  7. Re-enable the placement API on the upgraded node and disable it on the other nodes. This is because the other services need the placement API to be upgraded first
    1. Remove apache::service_ensure: 'stopped' from the upgraded node's hiera file
    2. Set it on all the other nodes and run puppet
  8. Upgrade the rest of the nodes (basically run steps 2-4, re-run puppet, and restart nova-api and apache2)
  9. Remove the hiera keys that disabled the services, and re-run puppet
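
Condensed, the database steps on the first API node look roughly like this:

  apt dist-upgrade && apt-get autoremove    # step 4
  nova-manage api_db sync                   # step 5: upgrades the API database schema
  nova-manage db sync                       # step 6: upgrades the main database schema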

Nova-services

  1. Run puppet with the stein modules/tags
  2. Run apt dist-upgrade && apt-get autoremove
  3. Run puppet and restart services

Heat

The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost, which we want to avoid here. These are therefore the cold upgrade steps.

Step 4 is only for the API-nodes, so the routine should be run on the API-nodes first.

  1. Set heat::api::enabled: false, heat::engine::enabled: false and heat::api_cfn::enabled: false in hiera to stop all services
  2. Run puppet with stein modules/tags
  3. Run apt dist-upgrade
  4. Run heat-manage db_sync on one of the api-nodes.
  5. Remove the hiera keys that disabled the services and re-run puppet
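
For reference, the hiera keys from step 1:

  heat::api::enabled: false
  heat::api_cfn::enabled: false
  heat::engine::enabled: false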

Horizon

  1. Add the following lines to the node-specific hiera:
    • apache::mod::wsgi::package_name: 'libapache2-mod-wsgi-py3'
    • apache::mod::wsgi::mod_path: '/usr/lib/apache2/modules/mod_wsgi.so'
  2. Run puppet with the stein modules/tags
  3. Run apt dist-upgrade
  4. Run puppet again
  5. Restart apache2

Compute nodes

When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:

  1. Run puppet with the stein modules/tags
  2. Perform a dist-upgrade
  3. Run puppet again
  4. Restart openstack services and ovsdb-server
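
A sketch of the per-node sequence; exactly which openstack services run on a compute node is deployment-specific, nova-compute and the openvswitch agent being the typical ones:

  puppet agent -t    # with the stein modules/tags
  apt dist-upgrade
  puppet agent -t
  systemctl restart nova-compute neutron-openvswitch-agent ovsdb-server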

Finalizing:

After all nodes are upgraded (including nova-compute):

  1. Run nova-manage db online_data_migrations on a nova API node. Ensure that it reports that nothing more needs to be done.