This article summarizes the steps required to upgrade from the train release to the ussuri release of OpenStack.
Prerequisites:
- This document expects that your cloud is deployed with a recent train tag of the ntnuopenstack repository.
- You have a recent mysql backup in case things go south.
- If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it:
nova::upgrade_level_compute: 'train'
- When the upgrade is finished, set this key to 'ussuri'
The recommended order to upgrade the services is listed below:
Keystone
This is the zero-downtime approach.
Before you begin
- Set
apache::service_ensure: 'stopped'
in hiera for the node that you plan to run the rolling upgrade from.
- Log in to a mysql node, start the mysql CLI, and run
set global log_bin_trust_function_creators=1;
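- Note: once the keystone upgrade is finished you can revert this setting again (assuming nothing else in the deployment depends on it) by running
set global log_bin_trust_function_creators=0;
in the same mysql CLI.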
Upgrade-steps (start with a single node):
- Run puppet with the ussuri modules/tags
- Purge the keystone and apache2 packages
- Run
apt dist-upgrade && apt clean && apt autoclean && apt-get autoremove
- Run puppet again
- This will re-install keystone (ensure that apache2 does not start; this should be ensured by puppet because of the apache::service_ensure: 'stopped' key in hiera)
- Run
keystone-manage doctor
and ensure nothing is wrong
- Run
keystone-manage db_sync --expand
- Returns nothing
- Run
keystone-manage db_sync --migrate
- Returns nothing
- At this point, you may restart apache2 on this node
- Remove the
apache::service_ensure: 'stopped'
previously set in hiera.
- Upgrade keystone on the other nodes, one at a time
- Basically run steps 1-5 on the other nodes
- When all nodes are upgraded, perform the final DB sync
keystone-manage db_sync --contract
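- To verify that all migration phases are complete you can run
keystone-manage db_sync --check
(assuming the --check flag is available in your keystone packages); it should report that no further sync steps are needed.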
Glance
To upgrade glance without any downtime, follow this procedure:
- Select which glance-server to upgrade first.
- In the node-specific hiera for this host you should set:
glance::api::enabled: false
followed by a puppet-run. This would stop the glance-api service on the host.
- Run puppet on the first host with the ussuri modules/tags
- Run
apt-get autoremove && apt-get dist-upgrade
- Run puppet again.
- Run
glance-manage db expand
- Run
glance-manage db migrate
- Remove the
glance::api::enabled: false
from the node-specific hiera, and run puppet again. This will re-start the glance api-server on this host.
- Test that this api-server works.
- Upgrade the rest of the glance hosts (i.e. steps 2-4 for each of the remaining glance hosts)
- Run
glance-manage db contract
on one of the glance-nodes.
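- Optionally, check the database state with
glance-manage db check
(assuming this sub-command is present in your glance release); it should report that no further expand/migrate/contract steps are required.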
Cinder
To upgrade cinder without any downtime, follow this procedure
- Purge old DB-records
cinder-manage db purge 0
- Add the following three lines to the node-file of the first node you would like to upgrade:
apache::service_ensure: 'stopped'
cinder::scheduler::enabled: false
cinder::volume::enabled: false
- Run puppet on the first host with ussuri modules/tags
- Run
apt-get autoremove && apt-get dist-upgrade
- Run puppet again
- Run
cinder-manage db sync && cinder-manage db online_data_migrations
- Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
- Perform steps 2-5 for the rest of the cinder nodes
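- When all nodes are done, it might be wise to verify that every cinder service reports as up, for example with
openstack volume service list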
Neutron
API-nodes
- Pick the first node, and run puppet with the ussuri modules/tags
- Run
apt-get autoremove && apt-get dist-upgrade
- Run
neutron-db-manage upgrade --expand
- Run
neutron-db-manage --subproject neutron-fwaas upgrade head
- Restart neutron-server.service and rerun puppet
- Upgrade the rest of the API-nodes (repeating steps 1, 2 and 5)
- Stop all neutron-server processes for a moment, and run:
neutron-db-manage upgrade --contract
- Re-start the neutron-server processes
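- To confirm the database is at the expected revisions you can run
neutron-db-manage current
on one of the API-nodes and check that it completes without errors and shows the current revisions.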
BGP-agents
- Run puppet with the ussuri modules/tags
- Run
apt dist-upgrade
- Rerun puppet and restart the service
systemctl restart neutron-bgp-dragent.service
Network-nodes
- Run puppet with the ussuri modules/tags
- Run
apt dist-upgrade
- Rerun puppet and restart the service
systemctl restart ovsdb-server
systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
- Verify that routers on the node actually work. We had to reinstall a node in skylow to make it work.
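- The agents can also be checked from an API-node, for example with
openstack network agent list
where all agents on the upgraded node should report as alive.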
Placement
- Run puppet with ussuri modules/tags
- Run
apt-get purge placement-api placement-common python3-placement && apt-get autoremove && apt-get dist-upgrade
- Run puppet again
- Run
placement-manage db sync; placement-manage db online_data_migrations
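- Optionally, verify the result with
placement-status upgrade check
which should report success for all checks.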
Nova
To upgrade nova without any downtime, follow this procedure
Preparations
Before the upgrade can be started it is important that all data from previous nova-releases is migrated to the train release. This is done like so:
- Run
nova-manage db online_data_migrations
on an API node. Ensure that it reports that nothing more needs to be done.
- Make sure there are no errors, particularly anything related to the "virtual interface table". See https://bugs.launchpad.net/nova/+bug/182443
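- It might also be a good idea to run
nova-status upgrade check
on an API node before starting, and make sure none of the checks fail.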
Nova API
- In the node-specific hiera, disable the services at the first node you would like to upgrade with the key
apache::service_ensure: 'stopped'
- Run puppet with the ussuri modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Run
nova-manage api_db sync
- Run
nova-manage db sync
- Re-enable the nova API (apache2) on the upgraded node:
- Remove
apache::service_ensure: 'stopped'
from the upgraded node's hiera file
- Upgrade the rest of the nodes (basically run steps 1-3, re-run puppet and restart nova-api and apache2)
Nova-services
- Run puppet with the ussuri modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Run puppet and restart services
Heat
The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost, which we do not want. Therefore, these are the cold upgrade steps.
Step 4 is only for the API-nodes, so the routine should be run on the API-nodes first
- Set
heat::api::enabled: false
and heat::engine::enabled: false
and heat::api_cfn::enabled: false
in hiera to stop all services
- Run puppet with ussuri modules/tags
- Run
apt-get update && apt-get dist-upgrade && apt-get autoremove
- Run
heat-manage db_sync
on one of the api-nodes.
- Remove the hiera keys that disabled the services and re-run puppet
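- To verify that heat is back up you can for example run
heat-manage service list
on one of the api-nodes and check that the engines report as up.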
Barbican
Barbican must be stopped during the upgrade, which can thus be performed on all barbican hosts at the same time. It might be an idea to keep one set of hosts stopped at the old code in case a sudden roll-back is needed.
- Stop all barbican-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the barbican hosts:
barbican::worker::enabled: false
apache::service_ensure: 'stopped'
- Run puppet with the ussuri modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Run
barbican-db-manage upgrade
- Re-start barbican services by removing the keys added in step 1 and re-run puppet.
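- A simple smoke-test of the upgraded barbican API is to run
openstack secret list
(assuming the barbican client plugin is installed) and check that it returns without errors.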
Magnum
Magnum must be stopped during the upgrade, which can thus be performed on all magnum-hosts at the same time. It might be an idea to keep one set of hosts stopped at the old code in case a sudden roll-back is needed.
- Reinstall the server to CentOS 8
- Stop all magnum-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the magnum hosts:
magnum::conductor::enabled: false
apache::service_ensure: 'stopped'
- Add
magnum::keystone::keystone_auth::auth_url: "%{alias('magnum::keystone::authtoken::auth_url')}"
and magnum::keystone::keystone_auth::password: "%{alias('magnum::keystone::authtoken::password')}"
to ntnuopenstack.yaml
- Run puppet with the ussuri modules/tags
- Run
yum upgrade
- Run
su -s /bin/sh -c "magnum-db-manage upgrade" magnum
- Re-start magnum services by removing the keys added in step 2 and re-run puppet.
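- To verify that magnum is back up you can for example run
openstack coe service list
(assuming the magnum client plugin is installed) and check that magnum-conductor reports as up.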
Octavia
Octavia must be stopped during the upgrade, which can thus be performed on all octavia-hosts at the same time. It might be an idea to keep one set of hosts stopped at the old code in case a sudden roll-back is needed.
- Stop all octavia-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the octavia hosts:
octavia::housekeeping::enabled: false
octavia::health_manager::enabled: false
octavia::api::enabled: false
octavia::worker::enabled: false
- Run puppet with the ussuri modules/tags
- Run
apt-get dist-upgrade && apt-get autoremove
- Run puppet
- Run
octavia-db-manage upgrade head
- Re-start octavia services by removing the keys added in step 1 and re-run puppet.
- Build a ussuri-based octavia-image and upload it to glance. Tag it and make octavia start replacing the amphorae.
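- As an example (assuming your deployment uses the image tag 'amphora' in octavia's amp_image_tag setting), the new image can be tagged with
openstack image set --tag amphora <image-id>
so that octavia picks it up for new and replaced amphorae.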
Horizon
- Reinstall the server to CentOS 8
- Run puppet with the ussuri modules/tags
- Run
yum upgrade
- Run puppet again
- Restart httpd
Compute-nodes
When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:
- Run puppet with the ussuri modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Reboot the compute-node
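- After the reboot, check that the compute-node reports back in, for example with
openstack compute service list --service nova-compute
where the node should show state up.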
GPU-nodes
- Reinstall the server to CentOS 8
- Run puppet with the ussuri modules/tags
- Run
yum upgrade && yum autoremove
- Run puppet again
- Restart openstack services and openvswitch-services
Finalizing
- Run
nova-manage db online_data_migrations
on a nova API node. Ensure that it reports that nothing more needs to be done.
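- Remember to set
nova::upgrade_level_compute: 'ussuri'
in hiera, as mentioned in the prerequisites, and let puppet apply it to all hosts.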