...
- This document expects that your cloud is deployed with a recent ussuri tag of the ntnuopenstack repository.
- You have a recent mysql backup in case things go south.
- If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it:
nova::upgrade_level_compute: 'ussuri'
- When the upgrade is finished, set this key to 'victoria'
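Before starting it might also be worth running nova's built-in readiness check on an API node; it flags missing prerequisites before any service is upgraded (the exact checks and output vary by release):

```shell
# Summarizes upgrade readiness as a table of Success/Warning/Failure rows:
nova-status upgrade check
```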
The recommended order to upgrade the services is listed below:
...
- Add the following three lines to the node-file of the first node you would like to upgrade:
apache::service_ensure: 'stopped'
cinder::scheduler::enabled: false
cinder::volume::enabled: false
- Do one of these two alternatives:
- Run puppet with the victoria modules/tags, run apt-get dist-upgrade, and run puppet again
- Reinstall the node with the victoria modules/tags
- Run
cinder-manage db sync && cinder-manage db online_data_migrations
- Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
- Perform steps 2-5 for the rest of the cinder nodes
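The per-node cinder sequence can be sketched as a dry-run plan. Everything here is illustrative (the host names are placeholders, and the script only prints the steps above rather than running them):

```shell
#!/bin/sh
# Print the rolling-upgrade plan for each cinder node; this mirrors the
# manual steps above without executing anything on the hosts.
plan() {
  node="$1"
  echo "[$node] hiera: stop apache, disable cinder scheduler/volume; run puppet"
  echo "[$node] run puppet with victoria modules/tags; apt-get dist-upgrade; run puppet"
  echo "[$node] cinder-manage db sync && cinder-manage db online_data_migrations"
  echo "[$node] remove hiera overrides, re-run puppet, verify cinder works"
}

for node in cinder-01 cinder-02; do
  plan "$node"
done
```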
...
- Pick the first node, and do one of the following:
- Run puppet with the victoria modules/tags, then run
apt-get autoremove && apt-get dist-upgrade
- Reinstall the node with victoria modules/tags.
- Run
neutron-db-manage upgrade --expand
- Run
neutron-db-manage --subproject neutron-fwaas upgrade head
- Restart neutron-server.service and rerun puppet
- Upgrade the rest of the API-nodes (repeating steps 1 and 4)
- Stop all neutron-server processes for a moment, and run:
neutron-db-manage upgrade --contract
- Re-start the neutron-server processes
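The split above follows neutron's expand/contract pattern: the --expand phase is additive and safe while neutron-server is running, while the --contract phase needs every server stopped. A compact sketch of the ordering (the systemd unit name neutron-server is an assumption about your packaging):

```shell
# On the first upgraded API node, while neutron-server still runs:
neutron-db-manage upgrade --expand
neutron-db-manage --subproject neutron-fwaas upgrade head

# After all API nodes run victoria code, stop every neutron-server:
systemctl stop neutron-server         # on each API node
neutron-db-manage upgrade --contract  # schema changes that need downtime
systemctl start neutron-server        # on each API node
```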
...
- Run puppet with the victoria modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Run puppet and restart services
Heat
The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost. I don't want that, so these are the cold upgrade steps.
- Set
heat::api::enabled: false
heat::engine::enabled: false
heat::api_cfn::enabled: false
in hiera to stop all services
- Do one of:
- Run puppet with victoria modules/tags, then run
apt-get update && apt-get dist-upgrade && apt-get autoremove
- Reinstall the nodes with victoria modules/tags
- Run
heat-manage db_sync
on one of the api-nodes.
- Remove the hiera keys that disabled the services and re-run puppet
Barbican
Barbican must be stopped during the upgrade; the upgrade can thus be performed on all barbican hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.
- Stop all barbican-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the barbican hosts:
barbican::worker::enabled: false
apache::service_ensure: 'stopped'
- Run puppet with the victoria modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Run
barbican-db-manage upgrade
- Re-start barbican services by removing the keys added in step 1 and re-run puppet.
Magnum
Magnum must be stopped during the upgrade; the upgrade can thus be performed on all magnum-hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.
- Reinstall the server to CentOS 8
- Stop all magnum-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the magnum hosts:
magnum::conductor::enabled: false
apache::service_ensure: 'stopped'
- Run puppet with the victoria modules/tags
- Run
yum upgrade
- Run
su -s /bin/sh -c "magnum-db-manage upgrade" magnum
- Re-start magnum services by removing the keys added in step 1 and re-run puppet.
Octavia
Octavia must be stopped during the upgrade; the upgrade can thus be performed on all octavia-hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.
- Stop all octavia-services by adding the following keys to hiera, and then make sure to run puppet on the octavia hosts:
octavia::housekeeping::enabled: false
octavia::health_manager::enabled: false
octavia::api::enabled: false
octavia::worker::enabled: false
- Do one of:
- Reinstall the node with victoria modules/tags
- Run puppet with the victoria modules/tags, run
apt-get dist-upgrade && apt-get autoremove
and run puppet again
- Run
octavia-db-manage upgrade head
- Re-start octavia services by removing the keys added in step 1 and re-run puppet.
- Build a victoria-based octavia-image and upload to glance. Tag it and make octavia start to replace the amphora.
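A hedged sketch of the image rotation: diskimage-create.sh ships with the octavia source tree, octavia selects the amphora image by glance tag, and the tag, image name and project below are deployment-specific assumptions that must match your amp_image_tag setting:

```shell
# Build a victoria-based amphora image (run from octavia's
# diskimage-create directory; flags depend on your base OS choice):
./diskimage-create.sh

# Upload to glance and tag it so octavia starts using it for new amphorae:
openstack image create --disk-format qcow2 --container-format bare \
  --private --project service --tag amphora \
  --file amphora-x64-haproxy.qcow2 amphora-haproxy-victoria

# Optionally force replacement by failing over each load balancer:
openstack loadbalancer failover <lb-id>
```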
Horizon
- Run puppet with the victoria modules/tags
- Run
yum upgrade
- Run puppet again
- Restart httpd
Compute-nodes
When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:
- Do one of:
- Reinstall the node with victoria modules/tags
- Run puppet with the victoria modules/tags, then run
apt dist-upgrade && apt-get autoremove
- Reboot the compute-node
GPU-nodes
- Run puppet with the victoria modules/tags
- Run
yum upgrade && yum autoremove
- Run puppet again
- Restart openstack services and openvswitch-services
Finalizing
- Run
nova-manage db online_data_migrations
on a nova API node. Ensure that it reports that nothing more needs to be done.
- Rotate octavia images.
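On large deployments the migrations can be run in batches. nova-manage documents exit code 0 as "nothing left to do", 1 as "a batch completed, run again" (when --max-count is given), and higher codes as errors; a sketch of a batch loop under that assumption:

```shell
# Run migrations in batches until nova-manage reports nothing left (exit 0).
while true; do
  nova-manage db online_data_migrations --max-count 1000 && break
  rc=$?
  if [ "$rc" -ne 1 ]; then
    echo "online_data_migrations failed (exit $rc)" >&2
    exit "$rc"
  fi
done
```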