This article summarizes the steps required to upgrade an OpenStack cloud from the Queens release to the Rocky release.
Prerequisites:
- This document expects that your cloud is deployed with the latest Queens tag (vQ.n.n) of the ntnuopenstack repository.
- Your cloud is designed with one of the two architectures:
  - Each OpenStack project has its own VM(s) for its services
- You have a recent MySQL backup in case things go south.
- If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it: `nova::upgrade_level_compute: 'queens'`
  - ^ WiP: Lars Erik is not yet sure whether this is correct
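The compute RPC pin mentioned above can be expressed as a one-line hiera fragment. This is a sketch; which hiera file it belongs in depends on your hierarchy:

```yaml
# Pin nova compute RPC messaging to the queens level during the rolling upgrade.
# Remove the key again once every compute node runs rocky.
nova::upgrade_level_compute: 'queens'
```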
VM-based architecture
If you use the VM-based infrastructure you have the luxury of upgrading one service at a time, testing that each upgrade works before moving on to the next service. This allows for ~zero downtime. If the services are redundantly deployed it is also very easy to do a rollback.
The recommended order in which to upgrade the services is listed below:
Keystone
This is the zero-downtime approach.
Before you begin:
- Set `keystone::sync_db: false` and `keystone::manage_service: false` globally in hiera
- Set `keystone::enabled: false` in hiera for the node that you plan to run the rolling upgrade from
- Log in to a mysql node, start the mysql CLI, and run `set global log_bin_trust_function_creators=1;`
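Collected in one place, the preparatory hiera keys look like this. This is a sketch; which files the keys go in depends on your hiera hierarchy:

```yaml
# Global hiera (all keystone nodes):
keystone::sync_db: false          # puppet must not run db_sync during the upgrade
keystone::manage_service: false   # puppet must not (re)start the service

# Node-specific hiera (only the node driving the rolling upgrade):
keystone::enabled: false          # keep apache2/keystone stopped on this node
```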
On the node you plan to run the rolling upgrade from:
1. Run puppet with the rocky modules/tags
2. Stop apache2 and puppet
3. Purge the keystone package
4. Run `apt dist-upgrade`
5. Run puppet again
   - This will re-install keystone (ensure that apache2 does not start - puppet should ensure this because of the `keystone::enabled: false` flag in hiera)
6. Run `keystone-manage doctor` and ensure nothing is wrong
7. Run `keystone-manage db_sync --expand`
   - Returns nothing
8. Run `keystone-manage db_sync --migrate`
   - Returns nothing
9. At this point, you may restart apache2 on this node

Then:
- Upgrade keystone on the other nodes, one at a time
  - Basically, run steps 1-5 on each of the other nodes
- When all nodes are upgraded, perform the final DB sync: `keystone-manage db_sync --contract`
- Remove the `keystone::enabled: false` and `keystone::manage_service: false` hiera keys from the first node, and re-run puppet
- Remove the `keystone::sync_db: false` key from hiera
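The per-node steps above can be sketched as a shell sequence. This is a hypothetical dry-run wrapper that only echoes each command so the order can be reviewed; drop the `run` indirection to execute for real, and note that the puppet environment name `rocky` is an assumption:

```shell
# Dry-run wrapper: print each step instead of executing it.
run() { echo "would run: $*"; }

run puppet agent --test --environment rocky  # step 1: first run with the rocky modules
run systemctl stop apache2 puppet            # step 2: stop apache2 and puppet
run apt purge keystone                       # step 3: purge the keystone package
run apt dist-upgrade                         # step 4
run puppet agent --test --environment rocky  # step 5: re-installs keystone; apache2 stays down
run keystone-manage doctor                   # should report nothing wrong
run keystone-manage db_sync --expand         # returns nothing
run keystone-manage db_sync --migrate        # returns nothing
run systemctl start apache2                  # only after migrate succeeds
```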
Glance
To upgrade glance without any downtime, follow this procedure:
- Set `glance::sync_db: false` in a global hiera-file
- Select which glance-server to upgrade first.
  - In the node-specific hiera for this host, set `glance::api::enable: false`, followed by a puppet-run. This stops the glance-api service on the host.
- Run puppet on the first host with the rocky modules/tags
- Run `apt dist-upgrade`
- Run `glance-manage db_sync expand`
- Run `glance-manage db_sync migrate`
- Remove the `glance::api::enable: false` key from the node-specific hiera, and run puppet again. This re-starts the glance api-server on this host.
  - Test that this api-server works.
- Upgrade the rest of the glance hosts (i.e. steps 3 + 4 for each of the remaining glance hosts)
- Run `glance-manage db_sync contract`
- Remove `glance::sync_db: false` from the global hiera-file
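As with keystone, the first-host glance steps can be sketched as a dry-run shell sequence (echo-only, so nothing is executed; the `rocky` puppet environment name is an assumption):

```shell
# Dry-run wrapper: print each step instead of executing it.
run() { echo "would run: $*"; }

run puppet agent --test --environment rocky  # rocky modules/tags; glance-api still disabled in hiera
run apt dist-upgrade
run glance-manage db_sync expand             # expand phase of the rolling db migration
run glance-manage db_sync migrate            # migrate phase
# ...after re-enabling glance-api in hiera, testing, and upgrading the remaining hosts:
run glance-manage db_sync contract           # final contract phase
```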