...
- This document expects that your cloud is deployed with the latest queens tag (vQ.n.n) of the ntnuopenstack repository.
- Your cloud is designed with one of the two architectures:
- Each OpenStack project has its own VM(s) for its services
- You have a recent MySQL backup in case things go south.
- If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it:
nova::upgrade_level_compute: 'auto'
WiP: Lars Erik is not yet sure whether this is correct.
VM-based architecture
If you use the VM-based infrastructure you have the luxury of upgrading one service at a time and testing that the upgrade works before moving on to the next service. This allows for ~zero downtime. If the services are redundantly deployed it is also very easy to roll back.
...
- Set
glance::sync_db: false
in a global hiera-file
- Select which glance-server to upgrade first.
- In the node-specific hiera for this host you should set:
glance::api::enabled: false
followed by a puppet-run. This would stop the glance-api service on the host.
- Run puppet on the first host with the rocky modules/tags
- Run
apt dist-upgrade
- Run
glance-manage db expand
- Run
glance-manage db migrate
- Remove the
glance::api::enabled: false
from the node-specific hiera, and run puppet again. This would re-start the glance api-server on this host.
- Test that this api-server works.
- Upgrade the rest of the glance hosts (i.e. steps 3 and 4 for each of the remaining glance hosts)
- Run
glance-manage db contract
- Remove
glance::sync_db: false
from the global hiera-file
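The expand/migrate phase on the first glance host can be sketched as a small script. The function name is hypothetical, and it assumes a root shell on the glance VM with puppet run as `puppet agent --test`:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper for the expand/migrate phase on the first
# glance host. Commands are the ones from the steps above.
upgrade_first_glance_host() {
  puppet agent --test        # apply the rocky modules/tags
  apt dist-upgrade -y        # pull in the rocky packages
  glance-manage db expand    # additive schema changes only
  glance-manage db migrate   # online data migrations
}
```

The contract phase is deliberately left out: it must only run once, after every glance host is upgraded.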
...
- Pick the first node, and run puppet with the rocky modules/tags
- Run
apt dist-upgrade
- Run
neutron-db-manage upgrade --expand
- Rocky will upgrade to FWaaS V2. To prepare the database, run
neutron-db-manage --subproject neutron-fwaas upgrade head
- Restart neutron-server.service and rerun puppet
- Upgrade the rest of the API-nodes (repeating steps 1, 2 and 5)
- When all API-nodes are upgraded, run
neutron-db-manage has_offline_migrations
- When the above command reports "
No offline migrations pending
" it is safe to run
neutron-db-manage upgrade --contract
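The check-then-contract step can be wrapped in a small guard so the contract phase only runs once the migration status is clean. The function name is hypothetical; the expected message is the one quoted above:

```shell
#!/usr/bin/env bash
# Only run the contract phase when neutron-db-manage confirms that
# no offline migrations are pending.
contract_if_safe() {
  if neutron-db-manage has_offline_migrations 2>&1 \
       | grep -q 'No offline migrations pending'; then
    neutron-db-manage upgrade --contract
  else
    echo 'Offline migrations still pending; not contracting yet.' >&2
    return 1
  fi
}
```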
...
- Run puppet with the rocky modules/tags
- Run
apt dist-upgrade
- Rerun puppet and restart the services
systemctl restart ovsdb-server
systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-lbaasv2-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
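After the restarts it is worth verifying that every neutron agent reports alive again. A minimal check, assuming the openstack CLI with admin credentials sourced; the function name is hypothetical:

```shell
#!/usr/bin/env bash
# Returns 0 when every neutron agent reports Alive=True.
all_agents_alive() {
  # -f value -c Alive prints one True/False line per agent;
  # succeed only if no line deviates from True.
  ! openstack network agent list -f value -c Alive | grep -qv '^True$'
}
```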
Nova
Note: In rocky, all nova APIs will run as WSGI applications under apache2.
To upgrade nova without any downtime, follow this procedure:
...
- Run puppet with the rocky modules/tags
- Run
apt dist-upgrade
- Run puppet and restart services
Once everything is upgraded, including the compute-nodes:
- Delete nova-consoleauth from the catalog
openstack compute service list
- Delete all rows with nova-consoleauth:
openstack compute service delete <id>
- Run
nova-manage db online_data_migrations
on an API node. Ensure that it reports that nothing more needs to be done.
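The consoleauth cleanup above can be looped instead of deleting row by row. A sketch, assuming the openstack CLI with admin credentials; the function name is hypothetical, while the `--service` filter and `-f value -c ID` output options are standard openstackclient:

```shell
#!/usr/bin/env bash
# Deletes every nova-consoleauth row from the compute service catalog.
cleanup_consoleauth() {
  for id in $(openstack compute service list \
                --service nova-consoleauth -f value -c ID); do
    openstack compute service delete "$id"
  done
}
```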
Heat
The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost. I don't want that, so these are the cold-upgrade steps.
...
- Set heat::api::enabled: false, heat::engine::enabled: false and heat::api_cfn::enabled: false in hiera to stop all services
- Run puppet with rocky modules/tags
- Run
apt dist-upgrade
- Run
heat-manage db_sync
- In hiera, add
heat::keystone::authtoken::www_authenticate_uri: "%{alias('ntnuopenstack::keystone::auth::uri')}"
to ntnuopenstack.yaml
- And remove
heat::keystone::authtoken::auth_uri: "%{alias('ntnuopenstack::keystone::auth::uri')}"
- Remove the hiera keys that disabled the services and re-run puppet
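With the services stopped via hiera, the cold upgrade itself boils down to three commands. A sketch with a hypothetical function name, assuming a root shell on the heat host:

```shell
#!/usr/bin/env bash
# Cold-upgrade core for heat: the services are already stopped via
# the hiera keys above, so db_sync runs without concurrent writers.
upgrade_heat_host() {
  puppet agent --test   # apply the rocky modules/tags
  apt dist-upgrade -y
  heat-manage db_sync   # offline schema + data migration
}
```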
Horizon
...
- Run puppet with the rocky modules/tags
- Run
apt dist-upgrade
- Run puppet again
- Restart apache2
Compute nodes
When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:
- Run puppet with the rocky modules/tags
- Perform a dist-upgrade
- Run puppet again
- Restart openstack services and ovsdb-server
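The compute-node steps can be sketched as one script per node. The function name and the exact systemd unit names are assumptions; adjust them to the units actually present on your nodes:

```shell
#!/usr/bin/env bash
# Per-compute-node upgrade: puppet, packages, puppet again, then
# restart the data-plane services. Unit names are assumptions.
upgrade_compute_node() {
  puppet agent --test
  apt dist-upgrade -y
  puppet agent --test
  systemctl restart ovsdb-server
  systemctl restart nova-compute.service neutron-openvswitch-agent.service
}
```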