This article summarizes the steps required to upgrade from the xena release to the yoga release of OpenStack.
Prerequisites:
- This document expects that your cloud is deployed with a recent xena tag of the ntnuopenstack repository.
- You have a recent mysql backup in case things go south.
- If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it:
nova::upgrade_level_compute: '6.0'
- When the upgrade is finished, the key should still be set to '6.0'
- (Yoga is 6.0; zed is 6.1, so the next release needs a change here...)
- These version-numbers can be correlated to release-name in the file /usr/lib/python3/dist-packages/nova/compute/rpcapi.py
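If you want to double-check which RPC version a release name maps to, a simple grep of that file is usually enough. A minimal sketch (the exact layout of the mapping may differ between releases):
grep -n -i 'yoga' /usr/lib/python3/dist-packages/nova/compute/rpcapi.py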
The recommended order in which to upgrade the services is listed below:
Keystone
This is the zero-downtime approach.
Before you begin
- Login to a mysql node, start the mysql CLI, and run
set global log_bin_trust_function_creators=1;
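To verify that the setting took effect, you can check the variable afterwards. A minimal sketch, run in the same mysql CLI:
show global variables like 'log_bin_trust_function_creators';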
Upgrade-steps (start with a single node):
- Set
apache::service_ensure: 'stopped'
in hiera for the node that you are upgrading
- Run puppet with the yoga modules/tags, run apt-get dist-upgrade, and run puppet again
- The first puppet-run complains a lot; since it changes its logic for openstack auth, all openstack-related changes fail. Run puppet once more if this bugs you
- Run
keystone-manage doctor
and ensure nothing is wrong
- Run
keystone-manage db_sync --expand
- Returns nothing
- Run
keystone-manage db_sync --migrate
- Returns nothing
- At this point, you may restart apache2 on this node
- Remove the
apache::service_ensure: 'stopped'
previously set in hiera.
- Upgrade keystone on the other nodes, one at a time
- Basically, run steps 1, 2 and 6 on the other nodes
- When all nodes are upgraded, perform the final DB sync
keystone-manage db_sync --contract
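After the contract step it is worth a quick smoke test to confirm that keystone still answers. A minimal sketch, assuming admin credentials are sourced in the shell:
openstack token issue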
Glance
To upgrade glance without any downtime, follow this procedure:
- Select which glance-server to upgrade first.
- In the node-specific hiera for this host you should set:
glance::api::enabled: false
- Run puppet with the yoga modules/tags, run apt-get dist-upgrade, and run puppet again
- Run
glance-manage db expand
- Run
glance-manage db migrate
- Remove the
glance::api::enabled: false
from the node-specific hiera, and run puppet again. This will restart the glance api-server on this host.
- Test that this api-server works (see the verification sketch after this list).
- Upgrade the rest of the glance hosts (i.e. step 2 for each of the remaining glance hosts)
- Run
glance-manage db contract
on one of the glance-nodes.
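To verify that the upgraded api-servers answer, a simple image listing is usually enough. A minimal sketch, assuming admin credentials are sourced in the shell:
openstack image list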
Enable glance quotas through keystone unified limits
If you want to add quotas that limit tenants' ability to use too much storage for their images, you need to register default quotas in keystone. Substitute "SkyLow" with the relevant region-name:
# Default-quota of 10 images and 50GB
openstack registered limit create --service glance --region SkyLow --default-limit 50000 image_size_total
openstack registered limit create --service glance --region SkyLow --default-limit 10 image_count_total
# Default-quota of 5 images and 50GB which is currently being uploaded.
openstack registered limit create --service glance --region SkyLow --default-limit 50000 image_stage_total
openstack registered limit create --service glance --region SkyLow --default-limit 5 image_count_uploading
Enable the unified limit integration for glance by adding the following lines in hiera:
ntnuopenstack::glance::endpoint::internal::id: '<GLANCE INTERNAL ENDPOINT ID>'
ntnuopenstack::glance::keystone::limits: true
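The internal endpoint ID can be looked up from keystone. A minimal sketch, assuming admin credentials are sourced; filter on region as well if you run multiple regions:
openstack endpoint list --service glance --interface internal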
Cinder
To upgrade cinder without any downtime, follow this procedure:
- Add the following three lines to the node-file of the first node you would like to upgrade:
apache::service_ensure: 'stopped'
cinder::scheduler::enabled: false
cinder::volume::enabled: false
- Run puppet with the yoga modules/tags, run apt-get dist-upgrade, and run puppet again
- Run
cinder-manage db sync && cinder-manage db online_data_migrations
- Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
- Perform step 2 for the rest of the cinder nodes
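To confirm that the scheduler and volume services come back up after each node is done, the service list can be checked from any node. A minimal sketch, assuming admin credentials are sourced:
openstack volume service list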
Neutron
API-nodes
- Pick the first node, run puppet with the yoga modules/tags, and run
apt-get autoremove && apt-get dist-upgrade
- Run
neutron-db-manage upgrade --expand
- Restart neutron-server.service and rerun puppet
- Upgrade the rest of the API-nodes (repeating steps 1 and 3)
- Stop all neutron-server processes for a moment, and run:
neutron-db-manage upgrade --contract
- Re-start the neutron-server processes
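Once the neutron-server processes are running again, it is worth confirming that the agents still report in as alive. A minimal sketch, assuming admin credentials are sourced:
openstack network agent list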
BGP-agents
Either simply reinstall the node with the yoga modules/tags, or follow these steps:
- Run puppet with the yoga modules/tags
- Run
apt dist-upgrade
- Rerun puppet and restart the service
systemctl restart neutron-bgp-dragent.service
or simply reboot
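To confirm that the dragent reports back in after the restart (or reboot), you can filter the agent list. A minimal sketch, assuming admin credentials are sourced:
openstack network agent list | grep -i bgp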
Network-nodes
Either simply reinstall the node with the yoga modules/tags, or follow these steps:
- Run puppet with the yoga modules/tags
- Run
apt dist-upgrade
- Rerun puppet and restart the service (or simply reboot the host).
systemctl restart ovsdb-server
systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
- Verify that routers on the node actually work.
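A minimal sketch of how that verification can be started from the CLI, assuming admin credentials are sourced; <hostname> and <router-id> are placeholders to fill in:
# confirm the agents on this network node report as alive
openstack network agent list --host <hostname>
# spot-check a router by inspecting its external gateway info
openstack router show <router-id> -c external_gateway_info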