...
- Run puppet with the yoga modules/tags
- Run
apt dist-upgrade
- Rerun puppet and restart the services (or simply reboot the host):
systemctl restart ovsdb-server
systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
- Verify that routers on the node actually work.
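One way to inspect the routers on the node is to loop over the router network namespaces (the `qrouter-<router-id>` naming is the neutron default; further checks, such as pinging the external gateway from inside a namespace, are left to the operator):

```shell
#!/bin/sh
# List neutron router namespaces on this node and show their interfaces.
# No output means no routers are currently scheduled here.
for ns in $(ip netns list 2>/dev/null | awk '{print $1}' | grep '^qrouter-'); do
  echo "router namespace: ${ns}"
  ip netns exec "${ns}" ip -brief address   # needs root
done
```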
Placement
- Upgrade the first node, either by reinstalling it with the yoga modules/tags, or by following this list:
- Run puppet with yoga modules/tags
- Run
systemctl stop puppet apache2
- Run
apt-get purge placement-api placement-common python3-placement && apt-get autoremove && apt-get dist-upgrade
- Run puppet again
- Run
placement-manage db sync; placement-manage db
online_data_migrations
on the new node.
- Upgrade the rest of the nodes (step 1)
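The package and database steps above can be sketched as one sequence; shown here as a dry-run that only prints the commands (swap the echo in `run()` for `"$@"` to execute for real, as root):

```shell
#!/bin/sh
# Dry-run of the placement upgrade steps on the first node.
run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute
run systemctl stop puppet apache2
run apt-get purge placement-api placement-common python3-placement
run apt-get autoremove
run apt-get dist-upgrade
run puppet agent -t      # with the yoga modules/tags
run placement-manage db sync
run placement-manage db online_data_migrations
```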
Nova
To upgrade nova without any downtime, follow this procedure:
Preparations
Before the upgrade can be started, it is important that all data from previous nova releases is migrated to the wallaby release. This is done like so:
- Run
nova-manage db online_data_migrations
on an API node. Ensure that it reports that nothing more needs to be done.
Nova API
- In the node-specific hiera, disable the services at the first node you would like to upgrade with the keys
apache::service_ensure: 'stopped'
- Do one of:
- Run puppet with the yoga modules/tags, then run
apt dist-upgrade && apt-get autoremove
- Reinstall the node with yoga modules/tags
- Run
nova-manage api_db sync
- Run
nova-manage db sync
- Re-enable nova API on the upgraded node:
- Remove
apache::service_ensure: 'stopped'
from the upgraded node's hiera file
- Upgrade the rest of the nodes (basically run step 2)
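For one API node, the upgrade-in-place variant of step 2 plus the schema syncs can be sketched as the following dry-run (it only prints the commands; swap the echo in `run()` for `"$@"` to execute, and keep `apache::service_ensure: 'stopped'` in hiera until the syncs are done):

```shell
#!/bin/sh
# Dry-run of upgrading a single nova API node.
run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute
run puppet agent -t      # with the yoga modules/tags
run apt dist-upgrade
run apt-get autoremove
run nova-manage api_db sync
run nova-manage db sync
```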
Nova-services
- Run puppet with the yoga modules/tags
- Run
apt dist-upgrade && apt-get autoremove
- Run puppet and restart services
Enable nova quotas through keystone unified limits
Warning: The nova project is currently testing the unified quota system, but does not recommend it for production use!
If you want to test the new unified quota system you first need to register some relevant limits. Substitute "SkyLow" with the relevant region-name:
# Default quota of 20 VCPUs, no VGPUs, 40 GB of RAM and 20 servers
openstack registered limit create --service nova --region SkyLow --default-limit 20 class:VCPU
openstack registered limit create --service nova --region SkyLow --default-limit 0 class:VGPU
openstack registered limit create --service nova --region SkyLow --default-limit 40960 class:MEMORY_MB
openstack registered limit create --service nova --region SkyLow --default-limit 20 servers
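The same four limits can also be generated from a small table, which makes it easy to adapt the region name; sketched here as a dry-run that prints the commands instead of running them:

```shell
#!/bin/sh
# Build the registered-limit commands for a region, then print them.
REGION="SkyLow"   # substitute the relevant region name
CMDS=$(while read -r limit resource; do
  echo "openstack registered limit create --service nova --region ${REGION} --default-limit ${limit} ${resource}"
done <<'EOF'
20 class:VCPU
0 class:VGPU
40960 class:MEMORY_MB
20 servers
EOF
)
echo "${CMDS}"   # pipe to sh to execute for real
```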
Enable the unified limit integration for nova by adding the following lines in hiera:
ntnuopenstack::nova::endpoint::internal::id: '<NOVA INTERNAL ENDPOINT ID>'
ntnuopenstack::nova::keystone::limits: true
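The endpoint ID for the first key can be looked up with the openstack CLI (assuming admin credentials are sourced); shown here as a dry-run that only prints the command:

```shell
#!/bin/sh
# Command to look up the nova internal endpoint ID for the hiera key.
CMD="openstack endpoint list --service nova --interface internal -f value -c ID"
echo "+ ${CMD}"
# eval "${CMD}"   # uncomment to run against the cloud
```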
Compute-nodes
When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:
- Do one of:
- Reinstall the node with yoga modules/tags
- Run puppet with the yoga modules/tags, then run
apt dist-upgrade && apt-get autoremove
- Reboot the compute-node
- When it comes up, verify that the storage interface is up. If it isn't, run a manual puppet run to fix it.
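The storage-interface check can be scripted; `ens3` below is a placeholder for the node's actual storage interface name:

```shell
#!/bin/sh
# Check whether the (placeholder) storage interface is up after a reboot.
IFACE="ens3"
if ip -brief link show "${IFACE}" 2>/dev/null | grep -q ' UP '; then
  echo "${IFACE} is up"
else
  echo "${IFACE} is down or missing - run 'puppet agent -t' to fix it"
fi
```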
GPU-nodes
- The mdev-mappings need yet another change in hiera. This time you should:
Replace the old mapping with the nova::compute::mdev::mdev_types parameter, for example:
nova::compute::mdev::mdev_types:
  nvidia-45:
    device_addresses: [ '0000:3d:00.0', '0000:3e:00.0', '0000:3f:00.0', '0000:40:00.0' ]
- Remove the old keys:
nova::compute::mdev::mdev_types_device_addresses_mapping
nova::compute::vgpu::vgpu_types_device_addresses_mapping
- Run puppet with the yoga modules/tags
- Run
apt dist-upgrade && apt autoremove
- Run puppet again
- Restart the OpenStack services and the Open vSwitch services
Finalizing
- Run
nova-manage db online_data_migrations
on a nova API node. Ensure that it reports that nothing more needs to be done.
- Rotate octavia images.