...

  1. Run puppet with the ussuri modules/tags
  2. Purge the keystone and apache2 packages
  3. Run apt-get autoremove && apt dist-upgrade && apt clean && apt autoclean
  4. Run puppet again
    1. This will re-install keystone (ensure that apache2 does not start; puppet should take care of this via the apache::service_ensure: 'stopped' key set in hiera)
  5. Run keystone-manage doctor and ensure nothing is wrong
  6. Run keystone-manage db_sync --expand
    1. Returns nothing
  7. Run keystone-manage db_sync --migrate
    1. Returns nothing
  8. At this point, you may restart apache2 on this node
    1. Remove the apache::service_ensure: 'stopped' key previously set in hiera.
  9. Upgrade keystone on the other nodes, one at a time
    1. Basically, run steps 1-5 on the other nodes
  10. When all nodes are upgraded, perform the final DB sync
    1. keystone-manage db_sync --contract
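
To verify the upgrade, keystone should both report a fully synced database and be able to hand out tokens. A minimal check, assuming admin credentials are sourced on the host you run it from:

    keystone-manage db_sync --check
    openstack token issue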

...

  1. Purge old DB-records
    1. cinder-manage db purge 0
  2. Add the following three lines to the node-file of the first node you would like to upgrade:
    1. apache::service_ensure: 'stopped'

    2. cinder::scheduler::enabled: false

    3. cinder::volume::enabled: false

  3. Run puppet on the first host with ussuri modules/tags
  4. Run apt-get autoremove && apt-get dist-upgrade
  5. Run puppet again
  6. Run cinder-manage db sync && cinder-manage db online_data_migrations
  7. Remove the lines added at step 2, re-run puppet, and test that the upgraded cinder version works (see the example check below).
  8. Perform steps 2-5 for the rest of the cinder nodes
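
As an example check for step 7, the service list should show the scheduler and volume services as up again once puppet has re-enabled them (assuming admin credentials are sourced):

    openstack volume service list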

Neutron

API-nodes

  1. Pick the first node, and run puppet with the ussuri modules/tags
  2. Run apt-get autoremove && apt-get dist-upgrade
  3. Run neutron-db-manage upgrade --expand
  4. Run neutron-db-manage --subproject neutron-fwaas upgrade head
  5. Restart neutron-server.service and rerun puppet
  6. Upgrade the rest of the API-nodes (repeating steps 1, 2 and 5)
  7. Stop all neutron-server processes for a moment, and run:
    1. neutron-db-manage upgrade --contract
  8. Re-start the neutron-server processes
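
As a sanity check after the contract step, the neutron status tool can be run on one of the API-nodes, and the agent list should show all agents as alive (assuming admin credentials are sourced):

    neutron-status upgrade check
    openstack network agent list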

BGP-agents

  1. Run puppet with the ussuri modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service
    1. systemctl restart neutron-bgp-dragent.service
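
After the restart, the BGP dragent should report back as alive in the agent list (assuming admin credentials are sourced):

    openstack network agent list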

Network-nodes

  1. Run puppet with the ussuri modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service
    1. systemctl restart ovsdb-server
    2. systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
  4. Verify that routers on the node actually work (see the example checks below). We had to reinstall a node in skylow to make it work.
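
Example checks for step 4; the host name, router ID and gateway IP below are placeholders that must be replaced with real values, and the openstack command assumes admin credentials are sourced:

    # All agents on the node should be alive after the restarts
    openstack network agent list --host <network-node>
    # Ping out through one of the routers hosted on the node
    ip netns exec qrouter-<router-id> ping -c 3 <gateway-ip>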

Placement

  1. Run puppet with ussuri modules/tags
  2. Run apt-get purge placement-api placement-common python3-placement && apt-get autoremove && apt-get dist-upgrade
  3. Run puppet again
  4. Run placement-manage db sync; placement-manage db online_data_migrations
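
Placement ships its own status tool, which can be used to verify that the upgrade and data migrations completed; it should report no failed checks:

    placement-status upgrade check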

Nova

To upgrade nova without any downtime, follow this procedure:

Preparations

Before the upgrades can be started, it is important that all data from previous nova releases has been migrated to Stein's format. This is done like so:

  • Run nova-manage db online_data_migrations on an API node. Ensure that it reports that nothing more needs to be done.
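
In addition to the online data migrations, the nova status tool can be used as a pre-flight check on an API node; it should not report any failed checks before the upgrade is started:

    nova-status upgrade check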

Nova API

  1. In the node-specific hiera, disable the services on the first node you would like to upgrade with the following key:
    1. apache::service_ensure: 'stopped'

  2. Run puppet with the ussuri modules/tags
  3. Run apt dist-upgrade && apt-get autoremove
  4. Run nova-manage api_db sync
  5. Run nova-manage db sync
  6. Re-enable apache2 (and thus the API) on the upgraded node:
    1. Remove apache::service_ensure: 'stopped' from the upgraded node's hiera file
  7. Upgrade the rest of the nodes (basically run steps 1-3, re-run puppet and restart nova-api and apache2)
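
A simple way to verify that the upgraded API answers again after apache2 is re-enabled (assuming admin credentials are sourced):

    openstack server list --all-projects --limit 1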

Nova-services

  1. Run puppet with the ussuri modules/tags
  2. Run apt dist-upgrade && apt-get autoremove
  3. Run puppet and restart services
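
After the restarts, the compute service list should show all schedulers, conductors and compute services as up (assuming admin credentials are sourced):

    openstack compute service list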

Heat

The rolling upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost. I don't want that, so these are the cold upgrade steps.

Step 4 is only for the API-nodes, so the routine should be run on the API-nodes first.

  1. Set heat::api::enabled: false and heat::engine::enabled: false and heat::api_cfn::enabled: false in hiera to stop all services
  2. Run puppet with ussuri modules/tags
  3. Run apt-get update && apt-get dist-upgrade && apt-get autoremove
  4. Run heat-manage db_sync on one of the api-nodes.
  5. Remove the hiera keys that disabled the services and re-run puppet
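
To verify that heat works again after the services are re-enabled (assuming admin credentials are sourced and the heat OSC plugin is installed):

    openstack stack list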

Barbican

Barbican must be stopped during the upgrade, so the upgrade can be performed on all barbican hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all barbican-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the barbican hosts:
    1. barbican::worker::enabled: false

    2. apache::service_ensure: 'stopped'

  2. Run puppet with the ussuri modules/tags

  3. Run apt dist-upgrade && apt-get autoremove

  4. Run barbican-db-manage upgrade

  5. Re-start barbican services by removing the keys added in step 1 and re-run puppet.
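
To verify that barbican answers again after the services are re-started (assuming admin credentials are sourced and the barbican OSC plugin is installed):

    openstack secret list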

Magnum

Magnum must be stopped during the upgrade, so the upgrade can be performed on all magnum-hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Reinstall the server to CentOS 8
  2. Stop all magnum-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the magnum hosts:
    1. magnum::conductor::enabled: false

    2. apache::service_ensure: 'stopped'

  3. Add magnum::keystone::keystone_auth::auth_url: "%{alias('magnum::keystone::authtoken::auth_url')}" and magnum::keystone::keystone_auth::password: "%{alias('magnum::keystone::authtoken::password')}" to ntnuopenstack.yaml
  4. Run puppet with the ussuri modules/tags

  5. Run yum upgrade

  6. Run su -s /bin/sh -c "magnum-db-manage upgrade" magnum

  7. Re-start magnum services by removing the keys added in step 1 and re-run puppet.
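
To verify that magnum answers again after the services are re-started (assuming admin credentials are sourced and the magnum OSC plugin is installed):

    openstack coe cluster list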

Octavia

Octavia must be stopped during the upgrade, so the upgrade can be performed on all octavia-hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed.

  1. Stop all octavia-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the octavia hosts:
    1. octavia::housekeeping::enabled: false

    2. octavia::health_manager::enabled: false

    3. octavia::api::enabled: false

    4. octavia::worker::enabled: false

  2. Run puppet with the ussuri modules/tags

  3. Run apt-get dist-upgrade && apt-get autoremove

  4. Run puppet
  5. Run octavia-db-manage upgrade head

  6. Re-start octavia services by removing the keys added in step 1 and re-run puppet.

  7. Build a ussuri-based octavia-image and upload it to glance. Tag it and make octavia start replacing the amphorae (see the example commands below).
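
Example commands for step 7, assuming admin credentials are sourced and the octavia OSC plugin is installed; the amphora list can be used to follow the replacement of the old amphorae:

    openstack loadbalancer list
    openstack loadbalancer amphora list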

Horizon

  1. Reinstall the server to CentOS 8
  2. Run puppet with the ussuri modules/tags
  3. Run yum upgrade
  4. Run puppet again
  5. Restart httpd

Compute-nodes

When all APIs etc. are upgraded, it is time to do the same on the compute-nodes. Compute nodes are simple to upgrade:

  1. Run puppet with the ussuri modules/tags
  2. Run apt dist-upgrade && apt-get autoremove
  3. Reboot the compute-node
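
After the reboot, verify that the node reports back in; assuming admin credentials are sourced (and with <compute-node> as a placeholder for the rebooted host), both the compute service and the neutron agents on the node should show as up/alive:

    openstack compute service list --service nova-compute
    openstack network agent list --host <compute-node>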

GPU-nodes

  1. Reinstall the server to CentOS 8
  2. Run puppet with the ussuri modules/tags
  3. Run yum upgrade && yum autoremove
  4. Run puppet again
  5. Restart openstack services and openvswitch-services

Finalizing

  • Run nova-manage db online_data_migrations on a nova API node. Ensure that it reports that nothing more needs to be done.