
This article summarizes the steps required to upgrade from the Ussuri release to the Victoria release of OpenStack.

Prerequisites:

  • This document expects that your cloud is deployed with a recent ussuri tag of the ntnuopenstack repository.
  • You have a recent mysql backup in case things go south.
  • If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet run to apply it:
    • nova::upgrade_level_compute: 'ussuri'

    • When the upgrade is finished, set this key to 'victoria'

The recommended order in which to upgrade the services is listed below:

Keystone

This is the zero-downtime approach.

Before you begin

  • Log in to a mysql node, start the mysql CLI, and run set global log_bin_trust_function_creators=1;
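
The same setting can be applied non-interactively; a minimal sketch, assuming root access to the mysql CLI on the node:

    # Allow stored-function creation while binary logging is enabled
    # (the keystone schema migrations need this)
    mysql -e "SET GLOBAL log_bin_trust_function_creators = 1;"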

Upgrade steps (start with a single node; a condensed command sketch follows the list):

  1. Set apache::service_ensure: 'stopped' in hiera for the node that you are upgrading
  2. Do one of these two alternatives:
    1. Run puppet with the victoria modules/tags, run apt-get dist-upgrade, and run puppet again
    2. Reinstall the node with the victoria modules/tags
  3. Run keystone-manage doctor and ensure nothing is wrong
  4. Run keystone-manage db_sync --expand
    1. Returns nothing
  5. Run keystone-manage db_sync --migrate
    1. Returns nothing
  6. At this point, you may restart apache2 on this node
    1. Remove the apache::service_ensure: 'stopped' key previously set in hiera.
  7. Upgrade keystone on the other nodes, one at a time
    1. Basically, run steps 1, 2 and 6 on the other nodes
  8. When all nodes are upgraded, perform the final DB sync
    1. keystone-manage db_sync --contract
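
Condensed, the first-node steps look roughly like this; a sketch only, assuming puppet runs are triggered with puppet agent -t (adjust to your tooling):

    # On the first node, with apache stopped via hiera:
    puppet agent -t                     # apply the victoria modules/tags
    apt-get dist-upgrade
    puppet agent -t
    keystone-manage doctor              # should report no issues
    keystone-manage db_sync --expand    # returns nothing on success
    keystone-manage db_sync --migrate   # returns nothing on success
    # Remove the apache::service_ensure key from hiera, then:
    puppet agent -t                     # starts apache2/keystone again

    # After ALL nodes are upgraded, on one node:
    keystone-manage db_sync --contract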

Glance

To upgrade glance without any downtime, follow this procedure (a condensed command sketch follows the list):

  1. Select which glance-server to upgrade first.
    1. In the node-specific hiera for this host you should set glance::api::enabled: false, followed by a puppet run. This stops the glance-api service on the host.
  2. Do one of these two alternatives:
    1. Run puppet with the victoria modules/tags, run apt-get dist-upgrade, and run puppet again
    2. Reinstall the node with the victoria modules/tags
  3. Run glance-manage db expand
  4. Run glance-manage db migrate
  5. Remove the glance::api::enabled: false key from the node-specific hiera, and run puppet again. This restarts the glance API server on this host.
    1. Test that this api-server works.
  6. Upgrade the rest of the glance hosts (i.e., step 2 for each of the remaining glance hosts)
  7. Run glance-manage db contract on one of the glance-nodes.
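
A condensed sketch of the sequence above on the first glance node, again assuming puppet agent -t:

    # glance-api stopped via glance::api::enabled: false in hiera
    puppet agent -t
    apt-get dist-upgrade
    puppet agent -t
    glance-manage db expand
    glance-manage db migrate
    # Remove the hiera key and re-run puppet, then test the API, e.g.:
    openstack image list

    # After all glance hosts are upgraded, on one node:
    glance-manage db contract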

Cinder

To upgrade cinder without any downtime, follow this procedure (a condensed command sketch follows the list):

  1. Add the following three lines to the node-file of the first node you would like to upgrade:
    1. apache::service_ensure: 'stopped'

    2. cinder::scheduler::enabled: false

    3. cinder::volume::enabled: false

  2. Do one of these two alternatives:
    1. Run puppet with the victoria modules/tags, run apt-get dist-upgrade, and run puppet again
    2. Reinstall the node with the victoria modules/tags
  3. Run cinder-manage db sync && cinder-manage db online_data_migrations
  4. Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
  5. Perform steps 1-4 for each of the remaining cinder nodes
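
A condensed per-node sketch of the steps above, assuming puppet agent -t:

    # Services stopped via the three hiera keys from step 1
    puppet agent -t
    apt-get dist-upgrade
    puppet agent -t
    cinder-manage db sync && cinder-manage db online_data_migrations
    # Remove the hiera keys and re-run puppet, then verify, e.g.:
    openstack volume service list    # all services should be up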

Neutron

API-nodes

  1. Pick the first node, and do one of the following (a condensed sketch of the whole sequence follows the list):
    1. Run puppet with the victoria modules/tags, then run apt-get autoremove && apt-get dist-upgrade
    2. Reinstall the node with victoria modules/tags.
  2. Run neutron-db-manage upgrade --expand
  3. Run neutron-db-manage --subproject neutron-fwaas upgrade head
  4. Restart neutron-server.service and rerun puppet
  5. Upgrade the rest of the API-nodes (repeating steps 1 and 4)
  6. Stop all neutron-server processes for a moment, and run:
    1. neutron-db-manage upgrade --contract
  7. Re-start the neutron-server processes
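
A condensed sketch of the API-node sequence, assuming puppet agent -t:

    # First API node, after the victoria upgrade:
    neutron-db-manage upgrade --expand
    neutron-db-manage --subproject neutron-fwaas upgrade head
    systemctl restart neutron-server.service
    puppet agent -t

    # Once all API nodes run victoria:
    systemctl stop neutron-server.service     # on every API node
    neutron-db-manage upgrade --contract      # on one node
    systemctl start neutron-server.service    # on every API node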

BGP-agents

Either simply reinstall the node with the victoria modules/tags, or follow this list:

  1. Run puppet with the victoria modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service
    1. systemctl restart neutron-bgp-dragent.service
    2. or simply reboot

Network-nodes

Either simply reinstall the node with the victoria modules/tags, or follow this list:

  1. Run puppet with the victoria modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service (or simply reboot the host).
    1. systemctl restart ovsdb-server
    2. systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
  4. Verify that routers on the node actually work (a verification sketch follows).
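
A minimal verification sketch; it assumes the openstack CLI with admin credentials, and the angle-bracket placeholders are hypothetical values from your own deployment:

    # All agents on the node should be alive and up:
    openstack network agent list --host <network-node-fqdn>
    # Spot-check a router hosted on the node, e.g. from its namespace:
    ip netns exec qrouter-<router-id> ping -c 3 <instance-ip>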

Placement

  1. Upgrade the first node, either by reinstalling it with the victoria modules/tags, or by following this list (a condensed sketch follows):
    1. Run puppet with victoria modules/tags
    2. Run apt-get purge placement-api placement-common python3-placement && apt-get autoremove && apt-get dist-upgrade
    3. Run puppet again
  2. Run placement-manage db sync; placement-manage db online_data_migrations on the new node.
  3. Upgrade the rest of the nodes, skipping step 2.
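
A condensed sketch of the first-node steps, assuming puppet agent -t:

    puppet agent -t
    apt-get purge placement-api placement-common python3-placement
    apt-get autoremove && apt-get dist-upgrade
    puppet agent -t
    placement-manage db sync
    placement-manage db online_data_migrations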

Nova

To upgrade nova without any downtime, follow this procedure.

Preparations

Before the upgrade can be started, it is important that all data from previous nova releases is migrated to the ussuri release. This is done like so (see the sketch after this list):

  • Run nova-manage db online_data_migrations on an API node. Ensure that it reports that nothing more needs to be done.
  • Make sure that none of the following scheduler filters are used:
    • AggregateCoreFilter
    • AggregateRamFilter
    • AggregateDiskFilter
    • RetryFilter
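
A sketch of those checks; re-run the migrations until nothing remains, and grep the scheduler configuration for the removed filters (the nova.conf path is the default and may differ in your deployment):

    # Repeat until it reports 0 rows needing migration:
    nova-manage db online_data_migrations
    # Should return no matches on the scheduler nodes:
    grep -E 'Aggregate(Core|Ram|Disk)Filter|RetryFilter' /etc/nova/nova.conf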

Nova API

  1. In the node-specific hiera, disable the services on the first node you would like to upgrade with the following key (a condensed sketch follows the list):
    1. apache::service_ensure: 'stopped'

  2. Do one of:
    1. Run puppet with the victoria modules/tags, then run apt dist-upgrade && apt-get autoremove
    2. Reinstall the node with victoria modules/tags
  3. Run nova-manage api_db sync
  4. Run nova-manage db sync
  5. Re-enable the API on the upgraded node:
    1. Remove apache::service_ensure: 'stopped' from the upgraded node's hiera file
  6. Upgrade the rest of the nodes (basically run step 2)
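
A condensed sketch of the first-node steps, assuming puppet agent -t:

    # First API node, apache stopped via hiera:
    puppet agent -t
    apt dist-upgrade && apt-get autoremove
    nova-manage api_db sync
    nova-manage db sync
    # Remove apache::service_ensure: 'stopped' from hiera, then:
    puppet agent -t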

Nova-services

  1. Run puppet with the victoria modules/tags
  2. Run apt dist-upgrade && apt-get autoremove
  3. Run puppet and restart the services (a verification sketch follows)
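
After the restarts it is worth verifying that everything reports in; a sketch, assuming admin credentials:

    openstack compute service list    # all services up and enabled
    nova-status upgrade check         # should report no failures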

Heat

The rolling-upgrade procedure for heat includes a step where you are supposed to create a new rabbit vhost. I don't want that. Therefore, these are the cold-upgrade steps (a condensed sketch follows the list):

  1. Set heat::api::enabled: false, heat::engine::enabled: false and heat::api_cfn::enabled: false in hiera to stop all services
  2. Do one of:
    1. Run puppet with victoria modules/tags, Run apt-get update && apt-get dist-upgrade && apt-get autoremove
    2. Reinstall the nodes with victoria modules/tags
  3. Run heat-manage db_sync on one of the api-nodes.
  4. Remove the hiera keys that disabled the services and re-run puppet
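
A condensed sketch of the cold upgrade, assuming puppet agent -t:

    # All heat services stopped via the hiera keys, on every node:
    puppet agent -t
    apt-get update && apt-get dist-upgrade && apt-get autoremove
    # On ONE of the api-nodes:
    heat-manage db_sync
    # Remove the hiera keys and re-run puppet everywhere:
    puppet agent -t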

Barbican

Barbican must be stopped during the upgrade, which can thus be performed on all barbican hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed. A condensed sketch follows the list.

  1. Stop all barbican-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the barbican hosts:
    1. barbican::worker::enabled: false

    2. apache::service_ensure: 'stopped'

  2. Run puppet with the victoria modules/tags

  3. Run apt dist-upgrade && apt-get autoremove

  4. Run barbican-db-manage upgrade

  5. Re-start barbican services by removing the keys added in step 1 and re-run puppet.
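
A condensed sketch of the steps above, assuming puppet agent -t:

    # Services stopped via the two hiera keys from step 1
    puppet agent -t
    apt dist-upgrade && apt-get autoremove
    barbican-db-manage upgrade
    # Remove the hiera keys, then:
    puppet agent -t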

Magnum

Magnum must be stopped during the upgrade, which can thus be performed on all magnum hosts at the same time. It might be an idea to keep one set of hosts stopped on the old code in case a sudden roll-back is needed. A condensed sketch follows the list.

  1. Reinstall the server with CentOS 8
  2. Stop all magnum-services by adding the following keys to node-specific hiera, and then make sure to run puppet on the magnum hosts:
    1. magnum::conductor::enabled: false

    2. apache::service_ensure: 'stopped'

  3. Run puppet with the victoria modules/tags

  4. Run yum upgrade

  5. Run su -s /bin/sh -c "magnum-db-manage upgrade" magnum

  6. Re-start magnum services by removing the keys added in step 1 and re-run puppet.
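
A condensed sketch of the steps above, assuming puppet agent -t:

    # Services stopped via the two hiera keys from step 2
    puppet agent -t
    yum upgrade
    su -s /bin/sh -c "magnum-db-manage upgrade" magnum
    # Remove the hiera keys, then:
    puppet agent -t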
