
This article summarizes the steps required to upgrade from the Xena release to the Yoga release of OpenStack.

Prerequisites:

  • This document expects that your cloud is deployed with a recent xena tag of the ntnuopenstack repository.
  • You have a recent mysql backup in case things go south.
  • If you want to do a rolling upgrade, the following key should be set in hiera long enough in advance that all hosts have had a puppet-run to apply it:
    • nova::upgrade_level_compute: '6.0'

    • When the upgrade is finished, the key should still be set to '6.0'
      • (Yoga is 6.0; Zed is 6.1, so the next release will need a change here...)
      • These version numbers can be correlated to release names in the file /usr/lib/python3/dist-packages/nova/compute/rpcapi.py
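
As an example of the key mentioned above, the hiera data could look like this (which hiera level it lives in depends on your deployment):

# Pin the compute RPC API to the xena/yoga level for the rolling upgrade (leave it in place afterwards)
nova::upgrade_level_compute: '6.0'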

The recommended order in which to upgrade the services is listed below:

Keystone

This is the zero-downtime approach.

Before you begin

  • Log in to a mysql node, start the mysql CLI, and run set global log_bin_trust_function_creators=1;
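
For example, from a shell on one of the mysql nodes (assuming local root access to mysql):

# Allow the keystone migrations to create triggers/functions without SUPER privileges
mysql -e "SET GLOBAL log_bin_trust_function_creators = 1;"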

Upgrade-steps (start with a single node):

  1. Set apache::service_ensure: 'stopped' in hiera for the node that you are upgrading
  2. Run puppet with the yoga modules/tags, run apt-get dist-upgrade, and run puppet again
    1. The first puppet-run complains a lot; since it changes its logic for OpenStack auth, all OpenStack-related changes fail. Run puppet once more if this bothers you.
  3. Run keystone-manage doctor and ensure nothing is wrong
  4. Run keystone-manage db_sync --expand
    1. Returns nothing
  5. Run keystone-manage db_sync --migrate
    1. Returns nothing
  6. At this point, you may restart apache2 on this node
    1. Remove the apache::service_ensure: 'stopped' previously set in hiera.
  7. Upgrade keystone on the other nodes, one at a time
    1. Basically, run steps 1, 2 and 6 on the other nodes
  8. When all nodes are upgraded, perform the final DB sync
    1. keystone-manage db_sync --contract
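
After the contract step it is worth confirming that the migration is complete and that keystone still answers; for example (assuming admin credentials are sourced in the shell):

# All of expand/migrate/contract should report as up to date
keystone-manage db_sync --check
# Issuing a token exercises the full auth path through the upgraded keystone
openstack token issue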

Glance

To upgrade glance without any downtime, follow this procedure:

  1. Select which glance-server to upgrade first.
    1. In the node-specific hiera for this host you should set: glance::api::enabled: false 
  2. Run puppet with the yoga modules/tags, run apt-get dist-upgrade, and run puppet again
  3. Run glance-manage db expand
  4. Run glance-manage db migrate
  5. Remove the glance::api::enabled: false from the node-specific hiera, and run puppet again. This will restart the glance api-server on this host.
    1. Test that this api-server works.
  6. Upgrade the rest of the glance hosts (i.e. step 2 for each of the remaining glance hosts)
  7. Run glance-manage db contract on one of the glance-nodes.
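
To test that a freshly upgraded api-server works (step 5.1), you can query it directly before putting it back in rotation; a simple sketch, assuming the standard glance port 9292 and an admin openrc sourced in the shell:

# The API version document should be returned by the upgraded host
curl http://<glance-host>:9292/
# And an image listing should still work end-to-end
openstack image list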

Enable glance quotas through keystone unified limits

If you want to add quotas limiting how much storage tenants can use for their images, you need to register default quotas in keystone. Substitute "SkyLow" with the relevant region-name:

# Default-quota of 10 images and 50GB (limits are given in MiB)
openstack registered limit create --service glance --region SkyLow --default-limit 50000 image_size_total 
openstack registered limit create --service glance --region SkyLow --default-limit 10 image_count_total
# Default-quota of at most 5 images and 50GB being uploaded/staged at the same time
openstack registered limit create --service glance --region SkyLow --default-limit 50000 image_stage_total
openstack registered limit create --service glance --region SkyLow --default-limit 5 image_count_uploading
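
The registered limits can be verified afterwards with:

openstack registered limit list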

Enable the unified limit integration for glance by adding the following lines in hiera:

ntnuopenstack::glance::endpoint::internal::id: '<GLANCE INTERNAL ENDPOINT ID>'
ntnuopenstack::glance::keystone::limits: true
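
The glance internal endpoint ID referenced above can be looked up in keystone, for example:

# The ID column of the internal endpoint is the value to put into hiera
openstack endpoint list --service glance --interface internal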

Cinder

To upgrade cinder without any downtime, follow this procedure:

  1. Add the following three lines to the node-file of the first node you would like to upgrade:
    1. apache::service_ensure: 'stopped'
    2. cinder::scheduler::enabled: false
    3. cinder::volume::enabled: false
  2. Run puppet with the yoga modules/tags, run apt-get dist-upgrade, and run puppet again
  3. Run cinder-manage db sync && cinder-manage db online_data_migrations
  4. Remove the lines added at step 1, re-run puppet, and test that the upgraded cinder version works.
  5. Perform step 2 for the rest of the cinder nodes
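
After each node (and at the end), it is useful to confirm that all cinder services report as up before moving on; for example:

# Every scheduler/volume/backup service should have State 'up'
openstack volume service list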

Neutron

API-nodes

  1. Pick the first node and run puppet with the yoga modules/tags, then run apt-get autoremove && apt-get dist-upgrade
  2. Run neutron-db-manage upgrade --expand
  3. Restart neutron-server.service and rerun puppet
  4. Upgrade the rest of the API-nodes (repeating steps 1 and 3)
  5. Stop all neutron-server processes for a moment, and run:
    1. neutron-db-manage upgrade --contract
  6. Re-start the neutron-server processes
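
Before and after the contract step it can be useful to check the database revision and the general upgrade status, e.g.:

# Show the current alembic revisions for the neutron database
neutron-db-manage current
# Generic upgrade sanity checks shipped with neutron
neutron-status upgrade check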

BGP-agents

Either simply reinstall the node with the yoga modules/tags, or follow these steps:

  1. Run puppet with the yoga modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service
    1. systemctl restart neutron-bgp-dragent.service
    2. or simply reboot
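
Afterwards, check that the agent reports as alive again; for example:

# The BGP dragent on the upgraded node should be listed as alive
openstack network agent list --agent-type bgp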

Network-nodes

Either simply reinstall the node with the yoga modules/tags, or follow these steps:

  1. Run puppet with the yoga modules/tags
  2. Run apt dist-upgrade
  3. Rerun puppet and restart the service (or simply reboot the host).
    1. systemctl restart ovsdb-server
    2. systemctl restart neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service neutron-openvswitch-agent.service neutron-ovs-cleanup.service
  4. Verify that routers on the node actually work.
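
A quick sanity check of the node before trusting the routers, with <node> substituted for the host in question:

# All agents on the node should be alive; then test a router, e.g. by pinging its external gateway IP
openstack network agent list --host <node>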



