
Schedule Windows VMs to licensed hosts

We pay for the ability to launch licensed Windows Server VMs on some of our compute-nodes. To comply with the license agreements we need to limit Windows VMs to the compute-nodes we pay the fees for. This is done using the AggregateImagePropertiesIsolation scheduling filter. We create a host-aggregate called windowscompute, which has the following property set:

  • os_type='windows'

When we upload Windows images we make sure to set the same property on the image. OpenStack will then make sure to boot VMs based on that image on one of the hosts in the host-aggregate. If none of the hosts in the aggregate are able to fulfill the request, the VM will fail to be scheduled.
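
As a sketch, the aggregate and properties described above could be set up with commands along these lines (the host name compute18 and the image name windows-server-2019 are placeholders, not taken from our environment):

Code Block
# Create the aggregate, tag it and add the licensed hosts (host name is an example)
$ openstack aggregate create windowscompute
$ openstack aggregate set --property os_type='windows' windowscompute
$ openstack aggregate add host windowscompute compute18
# Tag the Windows image with the same property so the filter can match it (image name is an example)
$ openstack image set --property os_type='windows' windows-server-2019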

Do not fill windows-hosts unless all other hosts are full

Limiting some images to a set of hosts introduces the risk that these images cannot be booted when those hosts are full of other machines. One issue with filling one hypervisor before attempting the next is that we might fill the hypervisors licensed for Windows with non-Windows VMs before filling the non-Windows compute-nodes. This would end up in a situation where we cannot schedule new Windows VMs even though we have plenty of space left for them on other nodes. To avoid this we can add metadata to the windowscompute host-aggregate setting a very low weight on these hosts. This makes sure that we only use the windows-hosts if the VMs cannot be placed elsewhere (because all other hypervisors are full, or because we are scheduling a Windows VM).

The metadata we set on the windowscompute aggregate is:

  • ram_weight_multiplier='-2000'
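
The multiplier can be set as aggregate metadata like this (a sketch, reusing the windowscompute aggregate from above):

Code Block
# Give the windows-hosts a very low weight so the scheduler prefers other hosts
$ openstack aggregate set --property ram_weight_multiplier='-2000' windowscompute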

Schedule GPU-instances to GPU-equipped compute-nodes

To make sure that we schedule GPU-based flavors to GPU-equipped compute-nodes (and general-purpose VMs to general-purpose compute-nodes) we employ the AggregateInstanceExtraSpecsFilter. We create host-aggregates with metadata under the key "node_type" describing what kind of compute-node this is. For instance, we have the following host-aggregate in SkyHiGh:

Code Block
$ openstack aggregate show general-purpose
+-------------------+------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova                                                                                                 |
| created_at        | 2019-05-06T11:59:45.000000                                                                           |
| deleted           | False                                                                                                |
| deleted_at        | None                                                                                                 |
| hosts             | compute01, compute02, compute03, compute04, compute05, compute06, compute07, compute08, compute09,   |
|                   | compute10, compute11, compute12, compute13, compute14, compute15, compute16, compute17               |
| id                | 5                                                                                                    |
| name              | general-purpose                                                                                      |
| properties        | node_type='general', os_type='any'                                                                   |
| updated_at        | 2020-06-25T08:05:31.000000                                                                           |
+-------------------+------------------------------------------------------------------------------------------------------+
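
For reference, an aggregate like this could be created and tagged roughly as follows (a sketch, not the exact commands used; the host name is taken from the listing above):

Code Block
# Create the aggregate, set the metadata the filter matches on, and add hosts
$ openstack aggregate create general-purpose
$ openstack aggregate set --property node_type='general' --property os_type='any' general-purpose
$ openstack aggregate add host general-purpose compute01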

We then tag flavors which should be able to run on these compute-nodes with the same value. In SkyHiGh the m1.medium flavor is, for instance, considered a general-purpose flavor, and should thus be placed on a node with the type 'general':

Code Block
$ openstack flavor show m1.medium
+----------------------------+---------------------------------------------------------------------------------------------+
| Field                      | Value                                                                                       |
+----------------------------+---------------------------------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                                                       |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                           |
| access_project_ids         | None                                                                                        |
| disk                       | 40                                                                                          |
| id                         | 1ff86526-c425-4b48-87ac-83826e1b7136                                                        |
| name                       | m1.medium                                                                                   |
| os-flavor-access:is_public | True                                                                                        |
| properties                 | aggregate_instance_extra_specs:node_type='general', hw:cpu_cores='1', hw:cpu_sockets='2',   |
|                            | hw:cpu_threads='1', hw_rng:allowed='true', hw_rng:rate_bytes='24',                          |
|                            | hw_rng:rate_period='5000', quota:disk_read_iops_sec='300', quota:disk_write_iops_sec='300'  |
| ram                        | 8192                                                                                        |
| rxtx_factor                | 1.0                                                                                         |
| swap                       |                                                                                             |
| vcpus                      | 2                                                                                           |
+----------------------------+---------------------------------------------------------------------------------------------+
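
The matching extra spec on the flavor can be set like this (a sketch based on the properties shown above):

Code Block
# Tie the flavor to aggregates with node_type='general'
$ openstack flavor set --property aggregate_instance_extra_specs:node_type='general' m1.medium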

Our strategy to make sure all placement is correct is to tag all general-purpose flavors with 'general', and then create specific host-aggregates for specific flavors. For instance, GPU-enabled flavors would be tagged with another value of node_type, which will also be used to tag the compute-nodes with these GPUs installed.
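
As a sketch, a GPU aggregate and flavor could then be tagged along these lines (the names gpu-compute, gpu01 and gpu.medium, and the value 'gpu', are examples, not our actual names):

Code Block
# Aggregate for the GPU-equipped compute-nodes (names and values are examples)
$ openstack aggregate create gpu-compute
$ openstack aggregate set --property node_type='gpu' gpu-compute
$ openstack aggregate add host gpu-compute gpu01
# GPU flavors get the matching extra spec so they only land on these hosts
$ openstack flavor set --property aggregate_instance_extra_specs:node_type='gpu' gpu.medium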