When a VM is scheduled on a compute node, the VM's resources are reserved on that node. The scheduler takes care not to schedule more VMs than the compute node is able to host. The Placement service keeps track of these resources, and it allows a certain amount of overcommitment. When allowing a higher degree of overcommitment, you should keep an eye on the real resource usage of the compute node.
Display current resource commitment
To manipulate the inventory of a compute node you need the UUID of that compute node's resource provider in Placement:
$ openstack resource provider list
+--------------------------------------+-------------------------------------------------+------------+
| uuid                                 | name                                            | generation |
+--------------------------------------+-------------------------------------------------+------------+
| 038a659d-6577-4479-b450-a6953ebcae13 | compute02.infra.skylow.iik.ntnu.no              |       4620 |
| e5956494-254f-4556-b942-2a899700f173 | compute01.infra.skylow.iik.ntnu.no              |       5245 |
| 2b349c50-da83-402b-a95c-87df42125041 | compute03.infra.skylow.iik.ntnu.no              |       5365 |
| 84beb0c5-e1d1-48eb-b545-cc091812c335 | gpu01.infra.skylow.iik.ntnu.no                  |        604 |
| 0f9253b5-089e-400c-9b49-2957b0aa6668 | gpu01.infra.skylow.iik.ntnu.no_pci_0000_3f_00_0 |          9 |
| 193f92d9-d46f-4d8e-93ba-08283c285e4d | gpu01.infra.skylow.iik.ntnu.no_pci_0000_3d_00_0 |         16 |
| b648ecbf-50b4-4f76-ba23-26404b8517c0 | gpu01.infra.skylow.iik.ntnu.no_pci_0000_3e_00_0 |         10 |
| 16ad55c0-aeda-4d4d-a2ff-9eba51dbfdc4 | gpu01.infra.skylow.iik.ntnu.no_pci_0000_40_00_0 |         18 |
+--------------------------------------+-------------------------------------------------+------------+
To see the resource settings for compute01 you can use the following command:
$ openstack resource provider inventory list e5956494-254f-4556-b942-2a899700f173
+----------------+------------------+----------+----------+----------+-----------+-------+
| resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size | total |
+----------------+------------------+----------+----------+----------+-----------+-------+
| VCPU           |             16.0 |        1 |       16 |        0 |         1 |    16 |
| MEMORY_MB      |              1.0 |        1 |    48281 |      512 |         1 | 48281 |
| DISK_GB        |              1.0 |        1 |     8518 |        0 |         1 |  8518 |
+----------------+------------------+----------+----------+----------+-----------+-------+
The allocation_ratio column in the output above shows that we overcommit CPUs 16-to-1, while we do not allow any overcommitment of RAM.
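Placement computes the schedulable capacity of a resource class as (total - reserved) * allocation_ratio. As a quick sketch, plugging in the compute01 values from the inventory above:

```shell
# Capacity formula used by the Placement service:
#   capacity = (total - reserved) * allocation_ratio
# The values below are taken from the compute01 inventory output above.
total_vcpu=16; reserved_vcpu=0; ratio_vcpu=16
echo "$(( (total_vcpu - reserved_vcpu) * ratio_vcpu )) schedulable vCPUs"

total_mem=48281; reserved_mem=512; ratio_mem=1.0
awk -v t="$total_mem" -v r="$reserved_mem" -v a="$ratio_mem" \
    'BEGIN { printf "%d MB schedulable memory\n", (t - r) * a }'
```

So this node can hand out 256 vCPUs to guests, but no more memory than it physically has (minus the 512 MB reserved for the host).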
Modify resource overcommitment
To change the allocation ratio for memory you can use the following command:
$ openstack resource provider inventory set --resource MEMORY_MB:allocation_ratio=1.1 --amend e5956494-254f-4556-b942-2a899700f173
+----------------+------------------+----------+----------+----------+-----------+-------+
| resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size | total |
+----------------+------------------+----------+----------+----------+-----------+-------+
| VCPU           |             16.0 |        1 |       16 |        0 |         1 |    16 |
| MEMORY_MB      |              1.1 |        1 |    48281 |      512 |         1 | 48281 |
| DISK_GB        |              1.0 |        1 |     8518 |        0 |         1 |  8518 |
+----------------+------------------+----------+----------+----------+-----------+-------+
The compute node now allows a 10% overcommitment of memory.
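To see what the new ratio buys you, apply Placement's capacity formula, (total - reserved) * allocation_ratio, with the updated value:

```shell
# Schedulable memory on compute01 after raising allocation_ratio to 1.1
# (values taken from the inventory output above).
awk 'BEGIN { printf "%.0f MB schedulable (was %d MB at ratio 1.0)\n",
             (48281 - 512) * 1.1, 48281 - 512 }'
```

Roughly 4.8 GB of extra memory becomes schedulable, which the node can only honour as long as the guests do not all use their full allocation at once.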
It is also possible to change the allocation_ratio for all compute nodes with a one-liner. The --resource VCPU=1 filter selects only providers that expose VCPUs, skipping e.g. the per-GPU PCI sub-providers seen in the provider list above:
$ for id in $(openstack resource provider list --resource VCPU=1 -f value -c uuid); do openstack resource provider inventory set --resource MEMORY_MB:allocation_ratio=1.1 --amend $id; done