anastop's profile - activity

2020-04-26 23:34:01 -0600 received badge Notable Question
2019-11-25 02:56:58 -0600 received badge Popular Question
2019-07-05 06:03:07 -0600 commented question Force scheduling into same NUMA node

Thank you, Bernd, for following up. I will try the mailing list.

2019-07-03 09:25:25 -0600 commented question Force scheduling into same NUMA node

Unfortunately, no. That doc describes various NUMA-aware policies, but only for a single guest at a time. Moreover, none of these policies allows specifying particular NUMA node IDs to be used at scheduling time. What I need is a way to express affinity/anti-affinity policies, but at the NUMA-node level rather than the host level.
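
For context, the per-guest NUMA controls that the doc refers to are flavor extra specs along these lines (the flavor name below is hypothetical; hw:numa_nodes and hw:cpu_policy are documented properties):

$ openstack flavor create numa.pinned --id auto --ram 256 --disk 1 --vcpus 4
$ openstack flavor set numa.pinned --property hw:numa_nodes=1 --property hw:cpu_policy=dedicated
# Constrains the topology of this guest only; it says nothing about where another guest lands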

2019-07-03 09:25:25 -0600 commented question Expected behavior when dynamically changing VCPU allocation_ratio

I've checked out devstack at commit a928853692 (2019-06-25), so I assume it fetches the master branches of the individual projects as of that date. If so, my OpenStack version should already include the Stein fixes.
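
One way to confirm which code devstack actually deployed is to inspect its checkouts (the /opt/stack path below is devstack's default DEST; adjust if it was overridden):

$ git -C /opt/stack/nova log -1 --oneline
$ git -C /opt/stack/nova rev-parse --abbrev-ref HEAD
# Shows the exact nova commit and the branch it was checked out from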

2019-07-02 02:40:09 -0600 asked a question Force scheduling into same NUMA node

Is there a way in OpenStack to force two instances to be scheduled on the same NUMA node of a host, assuming they have already been placed on the same host (e.g. via the appropriate affinity filters)?

This is important when efficient inter-VM communication is a strong requirement (e.g. for certain NFV workloads).
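
For the host-level part, the usual mechanism is a server group with an affinity policy; a minimal sketch (group and server names are placeholders):

$ openstack server group create --policy affinity same-host-group
$ openstack server create --image <image> --flavor <flavor> --hint group=<server-group-uuid> vm1
# Co-locates the instances on the same host; the NUMA node within that host is still unconstrained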

2019-07-02 02:40:09 -0600 asked a question Expected behavior when dynamically changing VCPU allocation_ratio

I am experimenting with the Placement API in a devstack VM (commit a928853692, June 25 2019) that has 16 vCPUs.

Using the Placement API, I have successfully changed the allocation_ratio of the VCPU class from its default value (16.0) to 1.0.
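
For reference, the resource provider UUID used below can be looked up with the osc-placement CLI plugin; something like this should work:

$ openstack resource provider list
$ openstack resource provider inventory list e79d839a-71e9-4b13-bc21-6633bd44d8b4
# One provider per compute node; the inventory list shows the current VCPU settings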

$ openstack resource provider inventory set --resource VCPU:allocation_ratio=1.0 --resource VCPU:total=16 e79d839a-71e9-4b13-bc21-6633bd44d8b4

+----------------+------------------+------------+----------+-----------+----------+-------+
| resource_class | allocation_ratio |   max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+------------+----------+-----------+----------+-------+
| VCPU           |              1.0 | 2147483647 |        0 |         1 |        1 |    16 |
+----------------+------------------+------------+----------+-----------+----------+-------+

After that, I attempt to "overcommit" the 16 VCPUs: I create a new flavor with 7 VCPUs and try to launch 3 instances of it. As expected, the 3rd launch fails.

$ openstack flavor create many.vcpus --id auto --ram 256 --disk 1 --vcpus 7

$ openstack server create --image cirros-0.4.0-x86_64-disk --flavor many.vcpus foo1
# OK, now using 7 VCPUs

$ openstack server create --image cirros-0.4.0-x86_64-disk --flavor many.vcpus foo2
# OK, now using 14 VCPUs

$ openstack server create --image cirros-0.4.0-x86_64-disk --flavor many.vcpus foo3
# FAILS
Quota exceeded for cores: Requested 7, but already used 14 of 20 cores (HTTP 403) (Request-ID: req-de145340-8f53-46db-a838-5fd9019baaf5)

(As a side note, OpenStack incorrectly reports a total of 20 cores available.)
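
A guess about where the 20 comes from: the error text is a quota message, and Nova's per-project cores quota defaults to 20, which can be checked with:

$ openstack quota show
# If "cores" is 20 here, the limit in the error is the project quota, not a value derived from the inventory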

Now, if I change the allocation_ratio to 2.0, which should be enough to accommodate the 3rd instance, my new request still fails with the same error as before.

$ openstack resource provider inventory set --resource VCPU:allocation_ratio=2.0 --resource VCPU:total=16 e79d839a-71e9-4b13-bc21-6633bd44d8b4

+----------------+------------------+------------+----------+-----------+----------+-------+
| resource_class | allocation_ratio |   max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+------------+----------+-----------+----------+-------+
| VCPU           |              2.0 | 2147483647 |        0 |         1 |        1 |    16 |
+----------------+------------------+------------+----------+-----------+----------+-------+

$ openstack server create --image cirros-0.4.0-x86_64-disk --flavor many.vcpus foo3
Quota exceeded for cores: Requested 7, but already used 14 of 20 cores (HTTP 403) (Request-ID: req-9efeabd9-f881-4fd0-8dad-85ae08fc306a)

Is this behavior expected?
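
For comparison, placement's own view of the consumption can be checked with the osc-placement plugin:

$ openstack resource provider usage show e79d839a-71e9-4b13-bc21-6633bd44d8b4
# Expected to show 14 VCPU in use for foo1 and foo2

If placement would happily allow 14 out of 16 x 2.0 = 32 VCPUs while the request is still rejected, the 403 above presumably originates from the per-project cores quota rather than from the Placement service.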