Instance balance across NUMA nodes on same host

asked 2017-05-04 22:44:43 -0600

davidchen

Hi All,

I have a question about the instance placement strategy across NUMA nodes.

When I create instances with the flavor property "hw:numa_nodes": "1", all instances with this flavor run on NUMA node0. Only when NUMA node0 reaches some resource limit (e.g. CPU or memory) will newly created instances run on NUMA node1.

With CPU overcommit, this behavior can waste host resources in some scenarios (e.g. NUMA node0 is busy while NUMA node1 is idle). My expectation is that when an available NUMA node is selected, the node with the most free resources should be chosen first, so that host resource utilization is more balanced and efficient across NUMA nodes.
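To make the difference concrete, here is a minimal sketch (not nova's actual scheduler code; the node dicts, vCPU sizes, and `place` helper are invented for illustration) comparing the observed first-fit ("pack") behavior with a most-free-first ("spread") strategy across two NUMA nodes:

```python
def place(instances, nodes, strategy):
    """Assign each instance's vCPU demand to one NUMA node."""
    for demand in instances:
        if strategy == "pack":
            # First-fit: first node with enough room, in fixed order (node0 first).
            candidates = [n for n in nodes if n["free"] >= demand]
        else:  # "spread"
            # Prefer the node with the most free vCPUs.
            candidates = sorted((n for n in nodes if n["free"] >= demand),
                                key=lambda n: n["free"], reverse=True)
        if candidates:
            candidates[0]["free"] -= demand

# Two NUMA nodes with 8 free vCPUs each; three instances needing 2 vCPUs each.
nodes_pack = [{"id": 0, "free": 8}, {"id": 1, "free": 8}]
nodes_spread = [{"id": 0, "free": 8}, {"id": 1, "free": 8}]
place([2, 2, 2], nodes_pack, "pack")      # all three land on node0
place([2, 2, 2], nodes_spread, "spread")  # instances alternate between nodes
print([n["free"] for n in nodes_pack])    # [2, 8]
print([n["free"] for n in nodes_spread])  # [4, 6]
```

With first-fit, node0 absorbs every instance while node1 sits idle; with most-free-first, load ends up roughly even across both nodes.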

I found a comment in nova/virt/hardware.py, which was added for https://bugs.launchpad.net/nova/+bug/1386236 (bug #1386236):

"TODO(ndipanov): We may want to sort permutations differently depending on whether we want packing/spreading over NUMA nodes"

So it seems the placement strategy is intended to differ depending on the usage scenario.
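As an illustration of what that TODO hints at, the same list of candidate host NUMA cells could be ordered for packing or spreading just by changing the sort key (the cell dicts below are invented for this sketch and are not nova's data structures):

```python
# Two hypothetical host NUMA cells with different amounts of free CPUs.
cells = [{"id": 0, "free_cpus": 2}, {"id": 1, "free_cpus": 6}]

# Packing: try the most-utilized (least-free) cell first, consolidating load.
packing_order = sorted(cells, key=lambda c: c["free_cpus"])

# Spreading: try the least-utilized (most-free) cell first, balancing load.
spreading_order = sorted(cells, key=lambda c: c["free_cpus"], reverse=True)

print([c["id"] for c in packing_order])    # [0, 1]
print([c["id"] for c in spreading_order])  # [1, 0]
```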

My questions are:

  1. In what scenarios is a packing strategy preferred?

  2. Is there any way (such as a configuration option or other setting) to control this behavior?

Thank you for reading my question :)

To recreate:

1) Ensure you have a compute node with at least 2 sockets

2) Create a flavor with property "hw:numa_nodes": "1"

3) Boot more than one instance with the flavor created in step 2. As long as NUMA node0 still has resources available, all instances will be pinned to NUMA node0.
