
Flavor creation for cpu pinning, numa and huge page

asked 2015-04-26 11:59:42 -0600 by Veena

updated 2015-04-28 10:36:09 -0600 by sgordon

Hi, I need to use the CPU pinning, NUMA and huge page features while spawning a VM. How do I specify all three properties in a single flavor? I was able to configure NUMA following http://docs.openstack.org/developer/n... and CPU pinning following http://osdir.com/ml/openstack-dev/201... . However, specifying "hw:mem_page_size=large" in the flavor for huge pages and spawning the VM gives me a "no suitable host found" error. On the host I have:

    cat /proc/meminfo | grep Huge
    AnonHugePages:   1351680 kB
    HugePages_Total:   25000
    HugePages_Free:    25000
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB

I'm using Ubuntu 15.04, OpenStack Kilo, libvirt 1.2.12 and QEMU 2.2.0. Please suggest what extra specs I should provide to use all three (CPU pinning, NUMA and huge pages) in a single flavor.
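For reference, a minimal sketch of a single flavor carrying all three extra specs; the flavor name, ID, RAM, disk and vCPU values here are illustrative placeholders, not values from this thread:

```shell
# Create a flavor: <name> <id> <ram MB> <disk GB> <vcpus> (values are examples)
nova flavor-create m1.numa_pinned auto 2048 20 2

# CPU pinning: give the guest dedicated host pCPUs
nova flavor-key m1.numa_pinned set hw:cpu_policy=dedicated

# NUMA: expose a single guest NUMA node
nova flavor-key m1.numa_pinned set hw:numa_nodes=1

# Huge pages: back all guest RAM with large pages
nova flavor-key m1.numa_pinned set hw:mem_page_size=large
```

All three extra specs are evaluated together by the NUMATopologyFilter, so they can coexist on one flavor.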

Hi, below is the output of virsh capabilities

<capabilities>

  <host>
    <uuid>00000000-0000-0000-0000-0025908f24da</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='6' threads='2'/>
      <feature name='invtsc'/>
      <feature name='erms'/>
      <feature name='smep'/>
      <feature name='fsgsbase'/>
      <feature name='pdpe1gb'/>
      <feature name='rdrand'/>
      <feature name='f16c'/>
      <feature name='osxsave'/>
      <feature name='dca'/>
      <feature name='pcid'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='smx'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>65943068</memory>
          <pages unit='KiB' size='4'>16420231</pages>
          <pages unit='KiB' size='2048'>128</pages>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='21'/>
          </distances>
          <cpus num='12'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0,12'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='1,13'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2,14'/>
            <cpu id='3' socket_id='0' core_id='3' siblings='3,15'/>
            <cpu id='4' socket_id='0' core_id='4' siblings='4,16'/>
            <cpu id='5' socket_id='0' core_id='5' siblings='5,17'/>
            <cpu id='12' socket_id='0' core_id='0' siblings='0,12'/>
            <cpu id='13' socket_id='0' core_id='1' siblings='1,13'/>
            <cpu id='14' socket_id='0' core_id='2' siblings='2,14'/>
            <cpu id='15' socket_id='0' core_id='3' siblings='3,15'/>
            <cpu id='16' socket_id='0' core_id='4' siblings='4,16'/>
            <cpu id='17' socket_id='0' core_id='5' siblings='5,17'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>66058216</memory>
          <pages unit='KiB' size='4'>16449018</pages>
          <pages unit='KiB' size='2048'>128</pages>
          <distances>
            <sibling id='0' value='21'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='12'>
            <cpu id='6' socket_id='1' core_id='0' siblings='6,18'/>
            <cpu id='7' socket_id='1' core_id='1' siblings='7,19'/>
            <cpu id='8' socket_id ...

Comments

Hi, can you add the output of virsh capabilities for your hypervisor to help with diagnosis? Thanks!

sgordon ( 2015-04-27 13:15:35 -0600 )

3 answers


answered 2015-04-28 03:14:16 -0600 by ndipanov

updated 2015-04-29 03:14:48 -0600

Hey - so the reason seems to be that libvirt is reporting only 128 x 2 MB pages per NUMA node. That is 256 MB of memory per node - in other words, unlikely to be enough for your flavor.

If you request that a guest be backed by huge pages, its whole memory needs to be backable by huge pages.

Hope it helps
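As a sanity check, the per-node huge page memory can be computed directly from the numbers in the virsh capabilities output above:

```shell
# From virsh capabilities: 128 pages of 2048 KiB each per NUMA node
pages=128
page_kib=2048

# Convert KiB to MB-scale: 128 * 2048 KiB = 262144 KiB = 256 MB per node
node_mb=$(( pages * page_kib / 1024 ))
echo "per-node hugepage memory: ${node_mb} MB"   # prints 256 MB
```

A 2048 MB flavor can never fit in 256 MB of per-node huge page memory, which is exactly why the scheduler reports no valid host.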


answered 2016-05-02 14:17:18 -0600 by Asm0deus

Although this is an old thread, it's a common enough problem that I wanted it to be answered here.

The default huge page size ("hugepagesz") is only 2048 kB, so people counting huge pages often miss that they have not actually reserved the "easy/fast" 1 GB pages they intended.

Your kernel boot options (/etc/default/grub, in my case) should include "default_hugepagesz=1G hugepagesz=1G hugepages=20", where the last number (20 here) is the number of 1 GB huge pages you intend to reserve. Don't forget to rebuild your grub configuration as appropriate for your OS and BIOS/EFI boot setup.

Then cat /proc/meminfo and look toward the bottom for the allocated huge pages.
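The steps above, sketched for an Ubuntu host; the sed edit and the page count of 20 are examples under the same assumptions as the answer, and you should review /etc/default/grub by hand rather than trust a one-liner blindly:

```shell
# 1. Append the hugepage options inside GRUB_CMDLINE_LINUX (inspect the file afterwards)
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&default_hugepagesz=1G hugepagesz=1G hugepages=20 /' /etc/default/grub

# 2. Regenerate the grub configuration (Debian/Ubuntu) and reboot
sudo update-grub
sudo reboot

# 3. After reboot, verify the 1 GB reservation took effect
grep -i huge /proc/meminfo
```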


answered 2015-04-28 23:34:31 -0600 by Veena

updated 2015-04-29 05:26:01 -0600

Hi, thanks for the response. Even if I create a flavor with 2048 MB RAM, the VM fails to spawn. Flavor details are below:

 +----------------------------+-------------------------------+
 | Property                   | Value                         |
 +----------------------------+-------------------------------+
 | OS-FLV-DISABLED:disabled   | False                         |
 | OS-FLV-EXT-DATA:ephemeral  | 0                             |
 | disk                       | 20                            |
 | extra_specs                | {"hw:mem_page_size": "large"} |
 | id                         | 70                            |
 | name                       | huge                          |
 | os-flavor-access:is_public | True                          |
 | ram                        | 2048                          |
 | rxtx_factor                | 1.0                           |
 | swap                       |                               |
 | vcpus                      | 2                             |
 +----------------------------+-------------------------------+

nova boot --image cirros-0.3.2-x86_64-uec --flavor huge test_huge results in the VM going to the error state. The error is as below:

+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                                                                                                                     |
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                                                                                                                    |
| OS-EXT-AZ:availability_zone          | nova                                                                                                                                                                                      |
| OS-EXT-SRV-ATTR:host                 | -                                                                                                                                                                                         |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                                                                                                                         |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000006                                                                                                                                                                         |
| OS-EXT-STS:power_state               | 0                                                                                                                                                                                         |
| OS-EXT-STS:task_state                | -                                                                                                                                                                                         |
| OS-EXT-STS:vm_state                  | error                                                                                                                                                                                     |
| OS-SRV-USG:launched_at               | -                                                                                                                                                                                         |
| OS-SRV-USG:terminated_at             | -                                                                                                                                                                                         |
| accessIPv4                           |                                                                                                                                                                                           |
| accessIPv6                           |                                                                                                                                                                                           |
| config_drive                         |                                                                                                                                                                                           |
| created                              | 2015-04-28T20:27:00Z                                                                                                                                                                      |
| fault                                | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "  File \"/opt/stack/nova/nova/conductor/manager.py\", line 676, in build_instances |
|                                      |     request_spec, filter_properties)                                                                                                                                                      |
|                                      |   File \"/opt/stack/nova/nova/scheduler/utils.py\", line 337, in wrapped                                                                                                                  |
|                                      |     return func(*args, **kwargs)                                                                                                                                                          |
|                                      |   File \"/opt/stack/nova/nova/scheduler/client/__init__.py\", line 52, in select_destinations                                                                                             |
|                                      |     context, request_spec, filter_properties)                                                                                                                                             |
|                                      |   File \"/opt/stack/nova/nova/scheduler/client/__init__.py\", line 37, in __run_method                                                                                                    |
|                                      |     return getattr(self.instance, __name)(*args, **kwargs)

The scheduler log also shows that no host was found to satisfy the requirement. Am I missing any configuration?

I could spawn a VM with huge pages using virsh directly, so I'm assuming the host's huge page configuration is correct. Spawning VMs with nova boot fails. The filters I am using in nova.conf are as below:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter

Am I missing any configuration needed to use huge pages?
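One way to see what the scheduler sees: the NUMATopologyFilter relies on per-NUMA-node page counts as reported by libvirt, not the host-wide total in /proc/meminfo, so it is worth checking the per-node numbers directly (virsh freepages requires libvirt >= 1.2.6; node0 below is an example, repeat for each node):

```shell
# Free pages per NUMA node, for every supported page size
virsh freepages --all

# Equivalent per-node counts straight from sysfs (2 MB pages on node 0)
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
```

If these per-node counts are far below what the flavor's RAM requires, the scheduler will correctly reject every host even though /proc/meminfo shows a large total.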

Thanks,

Veena

