masber's profile - activity

2019-05-15 07:52:24 -0500 received badge  Notable Question (source)
2019-05-15 07:52:24 -0500 received badge  Popular Question (source)
2019-05-11 21:02:11 -0500 received badge  Notable Question (source)
2019-05-11 21:02:11 -0500 received badge  Famous Question (source)
2019-05-11 21:02:11 -0500 received badge  Popular Question (source)
2019-04-19 16:47:55 -0500 asked a question why does a block device attached via pci-passthrough not give the same performance as bare metal?

Hi,

I have a VM with an NVMe block device attached as a PCI passthrough device.

I have run fio tests against it and the performance is worse than on bare metal.

Could someone please give me an idea of why this is happening?

thank you very much
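
For context, a minimal sketch of the kind of fio run typically used for this comparison (not the command from the original post; the device path and job parameters are placeholders):

# device path is an example - point it at the passed-through NVMe disk inside the VM
fio --name=randread --filename=/dev/nvme0n1 --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --runtime=60 --time_based --group_reporting

Running the same job file on the bare-metal host and inside the guest makes the comparison apples-to-apples.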

2019-04-19 08:19:00 -0500 asked a question vm missing pci-passthrough device if cpu pinning is in place

Dear Openstack community,

I am trying to set up a VM with both PCI passthrough and CPU pinning.

This is my flavor:

[root@openstack-deployment ~]# openstack flavor show kudu.test.dedicated.tiny
+----------------------------+--------------------------------------------------------------+
| Field                      | Value                                                        |
+----------------------------+--------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                        |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                            |
| access_project_ids         | None                                                         |
| disk                       | 10                                                           |
| id                         | 7e241e94-1c50-4a92-9ce6-8abb272d79d8                         |
| name                       | kudu.test.dedicated.tiny                                     |
| os-flavor-access:is_public | True                                                         |
| properties                 | hw:cpu_policy='dedicated', hw:pci_passthrough:alias='nvme:1' |
| ram                        | 4096                                                         |
| rxtx_factor                | 1.0                                                          |
| swap                       |                                                              |
| vcpus                      | 2                                                            |
+----------------------------+--------------------------------------------------------------+

And this is how I create the VM:

[root@openstack-deployment ~]# openstack server create --flavor kudu.test.dedicated.tiny --image centos7.5-image --nic net-id=hpc --security-group admin --key-name mykey --availability-zone nova:zeus-53.localdomain kudu-test-3-1

The problem is that the VM does not have the NVMe PCI device attached to it.

[root@zeus-53 test]# docker exec -it nova_libvirt virsh dumpxml instance-000000ad

<domain type="kvm" id="9">
<name>instance-000000ad</name>
<uuid>5d8f6cc1-e716-4f4d-ac55-df24d4cffd38</uuid>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
  <nova:package version="18.2.0-1.el7"/>
  <nova:name>kudu-test-3-1</nova:name>
  <nova:creationTime>2019-04-17 05:17:02</nova:creationTime>
  <nova:flavor name="kudu.test.dedicated.tiny">
    <nova:memory>4096</nova:memory>
    <nova:disk>10</nova:disk>
    <nova:swap>0</nova:swap>
    <nova:ephemeral>0</nova:ephemeral>
    <nova:vcpus>2</nova:vcpus>
  </nova:flavor>
  <nova:owner>
    <nova:user uuid="91e83343e9834c8ba0172ff369c8acac">admin</nova:user>
    <nova:project uuid="b91520cff5bd45c59a8de07c38641582">admin</nova:project>
  </nova:owner>
  <nova:root type="image" uuid="5ec18792-05f2-4bce-b5d8-cb3480cbb35a"/>
</nova:instance>
</metadata>
<memory unit="KiB">4194304</memory>
<currentmemory unit="KiB">4194304</currentmemory>
<vcpu placement="static">2</vcpu>
<cputune>
<shares>2048</shares>
<vcpupin vcpu='0' cpuset='9'/>
<vcpupin vcpu='1' cpuset='37'/>
<emulatorpin cpuset='9,37'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type="smbios">
<system>
  <entry name='manufacturer'>RDO</entry>
  <entry name='product'>OpenStack Compute</entry>
  <entry name='version'>18.2.0-1.el7</entry>
  <entry name='serial'>00000000-0000-0000-0000-0cc47aa482cc</entry>
  <entry name='uuid'>5d8f6cc1-e716-4f4d-ac55-df24d4cffd38</entry>
  <entry name='family'>Virtual Machine</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode="custom" match="exact" check="full">
<model fallback='forbid'>Skylake-Client-IBRS</model>
<vendor>Intel</vendor>
<topology sockets='1' cores='1' threads='2'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='disable' name='mpx'/>
<feature policy='disable' name='xsavec'/>
<feature policy='disable' name='xgetbv1'/>
<numa>
  <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
</numa>
</cpu>
<clock offset="utc">
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/nova/instances/5d8f6cc1-e716-4f4d-ac55-df24d4cffd38/disk'/>
  <backingStore type='file' index='1'>
    <format type='raw'/>
    <source file='/var/lib/nova/instances/_base/4cc6eebe175e35178cb81853818a1eb103cea937'/>
    <backingStore/>
  </backingStore>
  <target dev ...
(more)
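
As a general note (an editorial sketch, not part of the original question): for the flavor's hw:pci_passthrough:alias='nvme:1' to resolve on the chosen compute node, nova.conf there normally needs both a matching alias and a passthrough_whitelist entry. The vendor/product IDs below are placeholders, not values from this environment:

[pci]
# example IDs only - replace with the IDs reported by "lspci -nn" for the NVMe device
alias = { "vendor_id": "144d", "product_id": "a804", "device_type": "type-PCI", "name": "nvme" }
passthrough_whitelist = { "vendor_id": "144d", "product_id": "a804" }

If the whitelist is missing on that node, the scheduler can still place the instance there when the PciPassthroughFilter is not enforcing it, and the resulting domain XML simply has no hostdev element, which matches the dump above.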
2019-04-19 08:17:03 -0500 asked a question server shelve takes too long

Dear Openstack community,

I am trying to shelve a server, but the process seems to be stuck in the shelving_image_pending_upload state.

Is it normal for the shelve process to take several hours?

thank you
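
For anyone hitting the same thing, a quick way to see where the shelve is stuck (a general sketch, not from the original post; the instance name is hypothetical):

# task state of the instance being shelved
openstack server show my-instance -f value -c OS-EXT-STS:task_state
# shelving uploads a snapshot named "<instance name>-shelved" to Glance; check its status
openstack image list | grep shelved

While the snapshot image stays in a queued or saving state, the instance remains in shelving_image_pending_upload, so large root disks or a slow Glance backend can make this step take a long time.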

2019-04-14 09:17:02 -0500 received badge  Famous Question (source)
2019-04-11 19:50:19 -0500 received badge  Notable Question (source)
2019-04-11 19:19:48 -0500 answered a question vm filesystem shutting down

OK, I don't have a good explanation for this, but I got the issue fixed by replacing the filesystem on the physical/hypervisor nodes: BTRFS was replaced with mdadm + XFS. Now the system is much more stable.
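
For reference, a minimal sketch of the kind of mdadm + XFS layout meant here (the device names, RAID level, and mount point are assumptions, not details from the answer):

# example only: two NVMe drives in a RAID-0 array, XFS on top, mounted where Nova keeps instance disks
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.xfs /dev/md0
mount /dev/md0 /var/lib/nova/instances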

2019-04-11 19:18:49 -0500 received badge  Popular Question (source)
2019-04-10 11:16:54 -0500 commented question vm filesystem shutting down

The libvirt logs show errors from a different day: 2019-04-09 07:57:28.013+0000: 3977: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor; 2019-04-09 08:07:19.385+0000: 4227: error : qemuDomainAgentAvailable:9139 : argument unsupported: QEMU guest agent is not configur

2019-04-10 07:16:02 -0500 commented question vm filesystem shutting down

Yes, the VMs are on local storage on the physical host. Also, I am using NVMe drives for storage, so the disks should be fast enough.

2019-04-10 06:41:53 -0500 asked a question vm filesystem shutting down

Dear Openstack community,

I have an OpenStack environment using QEMU + KVM as the hypervisor. My physical hosts run CentOS 7.5 with BTRFS as the filesystem, and the VMs use XFS. My problem is that a VM becomes unresponsive when I run heavy jobs like software compilation. Console errors are shown below:

kudu-test-1 login: [   43.865030] random: crng init done
[10769.996980] blk_update_request: I/O error, dev vda, sector 63329504
[10770.003649] blk_update_request: I/O error, dev vda, sector 63329680
[10770.005974] blk_update_request: I/O error, dev vda, sector 63329832
[10770.008282] Buffer I/O error on dev vda1, logical block 7915932, lost async page write
[10770.011173] Buffer I/O error on dev vda1, logical block 7915933, lost async page write
[10770.014903] Buffer I/O error on dev vda1, logical block 7915934, lost async page write
[10770.019257] Buffer I/O error on dev vda1, logical block 7915935, lost async page write
[10770.024083] Buffer I/O error on dev vda1, logical block 7915936, lost async page write
[10770.028534] Buffer I/O error on dev vda1, logical block 7915937, lost async page write
[10770.032472] Buffer I/O error on dev vda1, logical block 7915938, lost async page write
[10770.036191] Buffer I/O error on dev vda1, logical block 7915939, lost async page write
[10770.040515] Buffer I/O error on dev vda1, logical block 7915940, lost async page write
[10770.043443] Buffer I/O error on dev vda1, logical block 7915941, lost async page write
[11963.873944] blk_update_request: I/O error, dev vda, sector 106841432
[11963.922842] buffer_io_error: 9 callbacks suppressed
[11963.926768] Buffer I/O error on dev vda1, logical block 13354411, lost async page write
[11963.930445] Buffer I/O error on dev vda1, logical block 13354412, lost async page write
[11963.933580] Buffer I/O error on dev vda1, logical block 13354413, lost async page write
[11963.937348] Buffer I/O error on dev vda1, logical block 13354414, lost async page write
[11963.941300] Buffer I/O error on dev vda1, logical block 13354415, lost async page write
[11963.944535] Buffer I/O error on dev vda1, logical block 13354416, lost async page write
[11963.947552] Buffer I/O error on dev vda1, logical block 13354417, lost async page write
[11963.953007] Buffer I/O error on dev vda1, logical block 13354418, lost async page write
[11963.956181] Buffer I/O error on dev vda1, logical block 13354419, lost async page write
[11963.959760] Buffer I/O error on dev vda1, logical block 13354420, lost async page write
[12644.472090] blk_update_request: I/O error, dev vda, sector 98732744
[12644.486750] blk_update_request: I/O error, dev vda, sector 98733768
[12644.488994] buffer_io_error: 2507 callbacks suppressed
[12644.490653] Buffer I/O error on dev vda1, logical block 12335719, lost async page write
[12644.493055] Buffer I/O error on dev vda1, logical block 12335720, lost async page write
[12644.495430] Buffer I/O error on dev vda1, logical block 12335721, lost async page write
[12644.498325] Buffer ...
(more)
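
A general diagnostic step for this symptom (an editorial suggestion, not part of the original question): with file-backed disks, vda I/O errors in the guest usually mirror a problem on the host filesystem under /var/lib/nova/instances, so it is worth checking the hypervisor's kernel log around the same timestamps. The date below is only an example:

# on the hypervisor, look for BTRFS or block-layer errors around the time the guest failed
dmesg -T | grep -iE 'btrfs|i/o error'
journalctl -k --since "2019-04-10" | grep -iE 'btrfs|blk_update_request'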
2019-04-05 06:10:06 -0500 received badge  Popular Question (source)
2019-02-26 22:16:39 -0500 asked a question nova can't find my SR-IOV VF

Hi,

I am trying to set up SR-IOV in OpenStack using Mellanox ConnectX-4 Lx NICs.

I use OpenStack Rocky deployed with Kolla-Ansible.

For some reason Nova can't find any PCI device.

The server does not get created; it fails with the following error:

| fault                               | {u'message': u'PCI device not found for request ID 419c5fa0-1b7a-4a83-b691-2fcb0fba94cc.', u'code': 500, u'details': u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1940, in _do_build_and_run_instance\n    filter_properties, request_spec)\n  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2229, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'created': u'2019-02-26T04:21:01Z'} |

Nova logs

2019-02-26 15:36:42.595 7 ERROR nova.compute.manager Traceback (most recent call last):
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7778, in _update_available_resource_for_node
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 705, in update_available_resource
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6551, in get_available_resource
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     self._get_pci_passthrough_devices()
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5978, in _get_pci_passthrough_devices
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     pci_info.append(self._get_pcidev_info(name))
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5939, in _get_pcidev_info
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     device.update(_get_device_capabilities(device, address))
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5910, in _get_device_capabilities
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     pcinet_info = self._get_pcinet_info(address)
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5853, in _get_pcinet_info
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     virtdev = self._host.device_lookup_by_name(devname)
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 873, in device_lookup_by_name
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     return self.get_connection().nodeDeviceLookupByName(name)
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager     result = proxy_call(self._autowrap, f, *args, **kwargs)
2019-02-26 15:36:42.595 7 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2019-02-26 ...
(more)
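
As a general checklist (an editorial sketch, not from the original question; the interface name, VF count, and physnet name are placeholders): the VFs have to exist on the host and be whitelisted for Nova before the PCI tracker can report them to the scheduler.

# create VFs on the PF (interface name and count are examples)
echo 4 > /sys/class/net/ens2f0/device/sriov_numvfs

# nova.conf on the compute node: whitelist the PF and tag it with the physical network
[pci]
passthrough_whitelist = { "devname": "ens2f0", "physical_network": "physnet2" }

The traceback above also suggests the PCI tracker itself is failing while enumerating devices (nodeDeviceLookupByName), so it is worth confirming libvirt can see the VFs with virsh nodedev-list before blaming the whitelist.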
2019-02-26 22:12:51 -0500 commented answer Why we choose OpenStack cloud platform to build a private cloud ?

ESXi itself is just a hypervisor; OpenStack sits on top of many hypervisors and is a set of projects that helps you manage your infrastructure. Many companies use OpenStack to build their private cloud business; Oracle and IBM/Red Hat are two relevant examples.

2018-12-16 03:06:28 -0500 received badge  Famous Question (source)
2018-12-16 03:06:28 -0500 received badge  Popular Question (source)
2018-12-16 03:06:28 -0500 received badge  Notable Question (source)
2018-11-21 20:20:11 -0500 received badge  Commentator
2018-11-21 20:20:11 -0500 commented answer What is the physnet?

very, very useful. thank you

2018-10-09 02:03:33 -0500 received badge  Popular Question (source)
2018-10-08 00:59:55 -0500 asked a question vm not starting, missing volume?

Dear Openstack community,

I had to reboot my server and now I can't start one of the VMs (danrod-server), which boots from a volume.

vm details

[root@openstack ~(keystone_admin)]# openstack server show danrod-server
+--------------------------------------+----------------------------------------------------------+
| Field                                | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | openstack.localdomain                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | openstack.localdomain                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000069                                        |
| OS-EXT-STS:power_state               | NOSTATE                                                  |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-STS:vm_state                  | stopped                                                  |
| OS-SRV-USG:launched_at               | 2018-05-29T11:22:17.000000                               |
| OS-SRV-USG:terminated_at             | None                                                     |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| addresses                            | privatenetwork=192.168.1.105, 129.94.14.238              |
| config_drive                         |                                                          |
| created                              | 2018-05-29T11:22:09Z                                     |
| flavor                               | xlarge.cpu.xxxlarge.ram (6)                              |
| hostId                               | ecef276660cd714fe626073a18c11fe1c00bec91c15516178fb6ac28 |
| id                                   | 1e59f329-072e-48ae-abf1-266eba437508                     |
| image                                |                                                          |
| key_name                             | None                                                     |
| name                                 | danrod-server                                            |
| os-extended-volumes:volumes_attached | [{u'id': u'f1ac2e94-b0ed-4089-898f-5b6467fb51e3'}]       |
| project_id                           | d58cf22d960e4de49b71658aee642e94                         |
| properties                           |                                                          |
| security_groups                      | [{u'name': u'admin'}, {u'name': u'R-Studio Server'}]     |
| status                               | SHUTOFF                                                  |
| updated                              | 2018-10-08T02:52:41Z                                     |
| user_id                              | c412f34c353244eabecd4b6dc4d36392                         |
+--------------------------------------+----------------------------------------------------------+

List of volumes

[root@openstack ~(keystone_admin)]# openstack volume list --all
+--------------------------------------+--------------+--------+------+----------------------------------------+
| ID                                   | Display Name | Status | Size | Attached to                            |
+--------------------------------------+--------------+--------+------+----------------------------------------+
| f1ac2e94-b0ed-4089-898f-5b6467fb51e3 |              | in-use |  700 | Attached to danrod-server on /dev/vda  |
+--------------------------------------+--------------+--------+------+----------------------------------------+

nova-compute.log

2018-10-08 13:51:46.476 4015 INFO os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] Multipath discovery for iSCSI not enabled.
2018-10-08 13:51:46.476 4015 INFO os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] Trying to connect to iSCSI portal 129.94.14.254:3260
2018-10-08 13:51:46.504 4015 WARNING os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-129.94.14.254:3260-iscsi-iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3-lun-0']. Will rescan & retry.  Try number: 0.
2018-10-08 13:51:47.510 4015 WARNING os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-129.94.14.254:3260-iscsi-iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3-lun-0']. Will rescan & retry.  Try number: 1.
2018-10-08 13:51:51.519 4015 WARNING os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-129.94.14.254:3260-iscsi-iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3-lun-0']. Will rescan & retry.  Try number: 2.
2018-10-08 13:52:00.527 4015 WARNING os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-129.94.14.254:3260-iscsi-iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3-lun-0']. Will rescan & retry.  Try number: 3.
2018-10-08 13:52:16.535 4015 WARNING os_brick.initiator.connectors.iscsi [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-129.94.14.254:3260-iscsi-iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3-lun-0']. Will rescan & retry.  Try number: 4.
2018-10-08 13:52:23.357 4015 INFO nova.compute.resource_tracker [req-359039dc-29dc-4f06-805f-0ca943bb600f - - - - -] Auditing locally available compute resources for node openstack.localdomain
2018-10-08 13:52:23.713 4015 INFO nova.compute.resource_tracker [req-359039dc-29dc-4f06-805f-0ca943bb600f - - - - -] Total usable vcpus: 56, total allocated vcpus: 52
2018-10-08 13:52:23.713 4015 INFO nova.compute.resource_tracker [req-359039dc-29dc-4f06-805f-0ca943bb600f - - - - -] Final resource view: name=openstack.localdomain phys_ram=524173MB used_ram=454608MB phys_disk=9312GB used_disk=1540GB total_vcpus=56 used_vcpus=52 pci_stats=[]
2018-10-08 13:52:23.733 4015 INFO nova.compute.resource_tracker [req-359039dc-29dc-4f06-805f-0ca943bb600f - - - - -] Compute_service record updated for openstack.localdomain:openstack.localdomain
2018-10-08 13:52:41.665 4015 INFO nova.compute.manager [req-d9ed864b-9a4f-4182-b91b-9a2f1e38fb02 c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - - -] [instance: 1e59f329-072e-48ae-abf1-266eba437508 ...
(more)
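
A general way to check whether the Cinder iSCSI target is actually exported and reachable from the compute host (an editorial sketch; the portal address is taken from the nova-compute.log above, the rest is generic):

# discover targets on the portal nova-compute is trying to reach
iscsiadm -m discovery -t sendtargets -p 129.94.14.254:3260
# current iSCSI sessions on the compute host
iscsiadm -m session
# on the controller, if the LVM/LIO backend is in use, confirm the volume's target is exported
targetcli ls

If the volume's IQN is missing from the discovery output after the reboot, the export has to be recreated on the Cinder side before the instance can attach /dev/vda again.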
2018-08-14 15:38:21 -0500 received badge  Famous Question (source)
2018-06-13 05:23:56 -0500 received badge  Famous Question (source)
2018-05-18 13:04:33 -0500 received badge  Famous Question (source)
2018-05-04 06:38:13 -0500 received badge  Famous Question (source)
2018-02-22 00:17:47 -0500 received badge  Famous Question (source)
2018-01-23 22:27:03 -0500 received badge  Famous Question (source)
2017-12-10 18:48:42 -0500 commented answer network performance vm to vm

Hi @SamYaple, I am far from an expert in this type of configuration, but it looks like VXLAN offloading is set up:

[root@hercules-21 ~]# ethtool -k bond0 | grep udp_tnl
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
2017-12-10 18:45:28 -0500 commented answer network performance vm to vm

Hi @Moss, thank you for this. I followed your instructions but VMTP gets stuck during the process: http://paste.openstack.org/raw/628566/ Any idea?

2017-12-05 01:26:41 -0500 received badge  Famous Question (source)
2017-12-04 22:58:38 -0500 received badge  Associate Editor (source)
2017-12-04 18:21:57 -0500 received badge  Popular Question (source)
2017-12-04 18:21:57 -0500 received badge  Notable Question (source)
2017-11-30 02:25:58 -0500 asked a question network performance vm to vm

Hi,

I am running a test OpenStack environment with 2 compute nodes, each of them with an MTU of 9000.

Compute nodes:

  • hercules-21 (10.0.32.21): 64cpus, 512GB RAM and 2x 25Gbps bond network
  • hercules-22 (10.0.32.22): 64cpus, 512GB RAM and 2x 25Gbps bond network

VMs:

  • centos (192.168.1.110): 8 vcpus, 16GB RAM
  • centos2 (192.168.1.109): 8 vcpus, 16GB RAM

Network bandwidth test from physical host to physical host using iperf:

[root@hercules-21 ~]# iperf -c 10.0.32.22 -P 4
------------------------------------------------------------
Client connecting to 10.0.32.22, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  5] local 10.0.32.21 port 59014 connected with 10.0.32.22 port 5001
[  3] local 10.0.32.21 port 59008 connected with 10.0.32.22 port 5001
[  4] local 10.0.32.21 port 59010 connected with 10.0.32.22 port 5001
[  6] local 10.0.32.21 port 59012 connected with 10.0.32.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  6.91 GBytes  5.94 Gbits/sec
[  3]  0.0-10.0 sec  6.97 GBytes  5.98 Gbits/sec
[  4]  0.0-10.0 sec  6.96 GBytes  5.98 Gbits/sec
[  6]  0.0-10.0 sec  6.77 GBytes  5.82 Gbits/sec
[SUM]  0.0-10.0 sec  27.6 GBytes  23.7 Gbits/sec

Network bandwidth test from VM to VM using iperf (each VM is running on a different host):

[centos@centos2 ~]$ iperf -c 192.168.1.110 -P 4
------------------------------------------------------------
Client connecting to 192.168.1.110, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  6] local 192.168.1.109 port 60244 connected with 192.168.1.110 port 5001
[  3] local 192.168.1.109 port 60238 connected with 192.168.1.110 port 5001
[  4] local 192.168.1.109 port 60240 connected with 192.168.1.110 port 5001
[  5] local 192.168.1.109 port 60242 connected with 192.168.1.110 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.11 GBytes  1.81 Gbits/sec
[  4]  0.0-10.0 sec  2.12 GBytes  1.82 Gbits/sec
[  5]  0.0-10.0 sec  2.10 GBytes  1.80 Gbits/sec
[  6]  0.0-10.0 sec  2.13 GBytes  1.83 Gbits/sec
[SUM]  0.0-10.0 sec  8.45 GBytes  7.25 Gbits/sec

I am using jumbo frames on the physical machines, so I did the same on OpenStack.

MTU on physical host:

[root@hercules-21 ~]# ip a
...
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP qlen 1000
    link/ether 7c:fe:90:12:23:ec brd ff:ff:ff:ff:ff:ff
    inet 10.0.32.21/16 brd 10.0.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::b1b0:74dd:8a3:705e/64 scope link
       valid_lft forever preferred_lft forever
...

MTU on VM:

[centos@centos ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc ...
(more)
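
A quick way to confirm that jumbo frames actually survive the tenant network end to end (an editorial sketch; the IP is the VM address from the question) is a do-not-fragment ping sized just under the expected MTU:

# 8972 = 9000 minus 28 bytes of IP/ICMP headers; drop to 1422 if the tenant network MTU is only 1450
ping -M do -s 8972 -c 3 192.168.1.110

If this fails while a 1422-byte probe succeeds, the VXLAN tenant network is still at the default MTU and large guest frames are being fragmented or dropped, which would explain much of the VM-to-VM bandwidth gap.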
2017-10-08 14:26:36 -0500 received badge  Famous Question (source)
2017-10-03 00:11:25 -0500 received badge  Notable Question (source)
2017-09-26 09:44:16 -0500 received badge  Popular Question (source)