DPDK: Inter-VM iperf3 TCP throughput on the same host is very low compared to non-DPDK throughput [closed]

asked 2016-12-22 00:39:41 -0500

Rajalakshmi

Host: Ubuntu 16.04, devstack stable/newton, which installs DPDK 16.07 and OVS 2.6,

with the networking-ovs-dpdk plugin and the following DPDK configuration.

Grub changes

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash default_hugepagesz=1G hugepagesz=1G hugepages=8 iommu=pt intel_iommu=on"
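The GRUB change only takes effect after regenerating the GRUB config and rebooting; a quick sanity check on the host looks like this:

$ sudo update-grub
(reboot)
$ cat /proc/cmdline
$ grep Huge /proc/meminfo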

local.conf - changes for DPDK

enable_plugin networking-ovs-dpdk https://git.openstack.org/openstack/networking-ovs-dpdk master
OVS_DPDK_MODE=controller_ovs_dpdk
OVS_NUM_HUGEPAGES=8
OVS_CORE_MASK=2
OVS_PMD_CORE_MASK=4
OVS_DPDK_BIND_PORT=False
OVS_SOCKET_MEM=2048
OVS_DPDK_VHOST_USER_DEBUG=n
OVS_ALLOCATE_HUGEPAGES=True
OVS_HUGEPAGE_MOUNT_PAGESIZE=1G
MULTI_HOST=1
OVS_DATAPATH_TYPE=netdev
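To rule out a basic misconfiguration, the DPDK options that OVS actually picked up and the PMD thread activity can be checked on the host, for example (exact output depends on the OVS 2.6 build):

$ sudo ovs-vsctl get Open_vSwitch . other_config
$ sudo ovs-appctl dpif-netdev/pmd-stats-show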

Before VM creation:

nova flavor-key m1.small set hw:mem_page_size=1048576
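The extra spec can be confirmed on the flavor before booting, for example:

$ nova flavor-show m1.small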

I was able to create two Ubuntu instances with flavor m1.small.
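The throughput below was measured between the two guests roughly as follows (10.0.0.x is a placeholder for the first guest's address):

$ iperf3 -s                   (on the first guest)
$ iperf3 -c 10.0.0.x -t 30    (on the second guest)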

Achieved an iperf3 TCP throughput of ~7.5 Gbps. I verified that the vhost-user port is created and that hugepages are consumed: after the two VMs are up, each uses 2 GB (4 GB for the VMs) plus 2 GB for the OVS socket memory, 6 GB in total.

$ sudo cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:       8
HugePages_Free:        2
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
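That is, 8 total - 2 free = 6 hugepages of 1 GB in use, which matches the 4 GB for the two guests plus the 2 GB of OVS_SOCKET_MEM.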

The same scenario was carried out without DPDK on OpenStack and achieved a higher throughput of ~19 Gbps, which contradicts the expected result. Kindly suggest what additional DPDK configuration is needed for high throughput. I also tried CPU pinning and multiqueue for the OpenStack DPDK setup (roughly as sketched below), but with no improvement in the result.
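For reference, CPU pinning and multiqueue were enabled roughly like this (the image name and guest interface name are placeholders, not the exact values used):

$ nova flavor-key m1.small set hw:cpu_policy=dedicated
$ openstack image set --property hw_vif_multiqueue_enabled=true ubuntu-guest
and inside the guest: ethtool -L eth0 combined 2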


Closed for the following reason: duplicate question, by Antonio G.
Close date: 2016-12-22 09:58:42