OVS DPDK poor performance

asked 2017-08-07 02:25:14 -0600

alp317

I was experimenting with an OVS-DPDK based deployment of OpenStack (using TripleO). The documentation I followed suggested using separate datapaths for the control plane networks and the tenant (VM) networks, with Linux bonds for the control plane. While experimenting, I deployed OpenStack with the internal API network (a control plane network) on OVS-DPDK, and to my surprise the network worked. But when I tested it, throughput was very poor: around 150-250 Mbit/s on a 20 Gbit/s bonded link, while the networks on the Linux bond performed fine.

After reading a lot of forum questions I couldn't find an answer, other than a casual mention that it's a rule of thumb not to put kernel and DPDK datapath ports on the same bridge. Also, using ethtool I found that the tagged VLAN interface I created over the DPDK bridge reports a link speed of 10 Mb/s. Can someone please explain what's happening here?



How are you installing Openstack?

ashu ( 2017-10-12 05:04:51 -0600 )

Red Hat Director, RHOSP 10.

alp317 ( 2017-12-18 08:34:52 -0600 )

1 answer


answered 2018-11-16 09:19:31 -0600

Lee Ballard

(I work for Dell) Read through the system setup instructions in ovs_perf: https://github.com/chaudron/ovs_perf. You need to use tuned to isolate cores and then pin them to the virtio and NIC PMD threads. If you use a bond, make it a DPDK bond, not a kernel bond. On a two-socket server you should also take NUMA into account and pin OVS to cores on the same NUMA node as the PCI slot the NIC is in.
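A minimal sketch of that tuning, assuming the tuned cpu-partitioning profile and OVS built with DPDK support. The isolated core range, hex CPU masks, PCI addresses, and bridge/port names below are placeholders for illustration; choose cores that sit on the same NUMA node as the NIC.

```shell
# Isolate cores 2-5 from the kernel scheduler (placeholder range).
# In /etc/tuned/cpu-partitioning-variables.conf:  isolated_cores=2-5
tuned-adm profile cpu-partitioning

# Give OVS-DPDK one isolated core for housekeeping (lcore) and two for PMD
# threads. Masks are hex CPU bitmaps: 0x4 = core 2, 0x18 = cores 3 and 4.
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x4
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x18

# Bond the two ports inside OVS as a DPDK bond instead of a kernel bond.
# The PCI addresses 0000:05:00.0 / 0000:05:00.1 are placeholders.
ovs-vsctl add-bond br-link dpdkbond0 dpdk0 dpdk1 \
    -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:05:00.0 \
    -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:05:00.1

# Verify which cores the PMD threads landed on and their rx-queue assignments.
ovs-appctl dpif-netdev/pmd-rxq-show
```

Restart the ovs-vswitchd service after changing the masks so the PMD threads are re-created on the isolated cores.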



Seen: 752 times

Last updated: Nov 16 '18