
GRE vs VxLAN tunneling Performance

asked 2016-06-04 15:54:39 -0600 by paullaurence, updated 2016-06-07 04:46:24 -0600

Hi,

I have a three-node OpenStack Kilo testbed installed: one controller/network node and two compute nodes, with DVR configured. I am using iperf3 to test under numerous conditions, but the starting point is tunneling performance with GRE and VXLAN for east/west (VM to VM) traffic. Obviously in this scenario with DVR, the traffic goes straight from compute1 to compute2. The nodes are connected to an Arista switch with 10 Gbit/s ports, and the servers are Dell PowerEdge machines with 10 Gbit/s Intel NICs.
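A minimal sketch of the iperf3 runs behind these numbers (addresses are placeholders):

    # on the destination VM (compute2)
    iperf3 -s

    # on the source VM (compute1); 10.0.0.20 stands in for the destination VM's IP
    iperf3 -c 10.0.0.20 -t 30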

I was expecting better performance from VXLAN than from GRE, but with both a 1500 and a 9000 MTU I get better performance from GRE. For example, with VXLAN at a 1500 MTU, TCP tests get less than 2 Gbit/s.

Here are some approximate test results:

VXLAN

east/west MTU-1500 TCP = 1.87 Gbits/sec

east/west MTU-9000 TCP = 5.82 Gbits/sec

GRE

east/west MTU-1500 TCP = 3.79 Gbits/sec

east/west MTU-9000 TCP = 7.86 Gbits/sec

Is this likely to be down to my physical network card, or could there be some other factor in these results? I'm pretty sure all my VXLAN config is correct. It's the VXLAN TCP 1500-MTU result that is most surprising.

The Intel network cards I am using don't have the option to enable 'ethtool -K ethX tx-udp_tnl-segmentation'; I'm not sure if that is a factor in these results?
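Whether a driver exposes that offload at all can be checked with the read-only feature listing (interface name assumed):

    ethtool -k eth2 | grep tx-udp_tnl-segmentation
    # "tx-udp_tnl-segmentation: off [fixed]" means the driver cannot enable it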

thanks in advance, Paul

Update:

I have tested with both the following 10Gbit network cards:

Intel Ethernet X540 10Gb BT DP + i350 1Gb BT DP Network Daughter Card

Broadcom 57810 DP 10Gb BT Converged Network Adapter

Whatever I try, I get roughly the same results for east/west traffic (GRE better than VxLAN):

VXLAN

east/west MTU-1500 TCP = 1.87 Gbits/sec

east/west MTU-9000 TCP = 5.82 Gbits/sec

GRE

east/west MTU-1500 TCP = 3.79 Gbits/sec

east/west MTU-9000 TCP = 7.86 Gbits/sec

I have tried extending the MTU for VxLAN to 1600 - no difference in result.

I have tried with and without 'ethtool -K INTERFACE_NAME gro off' - no difference in result.
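For the record, those two attempts amount to something like this (physical interface name assumed):

    ip link set dev eth2 mtu 1600   # widen the MTU on the tunnel-carrying NIC
    ethtool -K eth2 gro off         # disable Generic Receive Offload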

There is not much difference in CPU usage between GRE and VxLAN.

Open vSwitch version is 2.3.2, the hypervisor is KVM, and the OS is Ubuntu.

Paul


6 answers


answered 2016-06-04 23:27:05 -0600 by joe chien, updated 2016-06-04 23:30:19 -0600

VXLAN packets are larger than standard Ethernet packets, so the MTU size must be increased to accommodate them. Ensure that all physical switch ports participating in the VXLAN network are configured with an MTU of 1600. tx-udp_tnl-segmentation is the VXLAN offload, enabled by default on NICs that support it. Your physical Ethernet switch also needs jumbo frames enabled. You can find more details at the following link: http://www.intel.com/content/www/us/en/embedded/products/networking/xl710-ethernet-controller-vxlan-technical-brief.html
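One way to confirm the larger MTU actually survives the physical path is a don't-fragment ping between the hypervisors (address assumed; 1572 = 1600 minus 20 bytes of IP and 8 bytes of ICMP headers):

    ping -M do -s 1572 192.168.1.12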


Comments

You really want to have at least 1600 MTU at layer 2 when doing VXLAN. There are numerous reasons, and this is a good walkthrough of the overhead that VXLAN adds to a standard Ethernet frame: https://keepingitclassless.net/2014/03/mtu-considerations-vxlan/

iguy (2016-06-05 13:26:46 -0600)
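For context, the overhead that walkthrough covers breaks down as follows (assuming IPv4 outer headers and no VLAN tag):

    outer Ethernet 14 B + outer IPv4 20 B + outer UDP 8 B + VXLAN 8 B = 50 B

so a 1500-byte inner frame needs 1550 bytes on the wire, and 1600 leaves headroom.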

answered 2016-06-07 02:24:13 -0600

In our environment we set the MTU to 9000 on both the compute nodes and the controller nodes, and using VXLAN we're close to 10 Gbit/s.

We don't have DVR, only VRRP, and these are our results:

  • east/west on the same VXLAN, MTU-9000 TCP: 9.80 Gbits/sec
  • east/west on the same VXLAN, MTU-9000 UDP: 8.47 Gbits/sec
  • east/west on different VXLANs (traffic through VRRP), MTU-9000 TCP: 7.36 Gbits/sec
  • east/west on different VXLANs (traffic through VRRP), MTU-9000 UDP: 4.12 Gbits/sec

To increase MTU/performance on VRRP we had to define this parameter in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

    [agent]
    ...
    veth_mtu = 8900
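After changing that file, the OVS agent needs a restart to pick up the new veth MTU; on Ubuntu the service is typically:

    service neutron-plugin-openvswitch-agent restart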


answered 2016-06-04 23:50:30 -0600 by jbbroccard

If the MTU in his network was not configured to let 1500+ byte packets flow, then traffic with "large" packets would not just suffer degraded performance; it would not get through at all. So Paul's MTU configuration across his network is probably fine. He's just experiencing what others have reported: VXLAN performance at the default MTU size is poor (partially due to VXLAN overhead). His question still stands. Paul, are you monitoring CPU utilization on the hypervisor (for IRQs)?


answered 2016-06-07 08:30:02 -0600, updated 2016-06-07 08:31:38 -0600

You should really look into VXLAN offloading on both the switches and the compute node NICs. I have never personally run a similar scenario in a lab (VXLAN over 10 Gbit), but I have heard about this limitation of circa 6-7 Gbit/s when running on plain vanilla hardware with no additional tuning. Try starting up the benchmarks and have a look at the CPU load on your compute nodes, along with the number of context switches and both hard and soft interrupts (dubbed "hi" and "si" in 'top'). I bet your CPUs go crazy.
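Stock tools are enough to watch for this while iperf3 is running:

    top                              # "hi"/"si" columns = hard/soft interrupt time
    vmstat 1                         # "cs" column = context switches per second
    watch -d cat /proc/interrupts    # per-CPU interrupt counters for the NIC queues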

The even worse performance with 1500 B frames could also be explained by the amount of work the CPUs are put through. You see, 10 Gbit/s divided by 1500 B gives around 833k frames per second, with each one causing a hard interrupt and a context switch. This can be mitigated by turning on Generic Receive Offload on the NIC, which aggregates incoming frames into larger groups.
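Checking and enabling GRO is quick (interface name assumed):

    ethtool -k eth2 | grep generic-receive-offload
    ethtool -K eth2 gro on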

You can also have a look at the Receive Flow Steering flag or the SR-IOV functionality of your NIC, which can greatly improve performance but can rob you of some fancy Nova/Neutron features (live migration, for example).
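For reference, RFS is switched on through procfs/sysfs; a minimal sketch (interface name and table sizes assumed, repeat the second line per RX queue):

    echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
    echo 2048 > /sys/class/net/eth2/queues/rx-0/rps_flow_cnt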


answered 2016-06-05 10:29:30 -0600

Hello,

What type of NICs are you using, and what is the CPU load during the transfer?

Unless you have hardware acceleration for VXLAN, everything is done in software, which can peg the CPUs on the compute nodes.

Depending on the NICs being used, you can offload some functions to the card, which can help with throughput. I don't have the list of options available at the moment.
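The full list for a given card/driver is easy to dump, though (interface name assumed):

    ethtool -k eth2    # lists every offload feature as on/off, with [fixed] for ones the driver can't change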

Michael


answered 2016-06-07 03:52:38 -0600 by paullaurence, updated 2016-06-07 15:23:55 -0600

Hi Amedeo,
Thanks for sharing your results. Yes, I'm getting around 7 Gbit/s using MTU-9000, and I have the same setting as you in the config (veth_mtu = 8900), so that was roughly expected.

My concern was more: why, when using MTU 1500, can I get around 4 Gbit/s using GRE and only 2 Gbit/s when using VXLAN? Did you test your setup with a 1500 MTU at any stage? I'm just thinking that in a scenario where only a 1500 MTU can be used, would it be best to use GRE rather than VXLAN?

Hmm, now I'm wondering: is there a little overhead in the DVR implementation? Or maybe newer releases such as Liberty, with the latest OVS release, are optimized for VXLAN? I might upgrade to Liberty next week and see if I get the same results.

Paul

NEW UPDATE

I have tx-udp_tnl-segmentation enabled on my Broadcom NICs and have also tried with generic-receive-offload turned on, but still no improvement. On further inspection of the switch (I am using an Arista 7050T), it is not capable of VXLAN, so would it be safe to assume that is where my problem lies?

Paul


Comments

I can confirm that in our env, if we use an MTU of 1500, we get the same results with VXLAN. The main concern is VXLAN offloading on the hardware, which you have to enable if your hardware supports it. At this moment we work around this issue by injecting an MTU of 8900 into instances: dhcp-option-force=26,8900

amedeo-salvati (2016-06-09 01:59:14 -0600)
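For anyone reproducing that workaround: the dnsmasq line goes in a custom config file that the Neutron DHCP agent is pointed at; a minimal sketch (standard file paths assumed):

    # /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq-neutron.conf
    dhcp-option-force=26,8900    # DHCP option 26 = interface MTU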
