Improving neutron openvswitch performance

asked 2014-11-13 04:47:18 -0500 by Krist, updated 2014-11-17 09:00:30 -0500

Hello all,

We run OpenStack Havana with neutron/openvswitch. Our network node is a dual-socket Xeon machine with 6 cores (12 threads) per CPU and 32 GB of RAM. The network node and the compute nodes sit on a 10 Gb network. We have a 1 Gb internet connection that is burstable to 10 Gb.

What I notice is that as soon as traffic hits about 180 Mb/s the network starts feeling saturated. When I log in to a node, for example, the prompt responds sluggishly, connections fail, etc.

On the neutron node the load increased to about 8 (which, for a 12-core machine, should not be a problem) and openvswitch CPU usage in top sat at about 500%, which means it was using about 5 cores. Again, that ought not, in my opinion, to be a problem.

We're running Open vSwitch 2.1.3, multi-threaded, with about 16 threads started.

My questions:

  • I suspect that fragmentation might be going on. Is there a way to find out if an image has an incorrect MTU set, without needing access to the image? (See the tcpdump sketch after this list.)
  • Is there a way to make openvswitch even more performant? I really want to fill the pipe.
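
(For context: one way to look for fragmentation from the network node itself, without access to any image, is sketched below. eth0 is a placeholder for whichever interface actually carries the tenant traffic.)

    # Capture fragmented IPv4 packets (MF flag set or fragment offset != 0):
    tcpdump -ni eth0 'ip[6:2] & 0x3fff != 0' -c 100

    # Watch for ICMP "fragmentation needed" (type 3, code 4) messages, which
    # show up when instances send packets larger than the path MTU allows:
    tcpdump -ni eth0 'icmp[0] == 3 and icmp[1] == 4'

If the first capture fills up quickly, or the second one shows a steady stream of ICMP errors, an instance with a wrong MTU is a likely culprit.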

Update:

I managed to get network traffic of 1 Gb/s or more over our line without the network node's load going over 0.7. I did this by starting a BitTorrent session in a VM. I noticed that Open vSwitch had no problem coping.

My current working hypothesis is that a customer's VM got hacked and started working as part of a botnet, executing a lot of network scans. This means (I think) that a lot of packets hit the switch for which no flows exist, so the userspace component has a lot of work to do and the flow tables get large.
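
To check whether that hypothesis holds, the datapath statistics and the size of the kernel flow table can be inspected on the network node (standard ovs-dpctl commands; datapath names may differ on other setups):

    # "lookups: hit/missed/lost" -- "missed" packets are the ones punted to
    # userspace (ovs-vswitchd) because no kernel flow matched them:
    ovs-dpctl show

    # Number of flows currently installed in the kernel datapath; a port scan
    # from a compromised VM would inflate this with thousands of tiny flows:
    ovs-dpctl dump-flows | wc -l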

Googling, I do find references to the possibility of tweaking things, but I do not find any information on the Open vSwitch site about which parameters I can set and what the consequences are. Can anyone point me in the right direction?
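
(The parameters I have seen mentioned so far are Open_vSwitch other_config options along the lines of the sketch below, but I have not found authoritative documentation on safe values. The option names would need to be verified against ovs-vswitchd.conf.db(5) for the exact version in use.)

    # Allow more flows in the kernel datapath before eviction kicks in
    # (reportedly renamed to "flow-limit" in later releases -- check the man page):
    ovs-vsctl set Open_vSwitch . other_config:flow-eviction-threshold=10000

    # Number of upcall (flow-miss) handler threads in ovs-vswitchd:
    ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=8

    # Inspect what is currently configured:
    ovs-vsctl get Open_vSwitch . other_config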


Comments

I've seen significant improvement in performance by increasing the MTU on VMs; you could try that. Refer to this link ("Forcing MTU...") on how to set the MTU on VMs using dnsmasq.

vthapar ( 2014-11-16 23:26:35 -0500 )

2 answers


answered 2014-11-17 11:08:06 -0500 by dgalvao

From http://docs.openstack.org/juno/instal... :

" Tunneling protocols such as GRE include additional packet headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. Internet protocol (IP) networks contain the path MTU discovery (PMTUD) mechanism to detect end-to-end MTU and adjust packet size accordingly. However, some operating systems and networks block or otherwise lack support for PMTUD causing performance degradation or connectivity failure.

Ideally, you can prevent these problems by enabling jumbo frames on the physical network that contains your tenant virtual networks. Jumbo frames support MTUs up to approximately 9000 bytes which negates the impact of GRE overhead on virtual networks. However, many network devices lack support for jumbo frames and OpenStack administrators often lack control over network infrastructure. Given the latter complications, you can also prevent MTU problems by reducing the instance MTU to account for GRE overhead. Determining the proper MTU value often takes experimentation, but 1454 bytes works in most environments. You can configure the DHCP server that assigns IP addresses to your instances to also adjust the MTU."
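
For completeness, the usual way to push a smaller MTU to instances through neutron's dnsmasq looks roughly like this (file locations as in a typical Havana setup; adjust paths and the restart command to your distribution):

    # /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq-neutron.conf
    # DHCP option 26 = interface MTU; 1454 leaves room for GRE overhead.
    dhcp-option-force=26,1454

Then restart the DHCP agent (e.g. service neutron-dhcp-agent restart) and have the instances renew their DHCP leases.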


Comments

I am aware of this. However, the MTU is not our problem. What I need is either a way to tune our Open vSwitch, or a better way to detect misbehaving hosts.

Krist ( 2014-11-19 01:26:27 -0500 )

answered 2014-11-14 13:06:27 -0500 by s_am

Have you considered using Intel's DPDK vSwitch ( https://01.org/packet-processing )?


Comments

Would this help us? We have Broadcom network cards.

Krist ( 2014-11-17 08:56:23 -0500 )

The list of supported NICs is here: http://dpdk.org/doc/nics There are no Broadcom NICs listed, so it's unlikely that it's going to be helpful.

s_am ( 2014-11-17 10:35:34 -0500 )
