
OVS setup with VLAN provider network [closed]

asked 2016-01-06 22:29:12 -0500 by kaustubh
updated 2016-01-09 10:36:25 -0500

I have the following setup:

  • OpenStack release - Kilo
  • Controller node (node 1) - Horizon, Keystone, Glance, database, RabbitMQ, neutron-server, etc.
  • Compute/Network node (node 2) - nova-compute and all networking services (OVS agent, L3 agent, DHCP agent, neutron OVS plugin)

On node 2, I have four provider networks with four interfaces - em0 (untagged), eth1 (vlan 100), eth2 (vlan 200), eth3 (vlan 300). They connect to a ToR switch that is configured to allow the VLANs. em0 has external access via br-ex. eth1, eth2 and eth3 are part of br-vlan. There are four project networks corresponding to the above provider networks.
net-0 (flat) => em0 (no DHCP)
net-1 (vlan 100) => eth1
net-2 (vlan 200) => eth2
net-3 (vlan 300) => eth3
And a neutron router connected to net-0 and net-1.
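
For completeness, this is roughly the kind of CLI the networks correspond to (a sketch with the Kilo neutron client; the physical network labels physnet-ex and physnet-vlan and the subnet ID are placeholders, and the subnet-create steps are omitted):

    # flat external network on em0 (via br-ex)
    neutron net-create net-0 --provider:network_type flat \
      --provider:physical_network physnet-ex --router:external
    # VLAN provider networks trunked through br-vlan
    neutron net-create net-1 --provider:network_type vlan \
      --provider:physical_network physnet-vlan --provider:segmentation_id 100
    neutron net-create net-2 --provider:network_type vlan \
      --provider:physical_network physnet-vlan --provider:segmentation_id 200
    neutron net-create net-3 --provider:network_type vlan \
      --provider:physical_network physnet-vlan --provider:segmentation_id 300
    # router between net-0 (gateway) and net-1
    neutron router-create r1
    neutron router-gateway-set r1 net-0
    neutron router-interface-add r1 <net-1-subnet-id>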

The OVS bridges are as follows:
br-ex: with ports em0 and phy-br-ex
br-vlan: with ports eth1, eth2, eth3 and phy-br-vlan
br-int: with ports int-br-ex, int-br-vlan, qr-xxxx, qg-xxxx and three tapxxx interfaces to DHCP namespaces (for net-1, net-2 and net-3).
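
(For anyone reproducing this layout: the provider bridges and their NIC ports are created by hand with ovs-vsctl, while the phy-br-*/int-br-* patch ports are added automatically by the OVS agent based on its bridge_mappings. A sketch, assuming the placeholder physical network labels above:)

    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex em0
    ovs-vsctl add-br br-vlan
    ovs-vsctl add-port br-vlan eth1
    ovs-vsctl add-port br-vlan eth2
    ovs-vsctl add-port br-vlan eth3

    # OVS agent section of ml2_conf.ini on node 2
    [ovs]
    bridge_mappings = physnet-ex:br-ex,physnet-vlan:br-vlan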
After launching a VM (Ubuntu guest OS), I can see the qvoxxxx ports (towards the Linux bridges) created on br-int, and the OVS flow tables look identical to the ones described here: http://techbackground.blogspot.com/2013/07/the-open-vswitch-plugin-with-vlans.html

Now, when an external ARP request arrives on any of the eth* interfaces, it creates an ARP flood within the network; to stop it, I have to bring the eth* interfaces down. I ran tcpdump on all the eth* interfaces and saw ARP requests tagged with VLAN 100, 200 and 300 on all three of them. It looks like the NORMAL action in the OVS flows is flooding the broadcast, which should die out once the MACs are learned, but I don't see that happening. Am I missing any OVS flows?
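
For reference, these are the kinds of commands used to observe this (a sketch; interface and bridge names as above):

    # watch VLAN-tagged ARP requests on a trunk NIC
    tcpdump -e -nn -i eth1 'vlan and arp'
    # dump the flow tables on the provider and integration bridges
    ovs-ofctl dump-flows br-vlan
    ovs-ofctl dump-flows br-int
    # show the MAC learning table that the NORMAL action relies on
    ovs-appctl fdb/show br-vlan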


Closed for the following reason: the question is answered, right answer was accepted by kaustubh
close date 2016-06-13 17:10:00

2 answers


answered 2016-01-09 10:36:58 -0500 by kaustubh
updated 2016-06-13 17:09:42 -0500

@Prateek, thanks for the reply, but it looks like the ARP responder feature is inactive for VLAN networks. As a workaround, I have added custom flows to OVS and modified the iptables rules on the qbrxxxx bridges to allow the traffic. This question can be closed.
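
(The exact flows were not posted; as a hypothetical sketch of the idea, with OVS port numbers and VLAN IDs as placeholders, flows like these allow ARP only for the VLAN expected on each trunk port and drop the rest, with the security-group iptables rules on the qbrxxxx bridges relaxed to let the wanted traffic through:)

    # on br-vlan: eth1 is OVS port 1 here and should only carry VLAN 100
    ovs-ofctl add-flow br-vlan "priority=20,arp,in_port=1,dl_vlan=100,actions=NORMAL"
    ovs-ofctl add-flow br-vlan "priority=10,arp,in_port=1,actions=drop"
    # repeat one pair of flows per trunk port / VLAN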

Update:
Sorry for the late update. It turned out that this was caused by a bug in the i40e device driver of the NICs. The older driver version was configured to loop packets back to the other VSIs (VEB mode) by default, and the newer version disables this behaviour.


answered 2016-01-06 23:24:08 -0500 by Prateek K

I think what you want is an ARP responder, which works in the following way:

  • OVS at L2 builds an ARP table in which the MAC addresses of the machines are learned.
  • When an ARP request arrives, instead of treating it as a broadcast and sending it to all the VMs, OVS uses that table and crafts the ARP response itself at L2.
  • This prevents ARP-request attacks, flooding, etc.

Check this value in the ml2_conf.ini file on the neutron node: arp_responder = False
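
A minimal sketch of where this lives on the node running the OVS agent (Kilo option names; depending on the packaging the file may also be the agent's own ini). Note that the responder only takes effect together with L2 population:

    # ml2_conf.ini on the network/compute node
    [agent]
    arp_responder = True
    l2_population = True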

One thing to note is that currently only overlay networks such as VXLAN and GRE support the ARP responder in OVS. There is already a bug open to get this feature for VLAN networks:

https://bugs.launchpad.net/neutron/+b...

What I can suggest is: if you are worried about network congestion and don't want this to happen, convert your network topology to VXLAN or GRE; the ARP responder will then work and prevent the flooding.
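
A minimal sketch of the kind of configuration change meant here (Kilo-era option names; the VNI range and tunnel endpoint IP are placeholders):

    # ml2_conf.ini
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_vxlan]
    vni_ranges = 1:1000

    [agent]
    tunnel_types = vxlan
    l2_population = True
    arp_responder = True

    [ovs]
    local_ip = <tunnel-endpoint-ip>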

