
OVS with VLAN - why do we need 2 bridges (br-int + br-eth1)? Can we use only one instead?

asked 2015-04-08 13:44:42 -0500

GLaupre

updated 2015-04-14 11:54:37 -0500

Hello all,

I am trying to do a setup using only VLAN tagging, and I got it working by following the cloud administrator guide. Here is the big picture of what I have on my compute node:

(image: OVS bridge layout on the compute node)

We have two OVS bridges taking care of the VLAN tagging: br-int and br-eth1 (br-enp2s0f4 in my case), the latter attached to the physical interface of the host.

What I am wondering is: why can't the translation between the global and local VLAN tags be done in only one bridge, say br-int, with br-int attached directly to eth1?

The flow rule translating the local tag to the global one:

# ovs-ofctl dump-flows br-enp2s0f4
cookie=0x0,table=0,n_packets=126, idle_age=2, priority=4,in_port=5,dl_vlan=1 actions=mod_vlan_vid:101,NORMAL

And in my br-int, the translation of the global VLAN tag to the local one:

# ovs-ofctl dump-flows br-int
cookie=0x0,table=0,n_packets=120, idle_age=23, priority=3,in_port=8,dl_vlan=101 actions=mod_vlan_vid:1,NORMAL

Can't we combine those two rules in the same OVS bridge? Or does the way OVS bridges work make it impossible to merge br-int and br-eth1 into one single bridge?
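To make the idea concrete, here is a rough, untested sketch of what merging both translations onto a single bridge attached to the NIC might look like. The port numbers are assumptions (1 = the physical interface, 2 = a VM's tap port), not my actual setup:

```shell
# Hypothetical single-bridge layout (port numbers are assumptions):
#   port 1 = physical interface enp2s0f4
#   port 2 = a VM tap port on local VLAN 1

# Ingress from the wire: rewrite the global tag 101 to the local tag 1
ovs-ofctl add-flow br-int "priority=3,in_port=1,dl_vlan=101,actions=mod_vlan_vid:1,NORMAL"

# Egress from the VM: rewrite the local tag 1 back to the global tag 101
ovs-ofctl add-flow br-int "priority=4,in_port=2,dl_vlan=1,actions=mod_vlan_vid:101,NORMAL"
```

One obvious wrinkle with this sketch: the egress rule would also rewrite the tag on VM-to-VM traffic that never leaves the host, so MAC learning and local delivery would happen on the wrong VLAN. Keeping the translation at a dedicated bridge boundary avoids that.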

Thank you for your answers :)


2 answers


answered 2015-04-08 22:48:04 -0500

RHK


br-int is the default name for the core (integration) bridge used on compute and network nodes. On the compute nodes, all VMs are connected directly or indirectly to br-int. DHCP servers are also connected to br-int. Router uplinks from L3 agents may be connected to br-int when no external bridge is configured. Router links to tenant networks are always connected to br-int.

If tunneling is used, the tunneling bridge br-tun is connected to br-int. This connection is managed by OpenStack Neutron and uses an Open vSwitch patch port.

If VLAN or flat network types are used, another bridge, e.g. br-eth1, is connected to br-int. This connection is also managed by OpenStack Neutron and uses a Linux veth pair, which is a serious performance bottleneck.




Thank you for the explanation, but I know all that and this is not my question :)

GLaupre ( 2015-04-09 11:29:30 -0500 )

Performance-wise it sucks, no?

GLaupre ( 2015-04-09 12:44:39 -0500 )

I don't think there is a performance drop: the "patch cable" (not a veth) is internal to OVS.

darragh-oreilly ( 2015-04-09 12:59:05 -0500 )

I think the monolithic OVS and linuxbridge plugins are deprecated and ML2 should be used.

uts9 ( 2015-04-20 06:25:25 -0500 )

@uttamhoode: yes, the ovs and linuxbridge monolithic plugins have been deprecated, and ML2 is the way now. But this question is about how the OVS agent uses multiple bridges.

darragh-oreilly ( 2015-04-20 12:44:04 -0500 )

answered 2015-04-09 12:27:11 -0500

darragh-oreilly

updated 2015-04-20 05:19:10 -0500

Not sure why it's like this. Maybe the thinking was that separate bridges for different things would make the implementation easier, although classifying ingress packets and directing them to distinct flow tables should achieve the same thing (br-tun already does this).

Have a look at the Neutron ofagent - it uses just one bridge.
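To illustrate the classification idea, here is an untested sketch in the style of br-tun's flow tables: classify by ingress port in table 0, then translate per direction in separate tables. The port and table numbers are made up for illustration, and it deliberately ignores local VM-to-VM delivery:

```shell
# Illustrative only (assumed numbering): port 1 = physical NIC.
# Table 0 classifies by ingress port; tables 10/20 do the translation.
ovs-ofctl add-flow br-int "table=0,priority=1,in_port=1,actions=resubmit(,10)"   # from the NIC
ovs-ofctl add-flow br-int "table=0,priority=0,actions=resubmit(,20)"             # from VM ports

ovs-ofctl add-flow br-int "table=10,dl_vlan=101,actions=mod_vlan_vid:1,NORMAL"   # global -> local
ovs-ofctl add-flow br-int "table=20,dl_vlan=1,actions=mod_vlan_vid:101,output:1" # local -> global
```

A real single-bridge pipeline would need more tables to handle traffic that stays on the host, but the point is that one bridge with resubmit-based classification can keep the two translation directions separate.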

