Ask Your Question

Why does the controller have a linux bridge interface on the public network?

asked 2016-01-15 22:06:56 -0500

Kernel_Panic

updated 2016-02-22 10:18:31 -0500

Following the RHEL 7 Liberty install docs, which are EXCELLENT by the way. Does the controller have to be bridged on the public interface? For security reasons we would not want the controller exposed at all to the public-facing or customer-facing networks. Is this really necessary? THANKS!!

The step in question, from the guide:

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances including VXLAN tunnels for private networks and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

    In the [linux_bridge] section, map the public virtual network to the public physical network interface:

    physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace PUBLIC_INTERFACE_NAME with the name of the underlying physical public network interface.
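For context, the Liberty install guide's `linuxbridge_agent.ini` contains a few more sections than the mapping quoted above. A rough sketch (the interface name `eth1` is a placeholder, not from this question):

```ini
[linux_bridge]
# Map the "public" provider network to the underlying physical NIC
# (eth1 is a placeholder interface name)
physical_interface_mappings = public:eth1

[vxlan]
# Only needed if you deploy the self-service (private) network option
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

It is this `public:...` mapping on the controller that requires the controller to have a NIC on the public network, which is what the question objects to.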



1 answer


answered 2016-02-22 01:19:43 -0500

Kernel_Panic

updated 2016-02-22 11:04:35 -0500

I worked around this by using the DVR configuration with a network node. Then the controller only needs the management interface for the control plane, and the network and compute nodes have data-plane access to the external flat network. I will have to proxy the tenants' dashboard from the public side of the network node back to the controller, while admins can access the dashboard and APIs from the management or another private network.
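As a sketch of that dashboard-proxy idea (the hostname `controller-mgmt`, the server name, and the paths are hypothetical, not taken from this deployment), a reverse proxy on the network node might look like:

```nginx
# /etc/nginx/conf.d/horizon-proxy.conf -- hypothetical example
server {
    listen 80;
    server_name dashboard.example.com;

    location / {
        # Forward public requests to Horizon on the controller's
        # management-network address (controller-mgmt is a placeholder)
        proxy_pass http://controller-mgmt/dashboard/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Any HTTP reverse proxy (HAProxy, Apache mod_proxy) would work the same way; the point is that only the network node needs a public address.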

DVR improves security, scalability, and performance, because it lets tenants create routers on the compute nodes with external access without routing through a controller. It works with VXLAN or GRE tunnels, and it's a great sandbox for learning OpenFlow and SDN.

The setup follows the DVR scenario from the networking guide. (Configuration diagram omitted.)

Hats off to the technical writers on this project. There is one caveat with this guide: on the compute and network nodes, the OVS configuration needs to go in "/etc/neutron/plugins/ml2/openvswitch_agent.ini". The guide puts it in ml2_conf.ini, which will not work. It took me a long time to figure this out; I hope it saves you some time. OVS configs below.

    [root@network ~]# grep -v '^$' /etc/neutron/plugins/ml2/openvswitch_agent.ini | grep -v '^ *#'
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip =
    enable_tunneling = True
    bridge_mappings = ext252:br252
    l2_population = True
    tunnel_types = vxlan,gre
    enable_distributed_routing = True
    arp_responder = True
    flat_networks = ext252
    network_vlan_ranges = ext250,ext251,ext252
    tunnel_id_ranges = 1000:2000
    vni_ranges = 1000:2000
    vxlan_group =
    [root@network ~]#

Also not in the guide: you will need to map the bridge to your physical interface with ovs-vsctl. (Without Open vSwitch, this mapping is handled in the Linux bridge agent's config file instead.)

    ovs-vsctl add-br br252
    ovs-vsctl add-port br252 em1
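For comparison, with the Linux bridge agent the same mapping is a single config line rather than ovs-vsctl commands. A sketch, assuming the same `em1` interface and `ext252` network name as above:

```ini
# Equivalent mapping in /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = ext252:em1
```

With OVS, the `bridge_mappings = ext252:br252` line in openvswitch_agent.ini only names the bridge; attaching the physical NIC to that bridge is the separate ovs-vsctl step shown above.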


Excellent answer, mark your answer as accepted. Regards

Eduardo Gonzalez ( 2016-02-22 15:06:04 -0500 )
