Openstack Neutron Multi-Flat Network Model with GRE tunnels not working

asked 2015-04-10 07:22:19 -0600

nfv-zombie

updated 2015-04-10 07:29:35 -0600

I am unable to set up multiple flat networks with GRE tunnels and connect them to the external network.

  • Data networks - eth3 (192.168.x.x) and eth4 (172.16.x.x)

  • GRE tunnel network - eth1 (10.30.x.x)

  • External network - eth0 (172.30.x.x)

I have used several online guides as references (links not shown).

I have been trying to implement the following model (diagram not shown).

My setup has 3 nodes / servers.


  • Control Node


[root@controller ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:21:F6:81:A9:39
          inet addr:  Bcast:  Mask:

[root@controller network-scripts]# cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = MgmtNet,AccessNet,CoreNet

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
local_ip =
tunnel_type = gre
enable_tunneling = True
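As a sanity check on the GRE side, it may help to verify that the tunnel mesh and agents are actually up before debugging the flat networks (these are standard OVS/neutron commands; the exact port names depend on your deployment):

$ ovs-vsctl show       # look for gre-* ports under br-tun on each node
$ neutron agent-list   # all agents should report as alive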


  • Network Node


[root@network network-scripts]# ifconfig
br-AccessNet  Link encap:Ethernet  HWaddr 00:10:E0:70:05:71
              inet addr:  Bcast:  Mask:

br-CoreNet    Link encap:Ethernet  HWaddr A0:36:9F:62:E7:44
              inet addr:  Bcast:  Mask:

br-MgmtNet    Link encap:Ethernet  HWaddr 00:10:E0:70:05:6E

br-int        Link encap:Ethernet  HWaddr 46:26:13:AF:E5:4C

br-tun        Link encap:Ethernet  HWaddr 92:CA:34:76:17:4F

eth0          Link encap:Ethernet  HWaddr 00:10:E0:70:05:6E  inet addr:  Bcast:  Mask:

eth1          Link encap:Ethernet  HWaddr 00:10:E0:70:05:6F  inet addr:  Bcast:  Mask:

eth2          Link encap:Ethernet  HWaddr 00:10:E0:70:05:70  inet addr:  Bcast:  Mask:

eth3          Link encap:Ethernet  HWaddr 00:10:E0:70:05:71  inet addr:  Bcast:  Mask:

eth4          Link encap:Ethernet  HWaddr A0:36:9F:62:E7:44  inet addr:  Bcast:  Mask:

lo            Link encap:Local Loopback  inet addr:  Mask:

[root@network network-scripts]# cat /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller
core_plugin = ml2
service_plugins = router

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = abc123

[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron ...

2 answers


answered 2015-04-10 08:09:43 -0600

jdexter

What you are looking to do is have two flat provider networks.
I would set the flat network setting in your ml2_conf.ini to a wildcard:

flat_networks = *

Then, in your OVS plugin config, set your flat networks. For flat provider networks, you can put the abstract name for the physical network under network_vlan_ranges without any VLAN tags; this denotes a flat network. You then add the bridge mappings so that OVS knows which provider network is mapped to which bridge. (If you are using a provider network, OVS will take full control of the interface; the only thing you need is for the NIC to be up.)

network_vlan_ranges =  AccessNet,CoreNet
integration_bridge = br-int
bridge_mappings = AccessNet:br-eth2,CoreNet:br-eth3
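Putting those pieces together, the relevant config might look like this (a sketch; the exact file path and section depend on your distro's ML2/OVS packaging, and br-eth2/br-eth3 are placeholders for whatever bridges you create):

[ml2_type_flat]
# "*" allows any physical network name to be used for flat networks
flat_networks = *

[ovs]
integration_bridge = br-int
# physical network names listed without VLAN ranges => flat networks
network_vlan_ranges = AccessNet,CoreNet
# comma-separated physnet:bridge pairs
bridge_mappings = AccessNet:br-eth2,CoreNet:br-eth3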

Once you have setup the ovs plugin, add your bridge mappings to OVS.

$ ovs-vsctl add-br br-eth2
$ ovs-vsctl add-port br-eth2 eth2
$ ovs-vsctl add-br br-eth3
$ ovs-vsctl add-port br-eth3 eth3
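You can then confirm the bridges and ports with ovs-vsctl, and make sure the physical NICs are up (same br-eth2/br-eth3 names assumed as above):

$ ovs-vsctl show     # should list br-eth2 with port eth2, br-eth3 with port eth3
$ ip link set eth2 up
$ ip link set eth3 up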

Restart neutron. If everything is configured correctly, a phy-[bridgeName] and an int-[bridgeName] peer port pair will be created for each mapped bridge when openvswitch-agent restarts. If not, check your settings.
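For example, after restarting the agent you can check for the peer ports (service names vary by distro; this assumes a RHEL-style layout as in the original post):

$ service neutron-openvswitch-agent restart
$ ovs-vsctl list-ports br-int | grep '^int-'   # expect int-br-eth2, int-br-eth3
$ ovs-vsctl list-ports br-eth2                 # should include phy-br-eth2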

The last piece is to tell neutron that you are using a provider network when creating the network.

$ neutron net-create ext_net --provider:network_type flat --provider:physical_network AccessNet [--router:external=True]
$ neutron subnet-create  ext_net --gateway -- [--enable_dhcp=True]
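A fully filled-in version might look like the following; the CIDR and gateway here are hypothetical values, so substitute your own (DHCP is often disabled on external networks):

$ neutron net-create ext_net --provider:network_type flat \
    --provider:physical_network AccessNet --router:external=True
$ neutron subnet-create ext_net 172.30.0.0/24 --gateway 172.30.0.1 \
    --enable_dhcp=False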

answered 2015-04-10 09:07:23 -0600

nfv-zombie

updated 2015-04-10 09:09:17 -0600

Thank you jdexter!

I am trying to do the following -

Instance should have the following -

Public Mgmt IP - 172.30.x.x [FloatingIP]
Internal Mgmt IP - 20.30.x.x 
Data Ingress IP - 192.168.x.x
Data Egress IP - 172.16.x.x

GRE Tunneling needs to be -

Compute Node - eth1 - 10.30.x.x
Network Node - eth1 - 10.30.x.x

Following networks are present -

Compute Node:

br-tun >> eth1 - 10.30.x.x
eth2 - 20.30.x.x
br-AccessNet >> eth3 - 192.168.x.x
br-CoreNet >> eth4 - 172.16.x.x

Network Node:

br-MgmtNet >> eth0 - 172.30.x.x
br-tun >> eth1 - 10.30.x.x
eth2 - 20.30.x.x
br-AccessNet >> eth3 - 192.168.x.x
br-CoreNet >> eth4 - 172.16.x.x

Control Node:

eth0 - 172.30.x.x

Neutron Router should be able to do the following -

Public Mgmt IP 172.30.x.x ----> route ----> Internal Mgmt IP 20.30.x.x
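That routing step could be wired up with a Neutron router along these lines (the router, network, and subnet names here are assumptions based on the layout above; the floating IP and port IDs come from your own environment):

$ neutron router-create mgmt-router
$ neutron router-gateway-set mgmt-router ext_net              # 172.30.x.x side
$ neutron router-interface-add mgmt-router internal-mgmt-subnet   # 20.30.x.x side
$ neutron floatingip-create ext_net
$ neutron floatingip-associate <FLOATINGIP_ID> <PORT_ID>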

Am I doing the right thing here?

Thanks for your help.



Is there a reason you are doing this with flat networks? Do you need access from outside your cloud to the internal mgmt, ingress, and egress networks? If not, those should be VXLAN networks. As for routing, your router would forward traffic from your floating IP to any of the ports.

jdexter ( 2015-04-10 11:05:01 -0600 )

Wouldn't performance be an issue to deal with if they weren't flat networks?

The control node surely needs access from the outside world.

The instances can do without outside access.

I understand that a floating IP to any of them would work. Thanks for the input.

nfv-zombie ( 2015-04-10 14:06:39 -0600 )


Asked: 2015-04-10 07:22:19 -0600

Seen: 832 times

Last updated: Apr 10 '15