
Openstack Neutron Multi-Flat Network Model with GRE tunnels not working

asked 2015-04-10 07:22:19 -0500

nfv-zombie

updated 2015-04-10 07:29:35 -0500

I am unable to set up multiple flat networks with GRE tunnels and connect to the external network.

Data Networks - eth3 (192.168.x.x) and eth4 (172.16.x.x)
GRE Tunnel Network - eth1 (10.30.x.x)
External Network - eth0 (172.30.x.x)

I have used the following resources -

http://docs.openstack.org/havana/install-guide/install/yum/content/ch_preface.html
http://docs.openstack.org/icehouse/install-guide/install/yum/openstack-install-guide-yum-icehouse.pdf
https://developer.rackspace.com/blog/neutron-networking-the-building-blocks-of-an-openstack-cloud/
http://docs.openstack.org/juno/install-guide/install/apt/content/neutron_initial-external-network.html

I have been trying to implement the following model -

http://imgur.com/fMMofLj

My setup has 3 nodes / servers.

**Control Node**

[root@controller ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:21:F6:81:A9:39
          inet addr:172.30.21.160  Bcast:172.30.255.255  Mask:255.255.0.0

[root@controller network-scripts]# cat /etc/neutron/plugins/ml2/ml2_conf.ini 
[ml2]                                                                     
type_drivers = flat,gre                                                   
tenant_network_types = gre                                                
mechanism_drivers = openvswitch                                           

[ml2_type_flat]
flat_networks = MgmtNet,AccessNet,CoreNet

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True


[ovs]
local_ip = 10.30.21.160
tunnel_type = gre
enable_tunneling = True
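
(For reference: once the OVS agents on both tunnel endpoints are running with this config, gre-* ports should appear on br-tun. A quick way to check, as a sketch:)

# On the network or compute node - br-tun should list gre-* ports,
# one per remote local_ip that the OVS agent has peered with
ovs-vsctl show
ovs-vsctl list-ports br-tun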

**Network Node**

[root@network network-scripts]# ifconfig
br-AccessNet Link encap:Ethernet  HWaddr 00:10:E0:70:05:71
             inet addr:192.168.21.74  Bcast:192.168.255.255  Mask:255.255.0.0

br-CoreNet   Link encap:Ethernet  HWaddr A0:36:9F:62:E7:44
             inet addr:172.16.21.74  Bcast:172.16.255.255  Mask:255.255.0.0

br-MgmtNet   Link encap:Ethernet  HWaddr 00:10:E0:70:05:6E

br-int       Link encap:Ethernet  HWaddr 46:26:13:AF:E5:4C

br-tun       Link encap:Ethernet  HWaddr 92:CA:34:76:17:4F

eth0         Link encap:Ethernet  HWaddr 00:10:E0:70:05:6E
             inet addr:172.30.21.74  Bcast:172.30.255.255  Mask:255.255.0.0

eth1         Link encap:Ethernet  HWaddr 00:10:E0:70:05:6F
             inet addr:10.30.21.74  Bcast:10.30.255.255  Mask:255.255.0.0

eth2         Link encap:Ethernet  HWaddr 00:10:E0:70:05:70
             inet addr:20.30.21.74  Bcast:20.30.255.255  Mask:255.255.0.0

eth3         Link encap:Ethernet  HWaddr 00:10:E0:70:05:71
             inet addr:192.168.21.74  Bcast:192.168.255.255  Mask:255.255.0.0

eth4         Link encap:Ethernet  HWaddr A0:36:9F:62:E7:44
             inet addr:172.16.21.74  Bcast:172.16.255.255  Mask:255.255.0.0

lo           Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0

[root@network network-scripts]# cat /etc/neutron/neutron.conf                                                
[DEFAULT]                                                     
auth_strategy = keystone                                      
rpc_backend = neutron.openstack.common.rpc.impl_qpid          
qpid_hostname = controller                                    
core_plugin = ml2                                             
service_plugins = router                                      


[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller           
auth_protocol = http             
auth_port = 35357                
admin_tenant_name = service      
admin_user = neutron             
admin_password = abc123          


[service_providers]

service_provider=LOADBALANCER:Haproxy:neutron ...

2 answers


answered 2015-04-10 09:07:23 -0500

nfv-zombie

updated 2015-04-10 09:09:17 -0500

Thank you jdexter!

I am trying to do the following -

Instance should have the following -

Public Mgmt IP - 172.30.x.x [FloatingIP]
Internal Mgmt IP - 20.30.x.x 
Data Ingress IP - 192.168.x.x
Data Egress IP - 172.16.x.x
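
(For context, an instance with one NIC per internal network would be booted roughly like this with the nova CLI of that era - a sketch, where the net IDs are placeholders to look up with `neutron net-list`, and the public 172.30.x.x address is attached later as a floating IP rather than as a direct NIC:)

nova boot test-vm --image cirros --flavor m1.small \
  --nic net-id=<internal-mgmt-net-id> \
  --nic net-id=<access-net-id> \
  --nic net-id=<core-net-id>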

GRE Tunneling needs to be -

Compute Node - eth1 - 10.30.x.x
Network Node - eth1 - 10.30.x.x
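
(In other words, the OVS agent on each of those nodes carries the same [ovs] stanza shown on the controller above, with local_ip set to that node's eth1 address - a sketch mirroring the earlier config:)

# /etc/neutron/plugins/ml2/ml2_conf.ini on each tunnel endpoint
[ovs]
local_ip = 10.30.x.x        ; this node's eth1 address
tunnel_type = gre
enable_tunneling = True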

Following networks are present -

Compute

br-tun >> eth1 - 10.30.x.x
eth2 - 20.30.x.x
br-AccessNet >> eth3 - 192.168.x.x
br-CoreNet >> eth4 - 172.16.x.x

Network

br-MgmtNet >> eth0 - 172.30.x.x
br-tun >> eth1 - 10.30.x.x
eth2 - 20.30.x.x
br-AccessNet >> eth3 - 192.168.x.x
br-CoreNet >> eth4 - 172.16.x.x

Control

eth0 - 172.30.x.x

Neutron Router should be able to do the following -

Public Mgmt IP 172.30.x.x ----> route ----> Internal Mgmt IP 20.30.x.x
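
(That mapping is exactly what a neutron router plus a floating IP provides; roughly, as a sketch with all names and IDs hypothetical:)

neutron router-create mgmt-router
neutron router-gateway-set mgmt-router <external-net-id>
neutron router-interface-add mgmt-router <internal-mgmt-subnet-id>
neutron floatingip-create <external-net-id>
neutron floatingip-associate <floatingip-id> <instance-port-id>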

Am I doing the right thing here?

Thanks for your help.


Comments

Is there a reason why you are doing this with flat networks? Do you need access outside of your cloud for Internal mgmt, and the ingress, egress networks? If not those should be used as vxlan networks. As for routing, your routing would be forwarding from your floating IP to any of the ports.

jdexter ( 2015-04-10 11:05:01 -0500 )
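
(For reference, the VXLAN alternative jdexter suggests would only require extending the ML2 type drivers - a sketch of ml2_conf.ini, not taken from the thread:)

[ml2]
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000

[ovs]
local_ip = 10.30.x.x
tunnel_type = vxlan
enable_tunneling = True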

Wouldn't performance be an issue to deal with if they weren't flat networks?

The control node surely needs access from the outside world.

Instances can do without outside access.

I understand that a floating IP to any of them would work. Thanks for the input.

nfv-zombie ( 2015-04-10 14:06:39 -0500 )

answered 2015-04-10 08:09:43 -0500

jdexter

What you are looking to do is have 2 flat provider networks.
I would set the flat network setting in your ml2_conf.ini file to *:

/etc/neutron/plugins/ml2/ml2_conf.ini
...
[ml2_type_flat]
flat_networks = *

Then set your flat networks in the OVS plugin config. For flat provider networks, you put the abstract name of the physical network under network_vlan_ranges without any VLAN tags; a name with no range denotes a flat network. You then add the bridge mappings so that OVS knows which provider network is mapped to which bridge. (If you are using a provider network, OVS takes full control of the interface; the only thing you need is for the NIC to be up.)

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
...
[ovs]
network_vlan_ranges = AccessNet,CoreNet
integration_bridge = br-int
bridge_mappings = AccessNet:br-eth2,CoreNet:br-eth3

Once you have setup the ovs plugin, add your bridge mappings to OVS.

$ ovs-vsctl add-br br-eth2
$ ovs-vsctl add-port br-eth2 eth2
$ ovs-vsctl add-br br-eth3
$ ovs-vsctl add-port br-eth3 eth3
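
Since OVS owns the physical interface from here on, the underlying NIC config only needs to bring the link up with no IP address. On RHEL/CentOS that would look something like this (a sketch):

# /etc/sysconfig/network-scripts/ifcfg-eth2 - no IP; OVS owns the interface
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none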

Restart neutron. If everything is configured correctly, a phy-<bridgeName> and an int-<bridgeName> port pair will be created when the openvswitch-agent restarts. If not, check your settings.
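
(On a yum-based install, that restart and check would be roughly the following, assuming the stock service names:)

# Restart the OVS agent and confirm the patch ports were created
service neutron-openvswitch-agent restart
ovs-vsctl list-ports br-int     # should show int-br-eth2, int-br-eth3
ovs-vsctl list-ports br-eth2    # should show phy-br-eth2 and eth2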

The last piece is to tell neutron that you are using a provider network when creating the network.

$ neutron net-create ext_net --provider:network_type flat --provider:physical_network AccessNet [--router:external=True]
$ neutron subnet-create ext_net --gateway 10.35.1.254 10.35.1.0/24 [-- --enable_dhcp=True]
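
(The same pattern covers the second flat network; e.g., with names assumed:)

$ neutron net-create core_net --provider:network_type flat --provider:physical_network CoreNet
$ neutron subnet-create core_net 172.16.0.0/16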
