# Icehouse Tutorial: Neutron ML2 GRE External Gateway IP Not Pingable/Routed To on Ubuntu 14.04 over br-ex

Hi,

I've been trying to follow a tutorial to set up a three-node OpenStack Icehouse configuration from scratch in a private VirtualBox environment. The guest VMs run Ubuntu 14.04. So far I've been able to follow along with everything except the networking portion.

This is the tutorial I've been following: http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_basics.html

I have the following VirtualBox networks attached to the network node running all the Neutron OVS pieces. The Neutron service itself runs on a separate controller node as described in the guide linked above. I'm trying to use GRE tunnels for the transport type.

vmnet0: 10.10.10.0/24 Management Network (Virtualbox Host Only)

vmnet1: 10.20.20.0/24 Internal Tunnel Network to Compute Nodes (Virtualbox Host Only)

vmnet2: 10.0.2.0/24 My intended external interface, without an IP configured, behind a VirtualBox NAT interface. DHCP off. (Static NAT)

vmnet3: 10.0.5.0/24 Another NAT interface with DHCP, just for easy Internet access to install packages. (Regular NAT)

Problem is, after installing and setting up the external bridge interface, the verification step of pinging the external gateway interface does not work from the network node. I presume this should be possible, since the tutorial instructs you to do exactly that?
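
Per the guide's verify step, the tenant router gateway should take the lowest address in the floating IP range, so with my addressing the ping I'd expect to succeed from the network node is:

ping -c 4 10.0.2.101

where 10.0.2.101 is the first address of the allocation pool I define below.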

Here's how I created the external subnet:

neutron subnet-create ext-net --name ext-subnet \
> --allocation-pool start=10.0.2.101,end=10.0.2.200 \
> --disable-dhcp --gateway 10.0.2.1 10.0.2.0/24
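
(For reference, ext-net itself was created earlier following the guide, with something like:

neutron net-create ext-net --shared --router:external=True

and the subnet above was then attached to it.)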


And I attached it like so:

neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
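
Since the router's external gateway actually lives inside a network namespace on the network node, I understand it can also be checked directly there (a sketch; qrouter-<router-uuid> is a placeholder, with the UUID taken from neutron router-list):

ip netns list
ip netns exec qrouter-<router-uuid> ip addr show
ip netns exec qrouter-<router-uuid> ping -c 3 10.0.2.1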


Running tracepath just shows the default gateway handing the packet to my physical router out in my apartment, which obviously isn't going to know where to send it, so I don't think tcpdump will tell me anything more:

tracepath 10.0.2.101
1?: [LOCALHOST]                                         pmtu 1500
1:  10.0.5.2                                              0.664ms
1:  10.0.5.2                                              0.586ms
2:  therouter.internal                                    0.909ms asymm 64


Here's the routing table on the network node, with no modifications done. Shouldn't Neutron or OVS have updated this for me? Do I need to launch an instance before the routing table gets updated?

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.5.2        0.0.0.0         UG    0      0        0 eth3
10.0.5.0        0.0.0.0         255.255.255.0   U     0      0        0 eth3
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.20.20.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
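
For comparison, what I expected to see once br-ex was configured is an additional connected route along these lines (hypothetical; it is not in my table):

10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 br-ex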


/etc/network/interfaces

# This file describes the network interfaces available on your system

# The loopback network interface
auto lo
iface lo inet loopback

#VMNET0 ...


Looks like I was able to resolve it.

Took pieces of this tutorial to update the Neutron config files on my network node: http://virtual2privatecloud.com/install-havana-on-ubuntu/

I updated my /etc/network/interfaces file to look like the below after all the Open vSwitch components were installed. Compared to my last one, you'll notice I'm now specifying br-ex and adding all the necessary network information for it in the config file (the addresses below are examples; substitute your own). The Icehouse tutorial does not mention that this needs to be done.

# This file describes the network interfaces available on your system

# The loopback network interface
auto lo
iface lo inet loopback

#VMNET0 10.10.10.0/24 Internal Management Network
auto eth0
iface eth0 inet static
# example address; use this node's management IP
address 10.10.10.51
netmask 255.255.255.0

#VMNET1 10.20.20.0/24 Private Tunnel Link Between Compute Nodes
auto eth1
iface eth1 inet static
# example address; use this node's tunnel-network IP
address 10.20.20.51
netmask 255.255.255.0

#VMNET2 10.0.2.0/24 NIC on the NAT network without DHCP, put into promiscuous mode.
auto eth2
iface eth2 inet manual
up ifconfig $IFACE up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

# After installing Open vSwitch and the Neutron agents, you need to
# actually specify the br-ex interface in this interfaces file.
auto br-ex
iface br-ex inet static
# example address; pick one on 10.0.2.0/24 outside the floating IP pool
address 10.0.2.51
netmask 255.255.255.0
gateway 10.0.2.1
dns-nameservers 8.8.8.8
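
One thing worth calling out from the guide: br-ex has to exist in Open vSwitch with the physical NIC attached before this interfaces stanza does anything useful. Per the Icehouse guide, that's:

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2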


The next change I borrowed from that tutorial is that in the ml2_conf.ini file I specify more information under the [ovs] section than the Icehouse guide mentions. Whether this is actually necessary with ML2 I have yet to test, but the service did not error out with the following lines added:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.53


Note the above config is the same on the network node and the compute node, with the exception of the local_ip field, which obviously needs to be different on each node.
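
In other words, each node's local_ip is just that node's own address on the 10.20.20.0/24 tunnel network, e.g.:

# on each node, local_ip = that node's eth1 (tunnel network) address
local_ip = 10.20.20.X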

So yeah, I believe once I added those lines I was able to restart all the relevant services on the network and compute nodes, and there were no errors in the new logs.
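
For the record, restarting the relevant agents on the network node looked roughly like this (a sketch, assuming the standard Ubuntu 14.04 Icehouse package names; adjust to whatever you actually installed):

sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-l3-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart

Now here's the output of my ovs-vsctl show: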

Bridge br-tun
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-0a141434"
        Interface "gre-0a141434"
            type: gre
            options: {in_key=flow, local_ip="10.20.20.51", out_key=flow, remote_ip="10.20.20.52"}
Bridge br-int
    fail_mode: secure
    Port "tapcd27629f-76"
        tag: 1
        Interface "tapcd27629f-76"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qr-460bddb7-ef"
        tag: 1
        Interface "qr-460bddb7-ef"
            type: internal
    Port br-int
        Interface br-int
            type ...

@Strelik: this is great! Many people have run into exactly this problem, but you are the first I have seen post a solution. May I suggest that you file a bug against the document?

(2014-08-27 23:37:56 -0500)

Followed your advice to a tee, but I'm still having issues with compute nodes getting out to the Internet through the network controller. Can you help out, please? Do I need to set up some NAT like in Grizzly?

(2014-09-24 15:49:49 -0500)

Why is your gateway below different from the one mentioned above? Also, is the routing info from the compute node or the network node?

(2014-09-26 13:02:11 -0500)

I have a similar problem. But can you access the instances via their floating IPs?

(2014-12-08 09:07:11 -0500)