
Icehouse Tutorial: Neutron ML2 GRE External Gateway IP Not Pingable/Routed To on Ubuntu 14.04 over br-ex

asked 2014-08-24 22:36:32 -0500

Strelik

Hi,

I've been trying to follow a tutorial to set up a three-node OpenStack Icehouse configuration from scratch in a private VirtualBox environment. The guest VMs are running Ubuntu 14.04. So far I've been able to follow along with everything except the networking portion.

This is the tutorial I've been following: http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_basics.html

I have the following VirtualBox networks attached to the network node, which runs all the Neutron OVS agents. The Neutron service itself is running on a separate controller node, as described in the guide linked above. I'm trying to use GRE tunnels as the transport type.

vmnet0: 10.10.10.0/24 Management network (VirtualBox host-only)

vmnet1: 10.20.20.0/24 Internal tunnel network to the compute nodes (VirtualBox host-only)

vmnet2: 10.0.2.0/24 My intended external interface, with no IP configured, behind a VirtualBox NAT interface. DHCP off. (Static NAT)

vmnet3: 10.0.5.0/24 Another NAT interface, with DHCP, just for easy Internet access to install packages. (Regular NAT)

The problem is that after installing and setting up the external bridge interface, I can't verify it's up by pinging the external gateway IP from the network node. I presume this should be possible, since the tutorial instructs you to do exactly that?

Here's how I created the external subnet:

neutron subnet-create ext-net --name ext-subnet \
> --allocation-pool start=10.0.2.101,end=10.0.2.200 \                        
> --disable-dhcp --gateway 10.0.2.1 10.0.2.0/24
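(For context, ext-net itself was created earlier following the guide; that step looks roughly like this:)

neutron net-create ext-net --shared --router:external=True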

And I attached it like so:

neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
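(A quick sanity check that the gateway really got attached, using just the standard CLI:)

# external_gateway_info should now reference the ext-net network ID
neutron router-show demo-router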

Running tracepath just results in the default gateway sending the packet to my physical router out in my apartment, which obviously isn't going to know where to send it, so I don't think I need to look at this with tcpdump:

tracepath 10.0.2.101
1?: [LOCALHOST]                                         pmtu 1500
1:  10.0.5.2                                              0.664ms 
1:  10.0.5.2                                              0.586ms 
2:  therouter.internal                                    0.909ms asymm 64 
3:  no reply
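(A quicker way to confirm which route the host picks for that address, plain iproute2, nothing Neutron-specific:)

# with no 10.0.2.0/24 route on the host, this falls through to the default via 10.0.5.2
ip route get 10.0.2.101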

Here's the routing table on the network node, with no modifications done. Shouldn't Neutron or OVS have updated this for me? Do I need to launch an instance before the routing table is updated?

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.5.2        0.0.0.0         UG    0      0        0 eth3
10.0.5.0        0.0.0.0         255.255.255.0   U     0      0        0 eth3
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.20.20.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
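(My understanding is that the Neutron router's interfaces and routes live inside a network namespace on the network node rather than in the host routing table, so presumably the place to check is something like the following; qrouter-<ROUTER_ID> is a placeholder for my router's actual ID:)

# the L3 agent keeps the router's ports and routes in their own namespace
ip netns
ip netns exec qrouter-<ROUTER_ID> ip addr
ip netns exec qrouter-<ROUTER_ID> ip route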

Here's more info on my configuration:

/etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

#VMNET0 ...

1 answer


answered 2014-08-27 20:28:58 -0500

Strelik

Looks like I was able to resolve it.

I took pieces of this tutorial to update the Neutron config files on my network node: http://virtual2privatecloud.com/install-havana-on-ubuntu/

I updated my /etc/network/interfaces file to look like the below after all the Open vSwitch components were installed. Compared to my last one, you'll notice I'm declaring br-ex and essentially adding all the necessary network information for it in this config file. The Icehouse tutorial does not say that this needs to be done.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

#VMNET0 10.10.10.0/24 Internal Management Network
auto eth0
iface eth0 inet static
address 10.10.10.52
netmask 255.255.255.0
network 10.10.10.0

#VMNET2 10.20.20.0/24 Private Tunnel Link Between Compute Nodes
auto eth1
iface eth1 inet static
address 10.20.20.51
netmask 255.255.255.0
network 10.20.20.0

# Create a NIC on a NAT network without DHCP and put it in promiscuous mode.
auto eth2
iface eth2 inet manual
up ifconfig $IFACE up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

# After you install Open vSwitch (which happens after setting up Neutron),
# you need to actually declare the br-ex interface in your interfaces file.
auto br-ex
iface br-ex inet static
address 10.0.2.10
netmask 255.255.255.0
gateway 10.0.2.1
dns-nameservers 8.8.8.8
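For reference, br-ex itself also has to exist and have the physical NIC attached before this stanza does anything useful; the Icehouse guide does that part with ovs-vsctl, which in my layout (eth2 being the un-addressed external NIC) would be roughly:

# create the external bridge and attach the un-addressed NIC to it
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2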

The next change I borrowed from that tutorial is that in the ml2_conf.ini file I specify more information under the [ovs] section than the Icehouse guide mentions. Whether or not this is actually necessary with ML2 I have yet to test, but the service did not error out with the following line items added.

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.53

Note that the above config is the same on the network node and the compute node, with the exception of the local_ip field, which obviously needs to be different on each.
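For what it's worth, the restarts on Ubuntu 14.04 look roughly like this (service names as packaged for Icehouse; adjust for whichever agents you actually run):

# on the network node
service neutron-plugin-openvswitch-agent restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart

# on the compute node
service neutron-plugin-openvswitch-agent restart

# then confirm the agents report in
neutron agent-list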

So yeah, once I added those lines and restarted everything on the network and compute nodes, there were no errors in the new logs. Here's the output of ovs-vsctl show:

Bridge br-tun
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-0a141434"
        Interface "gre-0a141434"
            type: gre
            options: {in_key=flow, local_ip="10.20.20.51", out_key=flow, remote_ip="10.20.20.52"}
Bridge br-int
    fail_mode: secure
    Port "tapcd27629f-76"
        tag: 1
        Interface "tapcd27629f-76"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qr-460bddb7-ef"
        tag: 1
        Interface "qr-460bddb7-ef"
            type: internal
    Port br-int
        Interface br-int
            type ...
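With br-ex now carrying 10.0.2.10 and the GRE tunnel up, a couple of quick checks that the external path actually works (qrouter-<ROUTER_ID> is again a placeholder for the real router ID):

# from the network node itself, over br-ex
ping -c 3 10.0.2.1

# and from inside the Neutron router namespace, via its qg- port
ip netns exec qrouter-<ROUTER_ID> ping -c 3 10.0.2.1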

Comments

@Strelik: this is great! Many people have run into exactly this problem, but you are the first I have seen post a solution. May I suggest that you file a bug against the document?

Beel ( 2014-08-27 23:37:56 -0500 )

Followed your advice to a tee, but I'm still having issues with compute nodes getting out to the internet through the network controller. Can you help out, please? Do I need to set up some NAT like in Grizzly?

bgyako ( 2014-09-24 15:49:49 -0500 )

Why is the gateway below different from the one you mentioned above? Also, is the routing info from the compute node or the network node?

bgyako ( 2014-09-26 13:02:11 -0500 )

I have a similar problem. Can you access the instances via their floating IPs?

sasakii ( 2014-12-08 09:07:11 -0500 )


Stats

Asked: 2014-08-24 22:36:32 -0500

Seen: 1,764 times

Last updated: Aug 27 '14