Looks like I was able to resolve it.

Took pieces of this tutorial to update the Neutron config files on my network node: http://virtual2privatecloud.com/install-havana-on-ubuntu/

Updated my /etc/network/interfaces file to look like the following after all the Open vSwitch components were installed. Compared to my last one, you'll notice I'm now specifying br-ex and adding all the necessary network information for it directly in the config file. The Icehouse tutorial doesn't mention that this step is needed.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

#VMNET0 10.10.10.0/24 Internal Management Network
auto eth0
iface eth0 inet static
address 10.10.10.52
netmask 255.255.255.0
network 10.10.10.0

#VMNET2 10.20.20.0/24 Private Tunnel Link Between Compute Nodes
auto eth1
iface eth1 inet static
address 10.20.20.51
netmask 255.255.255.0
network 10.20.20.0

# Create a NIC on a NAT network without DHCP and put it in promiscuous mode.
auto eth2
iface eth2 inet manual
up ifconfig $IFACE up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

# After you install Open vSwitch on top of Neutron, you need to
# explicitly declare the br-ex interface in this file.
auto br-ex
iface br-ex inet static
address 10.0.2.10
netmask 255.255.255.0
gateway 10.0.2.1
dns-nameservers 8.8.8.8
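
For completeness, the br-ex stanza above only works once the bridge itself exists. This is just a sketch of the Open vSwitch side, assuming the standard ovs-vsctl and ifupdown tools (the linked Havana tutorial covers this part as well):

# Create the external bridge and attach the promiscuous NIC to it
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2

# Bring the new stanzas up (a reboot works too)
ifup eth2
ifup br-ex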

The next change I borrowed from that tutorial is specifying more information under the [ovs] section of ml2_conf.ini than the Icehouse guide mentions. Whether this is actually necessary with ML2 I have yet to test, but the service did not error out with the following lines added.

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.53

Note the above config is the same on the network node and the compute node, with the exception of the local_ip field, which obviously needs to be different on each node.
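
For reference, restarting the relevant services looked roughly like this; the service names below assume the Ubuntu Icehouse packages, so adjust them for your distro:

# Network node
service neutron-plugin-openvswitch-agent restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart

# Compute node
service neutron-plugin-openvswitch-agent restart
service nova-compute restart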

So yeah, once I added those lines I was able to restart all the relevant services on the network and compute nodes and there were no errors in the new logs. Here's the output of my ovs-vsctl show:

Bridge br-tun
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-0a141434"
        Interface "gre-0a141434"
            type: gre
            options: {in_key=flow, local_ip="10.20.20.51", out_key=flow, remote_ip="10.20.20.52"}
Bridge br-int
    fail_mode: secure
    Port "tapcd27629f-76"
        tag: 1
        Interface "tapcd27629f-76"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qr-460bddb7-ef"
        tag: 1
        Interface "qr-460bddb7-ef"
            type: internal
    Port br-int
        Interface br-int
            type: internal
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port "eth2"
        Interface "eth2"
    Port "qg-dfc1d737-29"
        Interface "qg-dfc1d737-29"
            type: internal
ovs_version: "2.0.1"

Cool, now I can see the tap interfaces along with the GRE ports. Actually, I don't think these show up until you create the networks and connect them to the router, but clearly none of this was present in the output from my OP. The host routing table doesn't seem to have changed though... maybe namespaces have something to do with that (see the namespace check after the table).

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.1        0.0.0.0         UG    0      0        0 br-ex
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 br-ex
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.20.20.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
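
If you want to confirm that the tenant routing really lives inside the Neutron router namespace rather than on the host, something like this works (the qrouter ID below is just a placeholder, so list yours first):

# List the namespaces Neutron created (qrouter-... and qdhcp-...)
ip netns

# Look at the routing table and test connectivity inside the router namespace
ip netns exec qrouter-<router-id> ip route
ip netns exec qrouter-<router-id> ping -c 3 8.8.8.8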

So once I got a successful ping from the 10.0.2.101 address on br-ex, I knew I was off to a better start. I launched an instance, connected to it from the VNC console, logged in, and was able to ping an address on the internet.
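
To reproduce that check, a couple of pings are enough (10.0.2.101 was just the external address that answered in my setup, so substitute your own):

# From the network node: the external address on br-ex should answer
ping -c 3 10.0.2.101

# From inside the instance (via the VNC console): outbound traffic should work
ping -c 3 8.8.8.8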

So yup, I really hope this helps anyone else having the same issues with the Icehouse tutorial on the OpenStack site if they're following it line for line. Hopefully the authors will correct or clarify the networking services portion of the tutorial going forward.