Niobos's profile - activity

2015-04-02 08:50:14 -0600 asked a question neutron does not configure openvswitch correctly: no gre tunnels

I'm struggling to get Neutron networking up and running. From my own debugging, I believe the issue to be that the GRE tunnels are not being set up between the Nova-node and the Neutron-node.

First, some context. I'm trying out OpenStack Juno on Debian Wheezy. It's a 3-node setup with a controller, a nova node and a neutron node. Technically, though, these all run on the same physical machine: the controller is an LXC container, nova & neutron are KVM machines, and the networks are bridges. I'm fairly confident this is not the issue: I've successfully made a similar setup with Ubuntu & OpenStack before.

The network setup

  • The Controller node has 2 network interfaces: eth0 connects to the corporate network ("the internet"; I've redacted the IPs in the dumps to protect the innocent) for API access, eth1 connects to the OpenStack management network.

  • The Nova node has 3 network interfaces: eth0 connects to the management network, eth1 to the tunnel network, eth2 is unused (future storage network).

  • The Neutron node has 3 network interfaces: eth0 is management, eth1 is tunnel, eth2 is the corporate network (no IP address).

I can ping every node from every other node via the management network; nova & neutron can ping each other via the tunnel network.
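For reference, the GRE-relevant parts of an ML2/OVS configuration on Juno typically look like the fragment below. This is a sketch, not my actual config: the file path is the usual default, and 10.0.1.X stands in for each node's own (redacted) tunnel-network address.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative fragment)
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
# Must be this node's own tunnel-network IP; different on nova and neutron.
local_ip = 10.0.1.X
enable_tunneling = True

[agent]
tunnel_types = gre
```

If local_ip is missing, wrong, or identical on two nodes, the OVS agent will not build the expected tunnel ports.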

The problem

I'll start by describing the actual problem: I can successfully spin up an instance on Nova, and work with it via the console. Networking however does not work. The instance receives a DHCP timeout when trying to get an IP address. The instance is connected to its own tenant subnet, which is connected to the internet via a tenant router. I can ping the router from "the internet", so that part of Neutron seems to be working correctly.

Now, a few assumptions and experimental data:

  • I've traced the DHCP requests with tcpdump. I can see them exiting the instance, going through the veth pair, and arriving at the br-int integration bridge.
  • I don't see the packets on the br-tun tunnel bridge, nor do I see any GRE-tunnel configured there. I was expecting a pair of GRE-interfaces connecting the nova-node to the neutron-node.
  • From the logs, it looks like neutron/openvswitch is confused: ovs-vsctl show does not list any GRE ports, yet the controller logs WARNING neutron.plugins.ml2.drivers.type_gre [req-... None] Gre endpoint with ip already exists
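For comparison, when tunnelling works, ovs-vsctl show lists one gre-* port on br-tun per remote endpoint. A small sketch of what to look for (the here-doc stands in for real ovs-vsctl show output; the port names and IPs are illustrative, not from my setup):

```shell
# On a real node, replace the here-doc with: ovs_output=$(ovs-vsctl show)
ovs_output=$(cat <<'EOF'
    Bridge br-tun
        Port "gre-0a000102"
            Interface "gre-0a000102"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.1", out_key=flow, remote_ip="10.0.1.2"}
EOF
)
# A healthy tunnel bridge has at least one interface of type gre.
if echo "$ovs_output" | grep -q 'type: gre'; then
    echo "GRE tunnel ports present"
else
    echo "no GRE tunnel ports on br-tun"
fi
```

On my nodes, the real ovs-vsctl show output contains no such ports at all, which matches the DHCP packets dying at br-int.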

Configs & logs

To keep things structured, I've put a collection of configs & logs up on a webserver. If it's desirable to paste them here one below the other, please let me know. There you can find the configs of the three nodes next to each other, along with logs and command output that was previously requested on IRC.
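One more data point on the "Gre endpoint with ip already exists" warning: in Juno's ML2 GRE type driver that message is logged when the tunnel IP is already registered in Neutron's ml2_gre_endpoints table, e.g. a stale row left over from an earlier run. A sketch of how I'd inspect it (the here-doc stands in for real query output; the DB name, credentials and IPs are assumptions for illustration):

```shell
# On the controller, the real query would be something like:
#   mysql -u neutron -p neutron -e 'SELECT ip_address FROM ml2_gre_endpoints;'
endpoints=$(cat <<'EOF'
ip_address
10.0.1.1
10.0.1.2
EOF
)
# Any listed IP that no longer matches a running agent's local_ip is stale;
# deleting that row and restarting the OVS agents lets them re-register.
echo "$endpoints" | tail -n +2 | while read -r ip; do
    echo "registered GRE endpoint: $ip"
done
```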