neutron does not configure openvswitch correctly: no gre tunnels

asked 2015-04-02 08:48:53 -0500 by Niobos

updated 2015-04-02 09:43:39 -0500

I'm struggling to get Neutron networking up and running. From my own debugging, I believe the issue to be that the GRE tunnels are not being set up between the Nova-node and the Neutron-node.

First some context. I'm trying out OpenStack Juno on Debian Wheezy. It's a 3-node setup with a controller, a nova node, and a neutron node. Technically, though, these all run on the same physical machine: the controller in an LXC container, nova and neutron as KVM guests, with the networks implemented as bridges. I'm fairly confident this is not the issue: I've successfully built a similar setup with Ubuntu and OpenStack before.

The network setup

  • The Controller node has 2 network interfaces: eth0 connects to the corporate network ("the internet"; I've changed the IPs in the dumps to protect the innocent) for API access, and eth1 connects to the OpenStack management network.

  • The Nova node has 3 network interfaces: eth0 connects to the management network, eth1 to the tunnel network, and eth2 is unused (future storage network).

  • The Neutron node has 3 network interfaces: eth0 is management, eth1 is tunnel, and eth2 is the corporate network (no IP address assigned).

I can ping every node from every other node via the management network; nova & neutron can ping each other via the tunnel network.

The problem

I'll start by describing the actual problem: I can successfully spin up an instance on Nova, and work with it via the console. Networking however does not work. The instance receives a DHCP timeout when trying to get an IP address. The instance is connected to its own tenant subnet, which is connected to the internet via a tenant router. I can ping the router from "the internet", so that part of Neutron seems to be working correctly.

Now, a few assumptions and experimental data:

  • I've traced the DHCP requests with tcpdump. I can see them exiting the instance, going through the veth pair, and arriving at the br-int integration bridge.
  • I don't see the packets on the br-tun tunnel bridge, nor do I see any GRE tunnel configured there. I was expecting a pair of GRE interfaces connecting the nova node to the neutron node.
  • From the logs, it looks like neutron/openvswitch is confused: ovs-vsctl show does not show any GRE ports, yet the controller logs WARNING neutron.plugins.ml2.drivers.type_gre [req-... None] Gre endpoint with ip already exists
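For reference, the tracing described above can be reproduced with commands like the following. br-int and br-tun are the default Open vSwitch agent bridge names; everything else is standard tcpdump/OVS tooling.

```shell
# Watch DHCP traffic on the integration bridge; on this setup the
# requests arrive here but never make it to the tunnel bridge.
tcpdump -n -i br-int 'udp and (port 67 or port 68)'

# List OVS bridges and their ports.  On a working GRE setup, br-tun
# should carry a gre-<remote-ip> port for each tunnel peer; here it
# shows no GRE ports at all.
ovs-vsctl show

# Dump the flows programmed on the tunnel bridge, to see whether the
# OVS agent has installed any tunnel forwarding rules.
ovs-ofctl dump-flows br-tun
```

These commands only inspect state and are safe to run on a live node.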

Configs & logs

To keep things structured, I've put a collection of configs and logs up on a webserver. If it's preferable to paste them here one below the other, please let me know. There you can find the configs of the three nodes side by side, along with the logs and command output that was previously requested on IRC.



Have you found any more info on this?

SamYaple ( 2015-04-05 07:19:57 -0500 )

1 answer


answered 2015-04-28 04:58:40 -0500

Have you found the root cause, or a workaround? I'm getting similar logs. I have only 2 nodes, so the controller and network node are installed together on a single node, and compute is the other node.

I can see the GRE port on the COMPUTE node, but on the controller (which is also the network node) I cannot see the GRE port.

Similar log entry: 2015-04-22 15:55:30.428 15064 WARNING neutron.plugins.ml2.drivers.type_gre [req-903f1aab-693c-49b4-8fd8-1b954445adcf None] Gre endpoint with ip already exists
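One thing worth checking for this warning: in Juno, the GRE type driver records each agent's tunnel IP in the neutron database table ml2_gre_endpoints, and a stale row (for example, left over after an agent's local_ip changed) can trigger exactly this "already exists" message. A hedged sketch of how to inspect it; the database name and credentials are assumptions, so adjust them for your deployment:

```shell
# List the GRE endpoints neutron currently knows about.
mysql -u neutron -p neutron -e 'SELECT * FROM ml2_gre_endpoints;'

# If a listed IP no longer matches any agent's local_ip, it is stale.
# After backing up the database, it can be removed, followed by a
# restart of neutron-server and the OVS agents:
# mysql -u neutron -p neutron \
#   -e "DELETE FROM ml2_gre_endpoints WHERE ip_address = '<stale-ip>';"
```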



I solved it by commenting out type_drivers = gre in the ML2 config file. ( 2015-04-28 06:48:59 -0500 )
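For readers unfamiliar with the file in question: the workaround above refers to the [ml2] section of /etc/neutron/plugins/ml2/ml2_conf.ini. A rough illustration of the change; the thread does not say what the driver list was changed to, so the flat,vxlan value below is only an example, not the commenter's actual config:

```ini
[ml2]
# Workaround described above: the gre entry commented out.
#type_drivers = gre
type_drivers = flat,vxlan
```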

If you comment out gre in type_drivers, how can you still use GRE? Or did you choose VXLAN instead?

Steven Li ( 2015-11-16 20:52:25 -0500 )


Asked: 2015-04-02 08:48:53 -0500 · Seen: 1,639 times · Last updated: Apr 02 '15