neutron does not configure openvswitch correctly: no gre tunnels

I'm struggling to get Neutron networking up and running. From my own debugging, I believe the issue is that the GRE tunnels are not being set up between the nova node and the neutron node.

First some context. I'm trying out OpenStack Juno on Debian Wheezy. It's a 3-node setup with a controller, a nova node, and a neutron node. Technically, though, these all run on the same physical machine: the controller in an LXC container, nova & neutron as KVM machines, and the networks as bridges. I'm fairly confident that this is not the issue: I've successfully built a similar setup with Ubuntu & OpenStack before.

The network setup

  • The Controller node has 2 network interfaces: eth0 connects to the corporate network ("the internet"; I've changed the IP to 192.0.2.0/24 in the dumps to protect the innocent) for API access, eth1 connects to the OpenStack management network (10.0.0.2/24).

  • The Nova node has 3 network interfaces: eth0 connects to the management network (10.0.0.3/24), eth1 to the tunnel network (10.0.1.3/24), eth2 is unused (future storage network).

  • The Neutron node has 3 network interfaces: eth0 is management (10.0.0.4/24), eth1 is tunnel (10.0.1.4/24), eth2 connects to the corporate network (no IP address assigned).

I can ping every node from every other node via the management network; nova & neutron can ping each other via the tunnel network.
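
For reference, here is a minimal sketch of the tunnel-related agent settings I'd expect on the nova and neutron nodes for this layout (standard Juno ML2/OVS options; this is a sketch, not my verbatim config, and local_ip differs per node):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [ovs]
    local_ip = 10.0.1.3        # 10.0.1.4 on the neutron node
    enable_tunneling = True

    [agent]
    tunnel_types = gre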

The problem

I'll start by describing the actual problem: I can successfully spin up an instance on Nova and work with it via the console. Networking, however, does not work: the instance hits a DHCP timeout when trying to obtain an IP address. The instance is connected to its own tenant subnet, which is connected to the internet via a tenant router. I can ping the router from "the internet", so that part of Neutron seems to be working correctly.

Now, a few assumptions and experimental data:

  • I've traced the DHCP requests with tcpdump (capture commands below). I can see them exiting the instance, going through the veth pair, and arriving at the br-int integration bridge.
  • I don't see the packets on the br-tun tunnel bridge, nor do I see any GRE tunnel configured there. I was expecting a pair of GRE interfaces connecting the nova node to the neutron node; a sketch of what I expected is below.
  • From the logs, it looks like neutron/openvswitch is confused: ovs-vsctl show does not show any GRE ports, yet the controller logs WARNING neutron.plugins.ml2.drivers.type_gre [req-... None] Gre endpoint with ip 10.0.1.3 already exists (see the database query below).
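
These are the kinds of captures I used to follow the DHCP requests (the tap device name is an example; the real ones can be listed with ovs-vsctl list-ports br-int):

    # on the nova node: DHCP traffic at the instance's tap device
    tcpdump -n -e -i tapXXXXXXXX-XX port 67 or port 68

    # on the tunnel NIC: is there any GRE-encapsulated traffic at all?
    tcpdump -n -i eth1 ip proto gre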
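
For comparison, this is roughly what I expected ovs-vsctl show to report on the nova node once the tunnel was up; the port name encodes the remote IP (10.0.1.4) in hex, per the usual OVS agent naming:

    Bridge br-tun
        Port "gre-0a000104"
            Interface "gre-0a000104"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.3", out_key=flow, remote_ip="10.0.1.4"}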
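
The "already exists" warning makes me suspect a stale endpoint row left in the database from an earlier run of the agent. Assuming the default MySQL backend and a database named neutron, the endpoints the ML2 plugin knows about can be inspected like this (the DELETE is only a guess at a cleanup, which I haven't verified):

    -- GRE endpoints registered with the ML2 plugin
    SELECT * FROM neutron.ml2_gre_endpoints;

    -- possibly: remove the stale row and restart neutron-openvswitch-agent
    -- DELETE FROM neutron.ml2_gre_endpoints WHERE ip_address = '10.0.1.3';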

Configs & logs

To keep things structured, I've put a collection of configs & logs up on a webserver. If it's preferable to paste them here one below the other, please let me know. At http://niobos.dest-unreach.be/os/ you can find the configs of the three nodes next to each other, along with logs and command output that was previously requested on IRC.