Floating IP not working (in-/egress from/to outside) [Neutron, OVS, GRE]

asked 2013-10-15 09:31:46 -0500 by red_trela

updated 2013-10-16 03:27:02 -0500

Trying to get Havana (RC1 from RDO) to work with Neutron using OVS and GRE tunnels. There's a single controller which also acts as the network node, and there are currently two compute nodes. When I launch an instance, it gets a fixed IP which seems to work fine. But then I assign a floating IP from a range that is supposedly routed within our network, and now I can neither reach our network from within the instance nor the other way around. I can't even ping the external gateway from within the instance, and I can't ping the instance from the controller (and yes, I opened up the security group to allow all ICMP from 0.0.0.0/0).
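For reference, the ICMP rule was added with something along these lines (a sketch from memory; that the security group is called "default" is an assumption):

[root@ctrl]# neutron security-group-rule-create --direction ingress \
             --protocol icmp --remote-ip-prefix 0.0.0.0/0 default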

Controller (aka ctrl below):
- eth0 is added to br-ex
- br-ex has the IP 129.132.32.7 (a range that is not used anywhere else in the setup)
- eth1 has the IP 10.112.1.30, which is the management network (over which the GRE tunnels go)
- eth2 connects to our storage network and can be ignored in this scenario
- Can ping the Floating Gateway (external gateway) in the default namespace
- Can ping the Floating Router and the Floating IP in the qrouter namespace
- Can NOT ping the Floating Gateway in the qrouter namespace
- Can NOT ping the Floating Router or the Floating IP in the default namespace (see the command sketch below)
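Roughly, the namespace pings above were run like this (a sketch; the qrouter namespace name contains the router UUID and is a placeholder here, look it up with ip netns list):

[root@ctrl]# ip netns list
[root@ctrl]# ping -c 3 172.31.116.1                                        # Floating Gateway, default namespace: works
[root@ctrl]# ping -c 3 172.31.116.12                                       # Floating IP, default namespace: no reply
[root@ctrl]# ip netns exec qrouter-<router-uuid> ping -c 3 172.31.116.12   # Floating IP, qrouter namespace: works
[root@ctrl]# ip netns exec qrouter-<router-uuid> ping -c 3 172.31.116.1    # Floating Gateway, qrouter namespace: no reply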

Compute Nodes:
- eth0 has the IP 10.112.1.{5,13}, which again is the management network (over which the GRE tunnels go)

Instance (aka instance below):
- Private IP of 10.1.0.2
- Floating IP of 172.31.116.12
- Can ping itself on the Private IP
- Can ping the Private Gateway
- Can ping itself on the Floating IP
- Can ping the Floating Router
- Can NOT ping the Floating Gateway

So what seems to be missing is a connection between the two namespaces (default and qrouter). A tcpdump on eth0 shows that when the instance tries to ping the Floating Gateway, there is an ARP request but apparently no answer. My guess would be that this happens because the network doesn't know where to send the answer (i.e. the Floating IP is unknown to the rest of the network). Really no idea what I'm missing here.
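The capture was run along these lines (a sketch; the command only, I'm leaving the output out for now):

[root@ctrl]# tcpdump -n -i eth0 'arp or icmp'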

Suspiciously, ALL ports on the Floating Network are shown as DOWN (while those on the Private Network are UP). I know there used to be a bug where ports were shown as DOWN but actually worked, but that should be fixed by now, and if I remember correctly that bug made the ports on all networks show as DOWN, which is not what I'm seeing here.
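The port status was checked roughly like this (a sketch; <port-id> is a placeholder and the -c column selection is an assumption about the client version):

[root@ctrl]# neutron port-list -c id -c fixed_ips -c status
[root@ctrl]# neutron port-show <port-id>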

Below I'll list everything that I think could matter. The logs look clean, so I'm leaving them out, but I'm happy to provide any additional information. Any help is much appreciated, thanks!

[root@ctrl]# neutron net-show eth_zurich
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 90696787-0c11-4031-bc99-8d1db2d24744 |
| name                      | eth_zurich                           |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 4                                    |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 47e26025-f09a-4299-915e-6b2275fea59e |
| tenant_id                 | b6f22facd6bb43a5bb465097b53a087a     |
+---------------------------+--------------------------------------+

[root@ctrl]# neutron subnet-show private_v4
+------------------+---------------------------------------------------+
| Field            | Value                                             |
+------------------+---------------------------------------------------+
| allocation_pools | {"start": "172.31.116.7", "end": "172 ...
(rest of the question truncated)

1 answer

answered 2013-10-15 10:38:38 -0500 by darragh-oreilly

updated 2013-10-16 05:31:37 -0500

The subnet associated with your external network has gateway_ip 172.31.116.1. This needs to be the IP address of a real router on the link that eth0 is connected to. You should be able to ping it from the router namespace, since br-ex bridges qg-5366ad09-03 and eth0, but for some reason you cannot. Can you provide the tcpdump output? Is this setup running in virtual machines?
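You can confirm what br-ex actually bridges with something like this (a sketch):

[root@ctrl]# ovs-vsctl list-ports br-ex
[root@ctrl]# ovs-vsctl show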

I have no idea what the 129.132.32.7/26 on br-ex is for. I would remove it and leave this bridge dedicated to external access for Neutron routers for now.
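Something along these lines should do it (a sketch; only if nothing relies on that address for reaching the node, and note it will not survive a reboot unless the interface config files are changed as well):

[root@ctrl]# ip addr del 129.132.32.7/26 dev br-ex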

Update: OK, so the controller/network node is a VMware VM. I don't know how VMware networking works in detail, but it may be that the VMware virtual switch/bridge is not delivering packets to the NIC you have put in br-ex, because the destination MAC in those packets is that of the qg-xxxxxx-xx interface (check with tcpdump -e) and not that of the VMware NIC. In VMware, can you try setting the NIC for eth0 (or whatever NIC is in br-ex) to promiscuous mode? Then it should deliver all packets.
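To compare the MACs, something like this should show them (a sketch; the qrouter namespace name is a placeholder, and qg-5366ad09-03 is the external port name from above):

[root@ctrl]# ip netns exec qrouter-<router-uuid> ip link show qg-5366ad09-03   # MAC of the router's external port
[root@ctrl]# tcpdump -n -e -i eth0 'arp or icmp'                               # link-layer headers as seen on eth0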


Comments

Added the answers to the bottom of the OP, starting where it says UPDATE.

red_trela ( 2013-10-16 03:27:35 -0500 )
