
What affects neutron router external routing? [closed]

asked 2014-10-02 07:35:17 -0500 by mickt

updated 2014-10-08 08:32:03 -0500

I've added this at the top so that it may save you reading through all that follows. After some weeks of effort I have not yet achieved a fully working system comprising Icehouse with Neutron on Ubuntu. I have, however, achieved it on CentOS after two days. I'd advise those who are not tied to Ubuntu as a host OS to use RHEL/CentOS with packstack. :-) I will continue investigating on Ubuntu when time allows and follow up below.

I've installed OpenStack Icehouse with Neutron on Ubuntu 14.04 LTS. I've been struggling for a while with the Neutron set-up, specifically external routing. It has taken some time to reach this stage, where I can sometimes route externally and sometimes not, depending on how the server boots. Has anyone been through this pain and/or does anyone know what might be causing it?

root@qa-openstack01:~# ifconfig
br-ex     Link encap:Ethernet  HWaddr 00:22:64:9b:38:46
      inet addr:10.10.12.7  Bcast:10.10.12.255  Mask:255.255.255.0
      inet6 addr: fe80::899:1fff:fef1:93c0/64 Scope:Link
      UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
      RX packets:1154 errors:0 dropped:10 overruns:0 frame:0
      TX packets:286 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:82021 (82.0 KB)  TX bytes:34227 (34.2 KB)

br-int    Link encap:Ethernet  HWaddr ca:db:87:d6:d9:45
      inet6 addr: fe80::40a0:e3ff:fe54:97e2/64 Scope:Link
      UP BROADCAST RUNNING  MTU:1500  Metric:1
      RX packets:18 errors:0 dropped:0 overruns:0 frame:0
      TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:1476 (1.4 KB)  TX bytes:648 (648.0 B)

br-tun    Link encap:Ethernet  HWaddr 56:75:d4:ab:31:43
      inet6 addr: fe80::5800:afff:fe81:8fa6/64 Scope:Link
      UP BROADCAST RUNNING  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 00:22:64:9b:38:46
      UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
      RX packets:1163 errors:0 dropped:0 overruns:0 frame:0
      TX packets:310 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:91656 (91.6 KB)  TX bytes:37539 (37.5 KB)

lo        Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:7846 errors:0 dropped:0 overruns:0 frame:0
      TX packets:7846 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:1500253 (1.5 MB)  TX bytes:1500253 (1.5 MB)

virbr0    Link encap:Ethernet  HWaddr 62:bc:67:6a:a0:15
      inet addr ...
(more)
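One quick way to narrow down a boot-dependent failure like this (a sketch; it assumes the ML2/Open vSwitch agent, which the ml2_conf.ini mentioned in the answers suggests) is to compare the external path after a good boot and a bad one:

# ovs-vsctl list-ports br-ex
# ip addr show br-ex
# ip route

eth0 must appear as a port of br-ex, br-ex should carry the 10.10.12.7 address, and the default route should leave via br-ex. If any of these differ between boots, the problem is likely the start-up ordering of openvswitch and the network configuration rather than Neutron itself.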

Closed for the following reason: question is not relevant or outdated, by mickt
close date 2015-01-15 04:52:15.034136

Comments

Post the output of route -n on the Neutron node.

dbaxps ( 2014-10-02 07:48:06 -0500 )

It is already included above; this is a newly installed single node.

mickt ( 2014-10-02 07:56:24 -0500 )

3 answers

1

answered 2014-10-02 09:12:15 -0500 by mickt

I only have the /etc/neutron/plugins/ml2/ml2_conf.ini file. It is similar. I've not used packstack.

I've seen your link before from other questions. :)

2

answered 2014-10-02 08:26:13 -0500 by larsks

Showing the routing table on the Neutron node actually won't tell us anything. And here's why:

Neutron L3 routers are implemented as network namespaces on your Neutron node. You can see these namespaces with the ip netns command. For example, on my system, we see:

# ip netns
qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b
qdhcp-18fc1b31-9b2c-4b6e-8dea-7e04bfd1eb43

The qdhcp-... namespaces are used for dhcp servers for various networks. The part after the qdhcp- is the UUID of a neutron network:

$ neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id                                   | name     | subnets                                          |
+--------------------------------------+----------+--------------------------------------------------+
| 028d70dd-67b8-4901-8bdd-0c62b06cce2d | ext-nat  | 0918b173-9280-4f63-91e0-1df626830f4b             |
| 18fc1b31-9b2c-4b6e-8dea-7e04bfd1eb43 | net0     | 55c6da11-cfbe-4443-bb5f-19c4d0ea041a 10.0.0.0/24 |
+--------------------------------------+----------+--------------------------------------------------+

The qrouter-... namespaces are your Neutron routers; the part after the qrouter- is the UUID of a router:

$ neutron router-list
+--------------------------------------+------------+-------------------------------------------------------...
| id                                   | name       | external_gateway_info                                 ...
+--------------------------------------+------------+-------------------------------------------------------...
| 92a5e69a-8dcf-400a-a2c2-46c775aee06b | router-nat | {"network_id": "028d70dd-67b8-4901-8bdd-0c62b06cce2d",...
+--------------------------------------+------------+-------------------------------------------------------...

A "network namespace" is an isolated set of network resources -- interfaces, iptables rules, routing tables, etc. You can use the ip netns exec command to run commands inside the namespace. To look inside my router namespace:

# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
9: qr-416ca0b2-c8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:54:51:50 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-416ca0b2-c8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe54:5150/64 scope link 
       valid_lft forever preferred_lft forever
15: qg-2cad0370-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:f8:f4:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.10/24 brd 192.168.200.255 scope global qg-2cad0370-bb
       valid_lft forever preferred_lft forever
    inet 192.168.200.202/32 brd 192.168.200.202 scope global qg-2cad0370-bb
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef8:f4c4/64 scope link 
       valid_lft forever preferred_lft forever

The qr-... interface shown here is attached to a subnet of the net0 network. The interface name is based on the neutron port id associated with that connection.

The qg-... interface is the "external" side of the router, and is associated with my ext-nat network. This interface is created when you run the neutron router-gateway-set command. This is where floating ip addresses are attached.
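Since the interface name embeds the first 11 characters of the port UUID, you can map an interface back to its Neutron port (a sketch, using the qr- id from the listing above):

$ neutron port-list | grep 416ca0b2

This is handy when you need to tie a namespace interface back to the port and subnet that Neutron thinks it is on.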

The router also has a routing table:

# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ip route
default via 192.168.200.1 dev qg-2cad0370-bb 
10.0.0.0/24 dev qr-416ca0b2-c8  proto kernel  scope link  src 10.0.0.1 
192.168.200.0/24 dev qg-2cad0370-bb  proto kernel  scope link  src 192.168.200.10

Here we see that the default route in this router is to 192.168.200.1. This is completely different from the default route on my host, which looks like:

# ip route
default via 192 ...
(more)
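When external routing breaks, the same ip netns exec technique lets you test each hop from inside the router namespace (a sketch; substitute your own router UUID and external gateway address):

# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ping -c 2 192.168.200.1
# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b iptables -t nat -S

If the external gateway does not answer from inside the namespace, the break is between the qg- interface and the physical network (typically the br-ex wiring); if it does answer, look instead at the NAT rules and the floating ip associations.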

Comments

But why does my router route externally, then after a reboot it does not, and after further reboots it may or may not? It is the inconsistency that I do not understand.

mickt ( 2014-10-02 08:35:00 -0500 )

I was hoping that given this information you could look at your system when there are routing problems and see where things are breaking down.

larsks ( 2014-10-02 08:56:12 -0500 )

What is your host OS?

mickt ( 2014-10-08 03:35:45 -0500 )
0

answered 2014-10-02 08:31:37 -0500 by dbaxps

Forced to use the answer field to format a comment.
Your routing table on the Neutron node doesn't look good to me. Sample below:

[root@icehouse1 ~(keystone_admin)]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 br-ex
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 p37p1
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 p4p1
169.254.0.0     0.0.0.0         255.255.0.0     U     1018   0        0 br-ex
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 p4p1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br-ex

You don't have access to the nova metadata server. The first three lines in my report are a fair sample of what you should see.
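A quick way to confirm whether metadata access is actually broken (a standard check, not shown in this thread) is to query the metadata endpoint from inside an instance:

$ curl http://169.254.169.254/latest/meta-data/instance-id

If that hangs or errors, instances really cannot reach the metadata service.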


Comments

I can manually add these via the interfaces file. Is it usual to have to do so? What are your interfaces p37p1 & p4p1? I presume one of them is equivalent to my eth0?

mickt ( 2014-10-02 08:41:25 -0500 )

It happens automatically during system setup. I didn't touch the routing table at all.
p37p1 -> eth0 (external && management network)
p4p1 -> eth1 (GRE tunnel endpoints between cluster nodes)
It is a working cluster right now.

dbaxps ( 2014-10-02 08:50:55 -0500 )

Interesting. I wonder why it is not doing so for me. Which part of the set-up is supposed to add these automatically?

I've shown my modified routing tables above.

mickt ( 2014-10-02 08:57:26 -0500 )
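On Ubuntu the usual way to make this survive reboots is to move the address from eth0 to br-ex in /etc/network/interfaces (a sketch based on the addresses shown in the question; the gateway 10.10.12.1 is an assumption, substitute your real one):

auto eth0
iface eth0 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down

# the gateway below is assumed; use your site's actual gateway
auto br-ex
iface br-ex inet static
    address 10.10.12.7
    netmask 255.255.255.0
    gateway 10.10.12.1

With this in place br-ex gets its address and the default route at boot, which should remove the boot-to-boot inconsistency described in the question.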

Please view the setup procedure via RDO packstack http://bderzhavets.blogspot.com/2014/... . I had to fix only two files, /etc/neutron/ml2_conf.ini && /etc/neutron/plugin.ini, on the Controller (for you, the Neutron node) && on the Compute node. The rest was done for me by packstack.

dbaxps ( 2014-10-02 09:02:57 -0500 )

See also:
[1] http://kashyapc.fedorapeople.org/virt...
[2] http://kashyapc.fedorapeople.org/virt...
In particular, see the routing table on the Controller (Neutron node) in [1].

dbaxps ( 2014-10-02 09:43:00 -0500 )
