Neutron router can't reach external network - Kilo

Hi guys, this is not the first time I've faced this issue - I tried Juno before and now Kilo. I installed a multi-node (3 nodes: controller, network and compute) OpenStack environment per the Kilo installation guide on my Mac via VirtualBox. My Mac connects to the internet via wifi, and VirtualBox's NAT network is 10.0.3.0/24, with the DNS server assigned by the company wifi network.

I've set up the Neutron network according to the guide; here is the network interface configuration on the network node:

    # The external interface, attached to br-ex
    auto eth2
    iface eth2 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ip link set $IFACE promisc off
        down ifconfig $IFACE down

    # The Open vSwitch external bridge
    auto br-ex
    iface br-ex inet dhcp

eth2 is the external NIC of the network node, connected to VirtualBox's NAT network. All 3 nodes have one NIC on this network (10.0.3.0/24): the controller is 10.0.3.16, network 10.0.3.12 and compute 10.0.3.11. The private network (demo-subnet) I defined is 172.16.1.0/24; the external gateway IP of demo-router is 10.0.3.101 and its private gateway IP is 172.16.1.1.

Now I have launched one instance (VM1) in the demo tenant, with private IP 172.16.1.3/24 and floating IP 10.0.3.102/24. Basic connectivity tests gave the following results:

1) VM1 can ping 10.0.3.101 (the external gateway of demo-router) and 10.0.3.12 (network node public IP);

2) VM1 can NOT ping 10.0.3.1;

3) VM1 can NOT ping 10.0.3.16 (controller node) or 10.0.3.11 (compute node);

4) Within the qrouter network namespace, demo-router can ping 10.0.3.12 (network node) but can NOT ping 10.0.3.1; 10.0.3.1 did send an ARP reply towards the OVS bridge br-ex, but demo-router never received it. The same happens when pinging 10.0.3.16 (controller node);

5) The network node can ping 10.0.3.101 (external gateway IP of demo-router) and 10.0.3.102 (instance VM1);

6) The controller node can NOT ping 10.0.3.101 or 10.0.3.102.
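For reference, tests 4) and 5) were run from the router's namespace on the network node; a sketch of the commands (the qrouter UUID is demo-router's, as shown by `neutron router-list`):

```shell
# demo-router's UUID, from `neutron router-list` on the controller
ROUTER_ID=9bbc99ea-69ca-4a23-8fb7-c0b2ef7425dc
NS="qrouter-${ROUTER_ID}"

# These only make sense as root on the network node, where the
# namespace exists; skip silently elsewhere.
if command -v ip >/dev/null 2>&1 && ip netns list 2>/dev/null | grep -q "$NS"; then
  ip netns exec "$NS" ip addr show         # shows the qg-/qr- interfaces
  ip netns exec "$NS" ping -c 3 10.0.3.1   # NAT-network gateway: fails
  ip netns exec "$NS" ping -c 3 10.0.3.12  # network node: works
fi
```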

I have looked through many similar questions but could not find the right solution for my case, so if you have any thoughts, please be kind enough to share them. Thanks in advance!


--- Appended based on Rahul's reply ---

Hi Rahul, first, thanks for your kind reply. I had indeed forgotten one point you mentioned - enabling IP forwarding on the network node. But after making the changes and rebooting all nodes, the controller still could not ping the router gateway IP (10.0.3.101). I tried to ping the controller from the router and got the findings below. Test command on the network node (10.0.3.16 is the controller's IP within ext-net):

    ip netns exec qrouter-9bbc99ea-69ca-4a23-8fb7-c0b2ef7425dc ping 10.0.3.16

1) The controller (10.0.3.16) received the ARP request from 10.0.3.101 and unicast the ARP reply, with its own MAC address, to the MAC address of the Neutron router's external interface;

2) The interface br-ex on the network node did not see the ARP reply from the controller to the Neutron router;

3) The router's external interface did not see the ARP reply either;

4) The compute node saw the ARP request from 10.0.3.101 and did nothing (expected behaviour).

So the question is why the ARP reply packets never arrive at br-ex (I also captured on the physical interface eth2 of the network node and did not see the ARP reply there). My guess is that there is no layer-2 trunk between the VirtualBox vswitch and the OVS vswitch. Any other thoughts?
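One way to test the layer-2 hypothesis is VirtualBox's per-adapter promiscuous-mode policy: with the default "Deny", the host-side switch drops frames addressed to MAC addresses it does not know as belonging to a VM adapter, such as the router's fa:16:3e:... address living behind eth2. A sketch, run on the Mac host; the VM name and adapter index are assumptions, and `modifyvm` requires the VM to be powered off:

```shell
# Assumed names/indices - adjust to your own VirtualBox setup.
NETWORK_VM="network"   # VirtualBox name of the network-node VM
NIC_INDEX=3            # adapter wired to the 10.0.3.0/24 NAT network (eth2)

# Only meaningful on the host where VBoxManage is installed.
if command -v VBoxManage >/dev/null 2>&1; then
  # Show the current policy; "deny" silently drops frames for
  # foreign MACs before they ever reach eth2/br-ex.
  VBoxManage showvminfo "$NETWORK_VM" --machinereadable | grep "nicpromisc${NIC_INDEX}"
  # Relax it so frames for the router's MAC are delivered:
  VBoxManage modifyvm "$NETWORK_VM" --nicpromisc${NIC_INDEX} allow-all
fi
```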


--- Adding more info about my configuration and testing, Jul 31 2015; thanks for Kiran's comments --- Starting with the output of "ovs-vsctl show":

root@network:/etc/neutron/plugins# ovs-vsctl show
2e906efa-87a8-46f4-99e4-4d8f299db833
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap842f5f3c-63"
            tag: 1
            Interface "tap842f5f3c-63"
                type: internal
        Port "qr-30408360-36"
            tag: 1
            Interface "qr-30408360-36"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a00011f"
            Interface "gre-0a00011f"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="10.0.1.31"}
    Bridge br-ex
        Port "qg-1de2b3f1-d2"
            Interface "qg-1de2b3f1-d2"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.1"
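As a quick sanity check on the wiring above, a tiny helper of my own (not an OpenStack tool) can list the ports of one bridge from saved `ovs-vsctl show` output, confirming that eth2, the router's qg- port and phy-br-ex really share br-ex:

```shell
# List the ports attached to one bridge in saved `ovs-vsctl show` output.
list_bridge_ports() {  # usage: list_bridge_ports <bridge> <file>
  awk -v br="$1" '
    /^ *Bridge / { name = $2; gsub(/"/, "", name); in_br = (name == br) }
    in_br && /^ *Port / { p = $2; gsub(/"/, "", p); print p }
  ' "$2"
}

# Abridged copy of the capture above:
cat > /tmp/ovs_show.txt <<'EOF'
    Bridge br-int
        Port br-int
            Interface br-int
    Bridge br-ex
        Port "qg-1de2b3f1-d2"
            Interface "qg-1de2b3f1-d2"
        Port phy-br-ex
            Interface phy-br-ex
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
EOF

list_bridge_ports br-ex /tmp/ovs_show.txt
# prints: qg-1de2b3f1-d2, phy-br-ex, eth2, br-ex (one per line)
```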

The destination MAC address of the ARP reply sent by the controller node exactly matched demo-router's gateway interface:

14: qg-1de2b3f1-d2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:f5:79:05 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.101/24 brd 10.0.3.255 scope global qg-1de2b3f1-d2

root@controller:~# tcpdump -eni eth1 net 10.0.3.0/24
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
09:34:39.862113 fa:16:3e:f5:79:05 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.3.16 tell 10.0.3.101, length 46
09:34:39.862138 08:00:27:a0:a0:43 > fa:16:3e:f5:79:05, ethertype ARP (0x0806), length 42: Reply 10.0.3.16 is-at 08:00:27:a0:a0:43, length 28

Also, I found there was no file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, but I did add bridge_mappings = external:br-ex in /etc/neutron/plugins/ml2/ml2_conf.ini as Rahul pointed out, and restarted the network node.
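For reference, the fragment as it now stands in ml2_conf.ini (placing it under an `[ovs]` section is my assumption of where the OVS agent reads it; the physical network name on the left must match the provider:physical_network the external network was created with):

```ini
[ovs]
bridge_mappings = external:br-ex
```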

Do you need more info? Please let me know, and thanks again for looking at my question!