Should the tenant router gateway be accessible from the network node itself?

I am trying to set up the 3-node (controller, neutron and compute) OpenStack configuration on 3 virtual machines running on VirtualBox. I have enabled promiscuous mode on the external network as suggested in the manual. The verify-connectivity step ( http://docs.openstack.org/juno/install-guide/install/apt/content/neutron_initial-networks-verify.html ) says that "you should be able to ping the IP address of the tenant router gateway from any host on your external physical network." I am able to do that from the controller and compute nodes, which are on the external network. But when I try to do it from the network node (which hosts the tenant router), the ping fails. Does this mean that there is an error in my setup? If so, any pointers on how to troubleshoot it would be very welcome.
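
Side note for anyone reproducing this: on the network node the tenant router lives inside a qrouter network namespace, so a plain ping from the node's root namespace does not originate inside the router. A rough sketch of testing from within the namespace; the router ID and the gateway IP below are placeholders for whatever your deployment uses:

# List the namespaces on the network node; the tenant router is qrouter-<UUID>
ip netns

# Ping the router's gateway port from inside its own namespace
# (substitute your router's UUID and your gateway address)
sudo ip netns exec qrouter-<router-id> ping -c 4 192.168.16.101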

Edit 1 (in reply to the response) - After configuring the network node as described in the manual, I am able to launch an instance, but the network node no longer connects to the external network, and so the instance cannot either. I have read through similar discussions on this forum and elsewhere but could not find a clear solution. I think I am not bridging the interfaces correctly. I am including the /etc/network/interfaces file of the network node as well as the output of some network-related commands that are often requested in such situations.

Configuration detail - 3 VMs running Ubuntu 14.04 on VirtualBox. The management and tunnel networks are on eth1 and eth2 and are configured as VirtualBox internal networks, with static IPs on 10.0.0.0/24 and 10.0.1.0/24 respectively. The external network is on eth0 and uses the NAT Network adapter that VirtualBox provides ( https://www.virtualbox.org/manual/ch06.html#network_nat_service ). With it, the host and guests can see each other, and the guests can reach the internet through the host (similar to the plain NAT adapter in that respect). This works on the compute and controller nodes, and it used to work on the network node before I configured neutron there. I set a static IP on eth0 in the 192.168.16.0/24 subnet; part of that range is used for the virtual external network on the network node.
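
For reference, the external network was created with the Juno guide's commands adapted to this subnet, roughly as follows; the allocation-pool boundaries shown here are illustrative, not necessarily the exact values used:

# Create the external network as a flat provider network (Juno guide syntax)
neutron net-create ext-net --router:external True \
  --provider:physical_network external --provider:network_type flat

# Carve the floating-IP range out of 192.168.16.0/24 (pool values are examples)
neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=192.168.16.100,end=192.168.16.200 \
  --disable-dhcp --gateway 192.168.16.1 192.168.16.0/24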

**/etc/network/interfaces**

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# External network
auto eth0
iface eth0 inet static
address 192.168.16.6
netmask 255.255.255.0
gateway 192.168.16.1
dns-nameservers 128.46.154.76 128.210.11.5 128.210.11.57

# Management interface
auto eth1
iface eth1 inet static
address 10.0.0.21
netmask 255.255.255.0

# VM tunneling interface
auto eth2
iface eth2 inet static
address 10.0.1.21
netmask 255.255.255.0

**Output of $ ip a**

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 08:00:27:e8:86:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.6/24 brd 192.168.16.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fee8:863b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:96:3e:85 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.21/24 brd 10.0.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe96:3e85/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:cd:0b:20 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.21/24 brd 10.0.1.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fecd:b20/64 scope link 
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default 
    link/ether 4a:ce:2b:0b:76:7b brd ff:ff:ff:ff:ff:ff
6: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 08:00:27:e8:86:3b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5053:71ff:feee:909e/64 scope link 
       valid_lft forever preferred_lft forever
8: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether c6:bc:cd:70:50:4f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4a7:8fff:fe7f:28dd/64 scope link 
       valid_lft forever preferred_lft forever
13: br-tun: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether da:0c:c7:e9:6f:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e85f:bcff:fe4b:36c4/64 scope link 
       valid_lft forever preferred_lft forever

I came across a comment that eth0 should not show an IP as above; instead, br-ex should carry the IP on the external network. But I could not find any official documentation to support this. I also queried the status of the tenant router gateway port using neutron port-show <port-id> and got the status DOWN.
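
For anyone checking the same thing, a couple of standard Open vSwitch and neutron client commands help confirm the bridge wiring (nothing here is specific to this setup):

# Show the OVS topology; eth0 should appear as a port on br-ex,
# alongside a qg-XXXX port for the router gateway
sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-ex

# Check that the L3 and DHCP agents are alive (run against the controller)
neutron agent-list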

Any pointers in the right direction would be greatly appreciated.

Edit 2 - I found the solution. I had initially set a static IP for the external network interface on the network node because I needed internet access to install packages, so the bridge was not correctly configured. I deleted br-ex and br-tun and then added them back as described in the manual. I then edited my interfaces file (shown after the command sketch below), moving the static IP to br-ex. Restart the eth0 and br-ex interfaces and the neutron services, and possibly the Open vSwitch agent and service on the compute node as well. Then delete the old networks and subnets and create new ones. I am now able to SSH into my instance and ping the external network from it. One issue remains: my instance can ping external IP addresses, but it cannot resolve names such as "google.com".
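
The bridge rebuild amounted to something like the following; the service names are the Ubuntu 14.04 / Juno package names, and the exact set you need to restart may differ:

# Recreate the external bridge and re-attach the physical interface
sudo ovs-vsctl del-br br-ex
sudo ovs-vsctl del-br br-tun   # the OVS agent recreates br-tun on restart
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth0

# Re-read the new interfaces file and restart the agents
sudo ifdown eth0 && sudo ifup eth0
sudo ifup br-ex
sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-l3-agent restart
sudo service neutron-dhcp-agent restart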

**/etc/network/interfaces** (updated)

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# External network (the public network interface)
auto eth0
iface eth0 inet manual
# Bring eth0 up with no IP of its own; the address now lives on br-ex
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
address 192.168.16.6
netmask 255.255.255.0
gateway 192.168.16.1
dns-nameservers 128.46.154.76 128.210.11.5 128.210.11.57

# Management interface
auto eth1
iface eth1 inet static
address 10.0.0.21
netmask 255.255.255.0

# VM tunneling interface
auto eth2
iface eth2 inet static
address 10.0.1.21
netmask 255.255.255.0
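
On the remaining DNS issue: instances pick up resolv.conf from the tenant subnet's DNS settings, so a likely fix is to set nameservers on that subnet. A sketch, assuming the subnet name `demo-subnet` from the Juno guide; the nameserver addresses are just examples:

# Point the tenant subnet at reachable DNS servers, then renew the
# instance's DHCP lease (or reboot it) to pick up the new resolv.conf
neutron subnet-update demo-subnet --dns-nameservers list=true 8.8.8.8 8.8.4.4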
