RDO PIKE: failing north-south connectivity when using VLAN isolation

asked 2018-01-25 12:18:22 -0500 by holger-king, updated 2018-02-12 02:47:29 -0500

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO runs with a single vNIC "eth0", through which both management and tenant traffic connect to the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as the ML2 type driver and use "openvswitch" as the ML2 mechanism driver. Accordingly, we set the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.

The problem: PING ICMP packets from a virtual tenant-specific router (with the IPs "" (= FIP) and "" bound to port "qg-e15cae7f-43" of "br-int") to the host IP (here: "", assigned to port "eth0" on "br-ext") of the machine running the OpenStack RDO all-in-one do not reach their target (for details, see the enclosed screenshot). In contrast, PING ICMP requests from the router to a floating IP (here: "", assigned to a running guest VM) attached to "br-int" succeed, because that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see the "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that is hit when the PING starts is the following (see attached console output):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by watching the packet counter "n_packets", which increases with each ICMP packet.
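The counter can be watched directly (a diagnostic sketch; the bridge and table are taken from the flow rule above, but this has to be run on the deployment itself):

```shell
# Dump the table-60 flows of br-int; re-run while the PING is active
# and n_packets of the priority=3 NORMAL rule should keep increasing.
ovs-ofctl dump-flows br-int table=60

# Or refresh automatically every second:
watch -n 1 'ovs-ofctl dump-flows br-int table=60'
```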

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port connecting "br-ext" with "br-int"), where the following flow rule is hit (see attached console output):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is part of a bridge, an IP address cannot be applied to it directly (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). An IP address assigned to "eth0" becomes inaccessible once the interface is placed in a bridge - here "br-ext". Instead, the IP address should be applied to the bridge itself.
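A minimal sketch of the two ifcfg files, assuming the OVS integration of the RHEL/CentOS network scripts (device names are taken from this question, IP values are placeholders):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 - no IP: eth0 is only an OVS port
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ext
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br-ext - the host IP lives on the bridge
DEVICE=br-ext
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=<host_ip>
NETMASK=<netmask>
```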



Can you ping your openstack router external port IP address from your network? You'll need to do this from the same subnet. If you cannot ping, do you get ARP resolving?

tony.pearce (2018-01-28 03:05:17 -0500)

No, we cannot PING the virtual router's external IP "" (namespace: qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea) from either the PHYSICAL network or the HOST. The incoming PING request reaches "br-ext" on port "eth0" but never reaches "br-int", to which the vRouter is attached.

holger-king (2018-01-28 03:36:32 -0500)

Your problem is that you cannot ping from "" to "", but you can ping from "" to "". This looks like a firewall issue to me, so my mind goes to security groups. As I remember, incoming ICMP is blocked in the default security group. Check this parameter.

tze (2018-01-29 03:20:48 -0500)

Pinging from "" to "" and from "" to "" is not possible! We had already adapted the default security group to allow ICMP/SSH communication ingress + egress to the guest VM attached to the virtual router ("qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea").

holger-king (2018-01-29 03:40:38 -0500)

When pinging from the host ("") to the FIP ("") bound on the virtual router, the ICMP packet does not reach "br-int", to which the virtual tenant router (namespace: qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea) is attached.

holger-king (2018-01-29 04:22:15 -0500)
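To narrow down where such a packet is lost, tcpdump can be attached to both ends (a diagnostic sketch; the interface and namespace names are taken from this thread and have to match your deployment):

```shell
# On the host: does the ICMP request show up on eth0/br-ext at all?
tcpdump -n -e -i eth0 icmp

# Inside the router namespace: does it ever arrive on the qg- port?
ip netns exec qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea \
    tcpdump -n -e -i qg-e15cae7f-43 icmp
```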

1 answer


answered 2018-02-09 10:04:59 -0500 by holger-king, updated 2018-02-11 06:36:56 -0500

Important to understand when using VLAN isolation:

The OpenStack networking guide mentions under "Self-service networks" that, in the case of VLAN isolation, even provider networks get a tagged VLAN assigned:

Networking allows users to create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers, and other networking infrastructure on the same layer 2 VLAN.

As a VLAN ID is assigned to the provider network too, it has to match the VLAN ID used in the physical network to get north-south connectivity. When using PACKSTACK as the deployment tool, the following configuration directive ("CONFIG_NEUTRON_ML2_VLAN_RANGES"):


has to offer a range containing the VLAN ID used in the physical network (here: 3909), e.g.:


The other VLANs of the range are used for:

  • project/tenant networks and their isolation, and
  • optionally, additional isolated provider networks

allowing overlapping IP ranges. In general, the provider network offering the floating IP address pool is marked shared so it can be used by different projects/tenants. That is why a single provider network is usually all that is needed.

After the deployment, when creating the provider network, the VLAN ID used in the physical network has to be set explicitly; otherwise an automatically assigned value may not match the physical VLAN ID.

This can be achieved via:

openstack network create --provider-network-type vlan --provider-physical-network bcn --provider-segment 3909 <name_of_the_provider_network>

during network creation time where the following PACKSTACK configuration directives come into play:

  • "--provider-network-type" = "CONFIG_NEUTRON_ML2_TYPE_DRIVERS"
  • "--provider-physical-network" = name of physnet optionally given in "CONFIG_NEUTRON_OVS_EXTERNAL_PHYSNET"
  • "--provider-segment" = a VLAN ID out of "CONFIG_NEUTRON_ML2_VLAN_RANGES"
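The result can be checked right after creation (the network name is a placeholder; "provider:segmentation_id" has to show the physical VLAN ID, here 3909):

```shell
openstack network show <name_of_the_provider_network> \
    -c provider:network_type \
    -c provider:physical_network \
    -c provider:segmentation_id
```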


Important notes:

  • changing the provider values via "network set" after the network has been created is not possible with the standard ML2 plugin - an error will occur that prevents it! :-(
  • the provider networks should be created first to prevent project/tenant networks automatically choosing a VLAN ID later explicitly set in a provider network!
  • when RDO is running virtualized on VMware, the port group the vNIC for north-south connectivity is bound to has to be configured as trunk (=VLAN 4095) as the VMware vSwitch acts as a switch using port-based untagged VLAN!
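Following the notes above, the resulting creation order can be sketched like this (the network names are illustrative placeholders; physnet "bcn" and VLAN 3909 are taken from this thread):

```shell
# 1. Create the provider network first, pinning the physical VLAN ID
openstack network create --external --share \
    --provider-network-type vlan \
    --provider-physical-network bcn \
    --provider-segment 3909 provider-net

# 2. Only then create project/tenant networks; they auto-allocate one of
#    the remaining VLAN IDs from CONFIG_NEUTRON_ML2_VLAN_RANGES
openstack network create tenant-net
```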


I'm glad that you could solve the problem. :)

belissner (2018-02-10 02:43:44 -0500)

Me tooooo :) :) :) Thanks to your hints yesterday!

holger-king (2018-02-10 04:11:26 -0500)
