RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

After having deployed an all-in-one setup of "Red Hat Distributed OpenStack" (RDO), release Pike, via this Packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)
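
Both bridges and the patch ports that connect them can be inspected with the standard OVS tooling, for example:

ovs-vsctl show                 # overall topology, incl. patch ports and port VLAN tags
ovs-vsctl list-ports br-int    # ports attached to the integration bridge
ovs-vsctl list-ports br-ext    # ports attached to the provider bridge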

We rely on "vlan" instead of "vxlan" as the ML2 type driver and use "openvswitch" as the ML2 mechanism driver. Accordingly, we specified the following configuration directive in the referenced Packstack answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010
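
For reference, Packstack renders this directive into the Neutron ML2 plug-in configuration. On a deployment like ours it should end up roughly as follows (an assumed excerpt, including the bridge mapping we expect for the physical network "bcn"):

# /etc/neutron/plugins/ml2/ml2_conf.ini (assumed excerpt)
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = bcn:1000:1010

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (assumed excerpt)
[ovs]
bridge_mappings = bcn:br-ext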

The problem: ICMP ping packets from a tenant-specific virtual router (here: "10.116.64.101/24") to the host IP of the machine that runs the RDO all-in-one deployment (here: "10.116.64.10/24", assigned to port "eth0" on "br-ext") do not reach their target (see the enclosed screenshot for details). ICMP ping requests to the gateway IP of the virtual router (here: "10.116.64.110"), in contrast, succeed, as that traffic stays local and never reaches "br-ext".
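
For reference, the failing test can be reproduced from within the network namespace of the virtual router (the namespace name is listed further below):

ip netns exec qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea ping -c 3 10.116.64.10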

Back to the failing ping requests to IP address "10.116.64.10": the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets tagged with VLAN 3 (see the "ovs-vsctl show" output) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that matches when the ping request starts is (see the attached console output):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by looking at the packet counter "n_packets", which increases with each ICMP packet.
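
A convenient way to watch the counter live is to repeatedly dump the relevant flow table (bridge and table number taken from above):

watch -d "ovs-ofctl dump-flows br-int table=60"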

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext", the patch port that connects "br-ext" with "br-int"), where the following flow rule matches (see the attached console output):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 tagged with VLAN ID 3 are retagged to VLAN ID 1010, the ID allocated from CONFIG_NEUTRON_ML2_VLAN_RANGES (see the actions field "mod_vlan_vid:1010"). And this is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is attached to the same bridge "br-ext".
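
The path a packet takes on "br-ext" can also be simulated without sending real traffic; a sketch, where the two MAC addresses are made-up placeholders:

ovs-appctl ofproto/trace br-ext "in_port=2,dl_vlan=3,dl_src=fa:16:3e:00:00:01,dl_dst=fa:16:3e:00:00:02,dl_type=0x0800,nw_src=10.116.64.101,nw_dst=10.116.64.10,nw_proto=1"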

When we re-deploy RDO after having cleared the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

the creation of networks using the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In the "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST fai
led.: failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
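
As far as we understand the error, an empty VLAN range list leaves Neutron with no segments to allocate for tenant networks. A network with explicit provider attributes might still be creatable, e.g. (a sketch; the network name is ours, and a flat network would additionally require the "flat" type driver and "bcn" listed under "flat_networks" in ml2_conf.ini):

openstack network create --external --provider-network-type flat --provider-physical-network bcn external_network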

Our questions:

  • How can we solve this problem without manually changing flow rules on OVS bridges that are under OpenStack's control?
  • Is there a misconfiguration in the Packstack answer file?
  • Is this maybe a bug when using VLAN as the ML2 type driver?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

In case, we re-deploy RDO after having cleared the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not the right approach.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

In case, we re-deploy RDO after having cleared Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and redeploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not the right approach.helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and redeploying re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver?driver as the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver as the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver as the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). . Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver as the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010"?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver as the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010"?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a maybe a bug when using VLAN as ML2 type driver as driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010"?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it maybe a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010"?"br-int:1000:1010?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it maybe a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010?"br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve this problem without manually changing flow rules on OVS bridges that are under the control of OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is this a bug when using VLAN as ML2 type driver? It seems the flow rules on the "br-ext" Open vSwitch are not set up correctly by OpenStack.
  • Where are the VLAN IDs set via the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved (untagged)?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was a an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?vSwitch?
    • Where are the VLAN IDs dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
    • Where are the VLAN IDs dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
      vSwitch?
    • Where are the VLAN IDs dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to CONFIG_NEUTRON_ML2_VLAN_RANGES "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive CONFIG_NEUTRON_ML2_VLAN_RANGES "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed.guaranteed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110") "10.116.64.110" - a floating IP for a running guest VM) can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.110/32" (=FIP) and "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.110/32" (=FIP) and "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed. ). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) can be reached successfully as the traffic stays locally and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (here: "10.116.64.110/32" (=FIP) and "10.116.64.101/24") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems.

Important hint: the default security group used here has been adapted in a way where ICMP communication ingress and egress is allowed. Additionally, SSH access has been guaranteed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one setup based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (a fresh installation, not an upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. and tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 ("phy-br-ext": the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. and tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to "CONFIG_NEUTRON_ML2_VLAN_RANGES" - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. and tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. and tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. and tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. and tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth1 port not 22 | grep 10.116.64.11

we see no ARP replies for the requested IP address "10.116.64.11" although the IP address is bound to "br-ext" on port "eth1":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1001, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.11 tell 10.116.64.105, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1001, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.11 tell 10.116.64.105, length 28

Hint: as interface "eth1" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth1", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface.

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth1 port not 22 | grep 10.116.64.11

we see no ARP replies for the requested IP address "10.116.64.11" although the IP address is bound to "br-ext" on port "eth1":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1001, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.11 tell 10.116.64.105, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1001, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.11 tell 10.116.64.105, length 28

Hint: as interface "eth1" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth1", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface.

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 carrying VLAN ID 3 are re-tagged - according to the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - with VLAN ID 1010 (see the actions field "mod_vlan_vid:1010"). And here is the root cause: since the host IP "10.116.64.10" is not assigned to VLAN 1010, the packet can never reach the target IP "10.116.64.10" on port "eth0", which is connected to the same bridge "br-ext".

Checking for ARP requests on port "eth0" via:

tcpdump -nneevvi eth0 port not 22 | grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.105, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.105, length 28

Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface.

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it leads to more problems. That's why we switched back to a deployment with the value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are managed by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly set up by OpenStack on the "br-ext" Open vSwitch.
  • Where are the VLAN IDs set in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (fresh installation - no upgrade) via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest VM with a single vNIC "eth0" over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010
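
The directive alone only defines the physical network name and the VLAN tag range; it becomes effective through the bridge-mapping directives that packstack writes into the Neutron configuration. A minimal sketch of how the pieces are supposed to fit together (the mapping values below are assumptions on our side, not taken verbatim from the referenced answer file):

# Packstack answer file: map physical network "bcn" to the provider bridge
# and attach the uplink NIC to that bridge (values for illustration):
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=bcn:br-ext
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ext:eth0

# Resulting server-side setting in /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2_type_vlan]
network_vlan_ranges = bcn:1000:1010

# Resulting agent-side setting in /etc/neutron/plugins/ml2/openvswitch_agent.ini:
[ovs]
bridge_mappings = bcn:br-ext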

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") of the machine the OpenStack RDO all-in-one runs on do not reach their target (see the enclosed screenshot for details). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, as that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and verified by taking a look at the packet counter "n_packets", which increases with each ICMP packet.
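
For reproducibility: the counter was watched while pinging from inside the router's network namespace, along these lines (namespace ID as above):

# Ping the host IP from within the virtual router's namespace:
ip netns exec qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea ping -c 3 10.116.64.10

# In a second shell, dump the flows again and compare "n_packets":
ovs-ofctl dump-flows br-int table=60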

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.105, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.105, length 28

Hint: Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface.

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.105, 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.105, 10.116.64.101, length 28

Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface.

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface.interface (quoted out of "Learning OpenStack Networking (Neutron) 2nd Edition").

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (quoted out of "Learning OpenStack Networking (Neutron) 2nd Edition").

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And here is the root cause: as the host IP "10.116.64.10" is not assigned to a VLAN 1010 the packet could never reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext".

Checking for ARP requests on port "eth1" "eth0" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (quoted out of "Learning OpenStack Networking (Neutron) 2nd Edition").

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth1"?"/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010"). And ).

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here is the root cause: as the host "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking (Neutron) 2nd Edition). A step that PACKSTACK did for "eth0" during the RDO deployment.

By default ports in OVS bridges are "trunk" ports. That means that ICMP requests to IP address "10.116.64.10" is not assigned to a VLAN 1010 the packet could never bound directly to the provider bridge "br-ext" can reach the target IP "10.116.64.10" on port "eth0" connected to the same bridge "br-ext"."eth0".

Checking for ARP requests on port "eth0" (bound as additional port to "br-ext" after the packstack deployment has finished) via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

Hint: as interface "eth0" will be used in a bridge, an IP address cannot be applied directly to it. If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge. Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (quoted out of "Learning OpenStack Networking (Neutron) 2nd Edition").

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking (Neutron) 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0" during the RDO deployment."eth0".

By default ports in OVS bridges are "trunk" ports. ports! That means that our ICMP requests to IP address "10.116.64.10" bound (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking (Neutron) 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see (see p. 80 of Learning OpenStack Networking (Neutron) Neutron 2nd Edition). Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth1" "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10", the "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too.too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN handling on OVS provider bridge

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: VLAN ARP handling on OVS provider bridgebridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to the gateway IP of the virtual router (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ARP handling on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

After deploying an all-in-one setup of "Red Hat Distributed OpenStack" (RDO), version PIKE (a fresh installation, not an upgrade), via this packstack answer file, we observe connectivity problems on the provider bridge "br-ext" after packets have passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO runs as an ESXi guest VM with a single vNIC "eth0", over which both mgmt. and tenant traffic reach the "physical" network (here: a VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010
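
For context, this directive is what packstack renders into the ML2 and OVS agent configuration. On our host the relevant sections should look roughly like this (a sketch, assuming packstack's default file locations and a bridge mapping of "bcn:br-ext"):

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_vlan]
network_vlan_ranges = bcn:1000:1010

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = bcn:br-ext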

The problem: ICMP echo requests (PING) from a tenant-specific virtual router (with the IPs "10.116.64.110/32" (= FIP) and "10.116.64.101/24" bound to port "qg-e15cae7f-43" on "br-int") to the host IP (here: "10.116.64.10/24", assigned to port "eth0" on "br-ext") of the machine running the OpenStack RDO all-in-one never reach their target (see the enclosed screenshot for details). In contrast, ICMP echo requests from the router to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" succeed, because that traffic stays local to the virtual router and never reaches "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets tagged with VLAN 3 (see the "ovs-vsctl show" output) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that is hit once the PING starts is (see the attached console output):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed by watching the packet counter "n_packets", which increases with each ICMP packet.
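
For reproduction, we dumped the relevant flow table in a loop while the PING was running and watched the counter climb (a sketch; the table number matches the rule quoted above):

watch -n 1 'ovs-ofctl dump-flows br-int table=60'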

The ICMP packet even reaches the provider bridge "br-ext" on port 2, "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int"), where the following flow rule is hit (see the attached console output):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, the VLAN ID of incoming packets on port 2 tagged with VLAN ID 3 is rewritten to 1010 (see the actions field "mod_vlan_vid:1010"), matching the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the packstack answer file above.
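
The two VLAN IDs involved can be cross-checked against Neutron's state (a sketch; "external" is a placeholder for the name of our provider network):

# local VLAN tag that br-int assigned to the router's gateway port
ovs-vsctl get port qg-e15cae7f-43 tag
# segmentation ID Neutron allocated from the "bcn" range
openstack network show external -c provider:segmentation_id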

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ARP handling on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ARP handling on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: missing ARP handling replies on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive:directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: missing ARP replies on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition). respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: missing ARP replies on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: missing ARP replies on OVS provider bridge with eth0 bound to it

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: missing ping triggerred ARP replies on OVS from router namespace to provider bridge with eth0 bound to itfails

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARP from router namespace to provider bridge "br-ext" fails

Dear OpenStack community,

after having deployed an all-in-one approach based on "Red Hat Distributed OpenStack" (RDO) using version PIKE (new fresh installation - no upgrade) via this packstack answer file we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks with the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

So clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - on the contrary, it causes additional problems. That is why we switched back to a deployment with the value set to "bcn:1000:1010".
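
In principle - untested on our side, and it needs admin credentials - a network could also be created with explicit provider attributes instead of relying on tenant allocation, "bcn" being the physical network name from the answer file:

openstack network create \
    --provider-network-type vlan \
    --provider-physical-network bcn \
    --provider-segment 1010 \
    development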

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

followed by a complete fresh re-deployment via packstack, did not help either.

Important hint: the default security group used here has been adapted so that ingress and egress ICMP communication is allowed. Additionally, SSH access has been permitted as well!
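
For completeness, the adaptation was done roughly along these lines (standard "openstack" CLI; the exact rules on our system may differ slightly):

# allow ICMP in both directions plus inbound SSH on the "default" group
openstack security group rule create --protocol icmp --ingress default
openstack security group rule create --protocol icmp --egress default
openstack security group rule create --protocol tcp --dst-port 22 --ingress default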

Our questions:

  • How can we solve this problem without manually changing flow rules on OVS bridges that are under OpenStack's control?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text says the prefix is a physical network name?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is this a bug in the VLAN ML2 type driver? It seems the flow rules on the "br-ext" Open vSwitch are not set up correctly by OpenStack.
  • Where are the VLAN IDs from the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" actually resolved?
  • Is it possible to share a single NIC (here: "eth0") for both management and tenant network traffic?

RDO PIKE: ping triggerred ARP from router namespace to provider bridge "br-ext" failsis not replied

Dear OpenStack community,

after having deployed deployed:

  • an all-in-one approach approach
  • based on "Red Hat Distributed OpenStack" (RDO) (RDO)
  • using version PIKE (new fresh installation - no upgrade) upgrade)
  • via this packstack answer file
  • in am ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARP from router namespace to provider bridge "br-ext" is not replied

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in am ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running as an ESXi guest-VM with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARP from router namespace to provider bridge "br-ext" is not replied

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in am an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARP from router namespace to provider bridge "br-ext" is not replied

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARP ARPs from router namespace to provider bridge "br-ext" is are not replied

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0".

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARPs from router namespace to provider bridge "br-ext" are not repliedanswered

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0"."eth0". Here, you see the network configuration scripts:

for "eth0":

[root@rb-openstack-d network-scripts]# cat ifcfg-eth0
DEVICE=eth0
NAME=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ext
ONBOOT=yes
BOOTPROTO=none

for "br-ext": [root@rb-openstack-d network-scripts]# cat ifcfg-br-ext NM_CONTROLLED=no ONBOOT=yes IPADDR=10.116.64.10 NETMASK=255.255.255.0 GATEWAY=10.116.64.1 PEERDNS=yes DOMAIN="bosch.com de.bosch.com" DEVICE=br-ext NAME=br-ext DEVICETYPE=ovs OVSBOOTPROTO=static TYPE=OVSBridge OVS_EXTRA="set bridge br-ext fail_mode=standalone"

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARPs from router namespace to provider bridge "br-ext" are not answered

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0". Here, you see the network configuration scripts:

for "eth0":

[root@rb-openstack-d network-scripts]# cat ifcfg-eth0
DEVICE=eth0
NAME=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ext
ONBOOT=yes
BOOTPROTO=none

for "br-ext": "br-ext":

[root@rb-openstack-d network-scripts]# cat ifcfg-br-ext
 NM_CONTROLLED=no
 ONBOOT=yes
 IPADDR=10.116.64.10
 NETMASK=255.255.255.0
 GATEWAY=10.116.64.1
 PEERDNS=yes
 DOMAIN="bosch.com de.bosch.com"
 DEVICE=br-ext
 NAME=br-ext
 DEVICETYPE=ovs
 OVSBOOTPROTO=static
 TYPE=OVSBridge
 OVS_EXTRA="set bridge br-ext fail_mode=standalone"

fail_mode=standalone"

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for PING triggered ARP requests on port "eth0" during the PING run via:

tcpdump -nneevvi eth0 port not 22| grep 10.116.64.10

we see no ARP replies for the requested IP address "10.116.64.10" although the IP address is bound to "br-ext" on port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") does not get the MAC address for "10.116.64.10". But why?

Our assumption to get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO was an erroneous belief. Afterwards the creation of networks using "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

That means clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - no, it leads to more problems. That's why we switched back to a deployment with a value set to "bcn:1000:1010".

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

including a complete fresh re-deployment via packstack didn't help either.

Important hint: the default security group used here has been adapted in a way where ingress and egress ICMP communication is allowed. Additionally, SSH access has been allowed, too!

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010 although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is it a bug when using VLAN as ML2 type driver? It seems the flow rules are not correctly setup by OpenStack on the "br-ext" Open vSwitch?
  • Where are the VLAN IDs set in packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" dissolved?
  • Is it possible to share a single NIC (here: "eth0") for mgmt. + tenant network traffic?

RDO PIKE: ping triggerred ARPs from router namespace to provider bridge "br-ext" are not answered

Dear OpenStack community,

after having deployed:

  • an all-in-one approach
  • based on "Red Hat Distributed OpenStack" (RDO)
  • using version PIKE (new fresh installation - no upgrade)
  • via this packstack answer file
  • in an ESXi guest VM

we observe connectivity problems on the provider bridge "br-ext" after packets passed:

  • from: "br-int" (integration bridge)
  • to: "br-ext" (provider bridge)

RDO is running with a single vNIC "eth0" where mgmt. + tenant traffic are connected to the "physical" network (here: VMware vSwitch).

We rely on "vlan" instead of "vxlan" as ML2 type driver and use "openvswitch" as ML2 mechanism driver. So, we specified the following PACKSTACK configuration directive in the referenced PACKSTACK answer file:

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=bcn:1000:1010

The problem: PING ICMP packets from a virtual tenant-specific router (having the following IPs "10.116.64.110/32" (=FIP) and "10.116.64.101/24" bound on port "qg-e15cae7f-43" of "br-int") to the host IP (here: "10.116.64.10/24" assigned to port "eth0" on "br-ext") where OpenStack RDO all-in-one is running on, are not reaching their target (details see screen shot enclosed). Instead, PING ICMP requests from the router-local to its gateway IP (here: "10.116.64.110" - a floating IP for a running guest VM) attached to "br-int" can be reached successfully as the traffic stays locally on the virtual router and does not reach "br-ext".

Back to the problem with the failing PING requests to IP address "10.116.64.10". The virtual router with the following namespace:

  • qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea

sends the ICMP packets using VLAN 3 (see "ovs-vsctl show" command) to interface "qg-e15cae7f-43" (= port 7 of "br-int"). The flow rule on "br-int" that hits when starting the PING request is (see console output attached):

cookie=0xa4b1eeacb0914dc5, duration=76991.147s, table=60, n_packets=1277, n_bytes=85073, priority=3 actions=NORMAL

This can be observed and proven when taking a look on the packet counter "n_packets" that just increases with each ICMP packet.

The ICMP packet even reaches the provider bridge "br-ext" on port 2 via "phy-br-ext" (the virtual patch port that connects "br-ext" with "br-int") where the following flow rule hits (see console output attached):

cookie=0x2714bb48d4bacd8e, duration=4319.065s, table=0, n_packets=792, n_bytes=33320, priority=4,in_port=2,dl_vlan=3 actions=mod_vlan_vid:1010,NORMAL

Here, incoming packets on port 2 with VLAN ID 3 are modified in a way where the VLAN ID is - according to configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" in the above packstack answer file - changed to ID 1010 (see actions field "mod_vlan_vid:1010").

As interface "eth0" is used in a bridge, an IP address cannot be applied directly to it (see "/etc/sysconfig/network-scripts/ifcfg-eth0"). If there is an IP address applied to "eth0", it will become inaccessible once the interface is placed in a bridge - here "br-ext". Instead, consider applying the IP address to the bridge if you must have connectivity to this interface (see p. 80 of Learning OpenStack Networking Neutron 2nd Edition respectively http://docs.openvswitch.org/en/latest/faq/issues/). A step that PACKSTACK did during the deployment of RDO for "eth0". Here, you see the network configuration scripts:

for "eth0":

[root@rb-openstack-d network-scripts]# cat ifcfg-eth0
DEVICE=eth0
NAME=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ext
ONBOOT=yes
BOOTPROTO=none

for "br-ext":

[root@rb-openstack-d network-scripts]# cat ifcfg-br-ext
NM_CONTROLLED=no
ONBOOT=yes
IPADDR=10.116.64.10
NETMASK=255.255.255.0
GATEWAY=10.116.64.1
PEERDNS=yes
DOMAIN="bosch.com de.bosch.com"
DOMAIN="xxxxx.com de.xxxxx.com"
DEVICE=br-ext
NAME=br-ext
DEVICETYPE=ovs
OVSBOOTPROTO=static
TYPE=OVSBridge
OVS_EXTRA="set bridge br-ext fail_mode=standalone"

By default ports in OVS bridges are "trunk" ports! That means that our ICMP requests to IP address "10.116.64.10" (bound directly to the provider bridge "br-ext" as described above) can reach port "eth0".

Checking for the ping-triggered ARP requests on port "eth0" while the ping is running via:

tcpdump -nneevvi eth0 port not 22 | grep 10.116.64.10

we see ARP requests but no ARP replies for the IP address "10.116.64.10", although that address is bound to the bridge "br-ext", whose uplink is port "eth0":

15:37:12.400786 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28
15:37:13.402765 fa:16:3e:c7:1d:65 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1010, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.116.64.10 tell 10.116.64.101, length 28

And here is the root cause: the virtual router (see namespace "qrouter-cb4e7a46-1d06-4618-bbe6-37e1ef7894ea") never learns the MAC address for "10.116.64.10". But why?
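
One additional check that narrows this down (a sketch, not verified output): capture directly on the bridge-internal interface "br-ext", where the host IP actually lives. If the ARP requests arrive there still tagged with VLAN 1010, the kernel will not answer them on behalf of the untagged address "10.116.64.10":

tcpdump -nnei br-ext arp and host 10.116.64.10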

Our assumption that we could get rid of this issue by clearing the configuration directive:

CONFIG_NEUTRON_ML2_VLAN_RANGES=

and re-deploying RDO turned out to be wrong. Afterwards, creating networks via the "openstack" CLI fails:

(openstack) network create development
Error while executing command: Service Unavailable (HTTP 503) (Request-ID: req-533064de-8ced-4ab3-97cb-afc9740d3124)

In "/var/log/neutron/server.log" we see:

2018-01-26 19:36:05.287 4815 ERROR neutron.pecan_wsgi.hooks.translation [req-7fe1dd7a-4080-40f5-8e7d-6d6ed936144c 5862ebe761904784aacf8cb50267fb08 f665ee0c6fc541fe8fe2d4b9a9e3910f - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

So clearing "CONFIG_NEUTRON_ML2_VLAN_RANGES" is not helpful - it only leads to more problems, which is plausible: with "vlan" among the tenant network types but no VLAN range configured, Neutron has no segment left to allocate. That is why we switched back to a deployment with the value set to "bcn:1000:1010".
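
The relevant ML2 settings can be cross-checked like this (a sketch; same assumed RDO path as above):

grep -E "type_drivers|tenant_network_types|network_vlan_ranges" /etc/neutron/plugin.ini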

Adding "br-ext" to configuration directive (see packstack answer file):

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ext

followed by a completely fresh re-deployment via packstack, didn't help either.

Important hint: the default security group used here has been adapted so that ingress and egress ICMP traffic is allowed. SSH access has been allowed as well.
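
For completeness, rules equivalent to this adaptation can be created with the "openstack" CLI roughly as follows (a sketch; "default" refers to the project's default security group):

openstack security group rule create --protocol icmp --ingress default
openstack security group rule create --protocol icmp --egress default
openstack security group rule create --protocol tcp --dst-port 22 --ingress default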

Our questions:

  • How could we solve that problem without manually changing flow rules on OVS bridges that are under control by OpenStack?
  • Is there a port misconfiguration between "eth0" and "br-ext" regarding "/etc/sysconfig/network-scripts/ifcfg-br-ext" and "/etc/sysconfig/network-scripts/ifcfg-eth0"?
  • Is there a misconfiguration in the PACKSTACK answer file?
  • Should the configuration setting for "CONFIG_NEUTRON_ML2_VLAN_RANGES" be changed to "br-int:1000:1010", although the description text talks about the physical network name as a prefix?
  • Is clearing the configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" the right approach?
  • Is this a bug in the VLAN ML2 type driver? It seems the flow rules on the "br-ext" Open vSwitch are not set up correctly by OpenStack.
  • Where are the VLAN IDs defined in the packstack configuration directive "CONFIG_NEUTRON_ML2_VLAN_RANGES" resolved?
  • Is it possible to share a single NIC (here: "eth0") for management and tenant network traffic?
