Can't ping or ssh VM on provider network (Pike and Linuxbridge)

Hi there,

I have OpenStack Pike running on Ubuntu 16.04 and am trying to follow this guide https://docs.openstack.org/neutron/pike/admin/deploy-lb.html to test its HA/VRRP capabilities. Sadly, I'm stuck, since I can't access my VMs via ping or ssh. My nodes run on bare-metal servers:

Controller

# The management network interface
auto eno3
iface eno3 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1

Compute01

# The management network interface
auto eno3
iface eno3 inet static
    address 10.0.0.21
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1

# The provider network interface
auto eno2
iface eno2 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
up sysctl -w net.ipv4.ip_forward=1

# The overlay network interface
auto eno1
iface eno1 inet static
        address 10.0.1.21
        netmask 255.255.255.0
        network 10.0.1.0
        broadcast 10.0.1.255

Compute02

# The management network interface
auto eno3
iface eno3 inet static
    address 10.0.0.31
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1

# The provider network interface
auto eno2
iface eno2 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
up sysctl -w net.ipv4.ip_forward=1

# The overlay network interface
auto eno1
iface eno1 inet static
        address 10.0.1.31
        netmask 255.255.255.0
        network 10.0.1.0
        broadcast 10.0.1.255
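
For reference, the linuxbridge agent on both compute nodes maps eno2 to the provider physical network as described in the guide. The relevant part of /etc/neutron/plugins/ml2/linuxbridge_agent.ini should look roughly like this (reconstructed from the guide rather than copied verbatim from my nodes):

[linux_bridge]
physical_interface_mappings = provider:eno2

[vxlan]
enable_vxlan = true
# local_ip is the node's overlay address: 10.0.1.21 on Compute01, 10.0.1.31 on Compute02
local_ip = 10.0.1.21
l2_population = true

[securitygroup]
firewall_driver = iptables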

Since I only have three servers for my test case, I'm running the Neutron services on the Compute nodes.

openstack network agent list
+--------------------------------------+--------------------+--------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host               | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+--------------------+-------------------+-------+-------+---------------------------+
| 1e15bfd7-cad4-4ce5-bacb-2f5ee6c0c12e | Linux bridge agent | sso1compute01-test | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 71379165-39f0-475b-9a27-0afa0b909f0e | DHCP agent         | sso1compute02-test | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 7add3f94-a1c5-48f1-9162-9ede561ebd83 | L3 agent           | sso1compute01-test | nova              | :-)   | UP    | neutron-l3-agent          |
| 9aab3cec-ddf7-4ddc-ab2d-8658e4435fef | L3 agent           | sso1compute02-test | nova              | :-)   | UP    | neutron-l3-agent          |
| 9d642951-4f80-45ff-bfda-976aa78b73bc | Metadata agent     | sso1compute02-test | None              | :-)   | UP    | neutron-metadata-agent    |
| dc1b6c24-1719-40c2-a882-5650d0941827 | DHCP agent         | sso1compute01-test | nova              | :-)   | UP    | neutron-dhcp-agent        |
| e83ee7bc-d93f-4441-ad01-cf43f82bf3fe | Linux bridge agent | sso1compute02-test | None              | :-)   | UP    | neutron-linuxbridge-agent |
| f19cccff-0bee-44c0-9883-89974bdad6f0 | Metadata agent     | sso1compute01-test | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+--------------------+-------------------+-------+-------+---------------------------+

I have one external flat provider network with the subnet 203.0.113.0/24, as in the example from the guide linked above. I can launch instances and create networks without problems, and I can access my VMs through VNC. If I deploy multiple VMs on the same network, they can ping each other. The gateway is 203.0.113.1.
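
For completeness, I created the provider network and its subnet essentially as shown in the guide, roughly like this (the network name and allocation pool below are illustrative, not copied from my environment):

openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider1

openstack subnet create --subnet-range 203.0.113.0/24 \
  --gateway 203.0.113.1 --network provider1 \
  --allocation-pool start=203.0.113.101,end=203.0.113.250 \
  --dns-nameserver 8.8.4.4 provider1-v4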

When I VNC into the VM with the IP 203.0.113.15 and try to ping the gateway 203.0.113.1, I get no reply at all. Checking for traffic on the compute nodes, I see:

tcpdump -vni eno2
tcpdump: listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes
18:48:33.031098 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 203.0.113.1 tell 203.0.113.15, length 46
18:48:34.031089 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 203.0.113.1 tell 203.0.113.15, length 46
18:48:35.033832 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 203.0.113.1 tell 203.0.113.15, length 46
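
To narrow down where the ARP requests die, I plan to capture at each hop on the compute node, roughly like this (the bridge name is derived from the linuxbridge convention of brq plus the first 11 characters of the network ID, and the tap name is a placeholder, so treat both as illustrative):

# list the bridges the agent created and their member ports
# (eno2 and the instance's tap device should both appear)
brctl show

# capture ARP on the provider bridge and on the instance's tap device
tcpdump -eni brqf7cc0f45-97 arp
tcpdump -eni tapXXXXXXXX-XX arp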

I can ping the instance from the qdhcp namespace without a problem. When I check the ARP entries in that namespace, I get:

ip netns exec qdhcp-f7cc0f45-97a8-4341-8251-62cbafa60e0f arp -a
? (203.0.113.1) at <incomplete> on ns-1e01fec7-d5
? (203.0.113.21) at fa:16:3e:ee:1e:8a [ether] on ns-1e01fec7-d5
? (203.0.113.15) at fa:16:3e:a0:32:1a [ether] on ns-1e01fec7-d5

203.0.113.1 should be the gateway, .21 is the DHCP port, and .15 is the VM; note that the gateway entry stays <incomplete>.

Adding self-service networks and floating IPs results in the same problem: the floating IPs live on the external network and therefore can't reach the outside world either.

My security group for this project:

openstack security group show xxx

+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field           | Value                                                                                                                                                                                                                                          |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at      | 2018-12-03T14:23:30Z                                                                                                                                                                                                                           |
| description     | Default security group                                                                                                                                                                                                                         |
| id              | a03bdd58-909b-4e56-8cf0-b785a60dc9ad                                                                                                                                                                                                           |
| name            | default                                                                                                                                                                                                                                        |
| project_id      | 135ec3fdcfb74004b07d937cd86364b7                                                                                                                                                                                                               |
| revision_number | 10                                                                                                                                                                                                                                             |
| rules           | created_at='2018-12-03T14:23:30Z', direction='egress', ethertype='IPv6', id='30ad9309-bb7b-442c-a1ee-f842177ae572', updated_at='2018-12-03T14:23:30Z'                                                                                          |
|                 | created_at='2018-12-03T14:23:55Z', direction='ingress', ethertype='IPv6', id='42c6f44b-ecaa-436b-9208-f18b30e5631f', port_range_max='22', port_range_min='22', protocol='tcp', updated_at='2018-12-03T14:23:55Z'                               |
|                 | created_at='2018-12-04T10:48:19Z', direction='ingress', ethertype='IPv4', id='4add6b43-3c85-41e0-8cd7-02090c808ac0', remote_ip_prefix='0.0.0.0/0', updated_at='2018-12-04T10:48:19Z'                                                           |
|                 | created_at='2018-12-03T14:23:41Z', direction='ingress', ethertype='IPv6', id='6f5c9136-40ab-446f-be0c-d59e1004d6cc', protocol='ipv6-icmp', updated_at='2018-12-03T14:23:41Z'                                                                   |
|                 | created_at='2018-12-03T14:23:30Z', direction='ingress', ethertype='IPv4', id='8677827b-c1c3-48e3-aa75-f42ae71811b9', remote_group_id='a03bdd58-909b-4e56-8cf0-b785a60dc9ad', updated_at='2018-12-03T14:23:30Z'                                 |
|                 | created_at='2018-12-03T14:23:47Z', direction='ingress', ethertype='IPv4', id='aa78b9b8-246a-440b-8bac-6e109e06d623', port_range_max='22', port_range_min='22', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2018-12-03T14:23:47Z' |
|                 | created_at='2018-12-03T14:23:30Z', direction='ingress', ethertype='IPv6', id='eded8f4d-ed0d-4645-965d-5a594532959c', remote_group_id='a03bdd58-909b-4e56-8cf0-b785a60dc9ad', updated_at='2018-12-03T14:23:30Z'                                 |
|                 | created_at='2018-12-03T14:23:31Z', direction='ingress', ethertype='IPv4', id='fc0af4c1-f333-4467-895f-02533f4fc9f5', protocol='icmp', remote_ip_prefix='0.0.0.0/0', updated_at='2018-12-03T14:23:31Z'                                          |
|                 | created_at='2018-12-03T14:23:30Z', direction='egress', ethertype='IPv4', id='fedf0b0d-e62c-4257-9b74-b6a931caea02', updated_at='2018-12-03T14:23:30Z'                                                                                          |
| updated_at      | 2018-12-04T10:50:19Z                                                                                                                                                                                                                           |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
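
The ICMP and SSH ingress rules in that list were added with the standard commands, roughly like this (the group is the project's default group, as shown above):

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default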

I'm at a complete loss as to what the problem is, and any advice would be appreciated. I suspect it has something to do with how traffic from inside OpenStack is bridged onto the node's physical interface, but I don't know what else to check.
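
For what it's worth, this is what I plan to check next on the compute nodes (the bridge name again follows the brq<network-id> convention, so it is illustrative):

# confirm eno2 and the instance tap are both ports of the provider bridge
brctl show brqf7cc0f45-97

# confirm the physical interface and the bridge are up
ip link show eno2
ip link show brqf7cc0f45-97

# check whether bridged traffic is being pushed through iptables
sysctl net.bridge.bridge-nf-call-iptables

# test gateway reachability from inside the DHCP namespace
ip netns exec qdhcp-f7cc0f45-97a8-4341-8251-62cbafa60e0f ping -c 3 203.0.113.1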