# Revision history [back]

It was written above:

> Now only the Network Node can ping the VM floating IP and the neutron router gateway; the controller/compute nodes cannot ping via the public IP. Any thoughts?

That is normal. See the picture below.

This is a classic three-node deployment on RDO Liberty. No matter whether it is bridged or non-bridged (when an external network provider is involved, external neutron traffic goes through br-int), by default the external network is reachable only from the Network Node.

If you plug one more NIC into the Controller and attach it to the external network, you will get access to your VMs via their FIPs from the Controller as well.
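
A rough sketch of attaching that extra NIC on the Controller, assuming the external bridge is br-ex and the spare NIC is eth3 (both names are hypothetical; use whatever matches your environment):

```shell
# On the Controller: create the external bridge (if absent) and add the spare NIC
ovs-vsctl --may-exist add-br br-ex
ovs-vsctl --may-exist add-port br-ex eth3
# The NIC itself should carry no IP address once it is enslaved to the bridge
ip addr flush dev eth3
ip link set eth3 up
```

After that the Controller has L2 connectivity to the external network and can reach the VMs' floating IPs.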




```
# When external_network_bridge is set, each L3 agent can be associated
# with no more than one external network. This value should be set to the UUID
# of that external network. To allow L3 agent support multiple external
# networks, both the external_network_bridge and gateway_external_network_id
# must be left empty.
```


Setting `external_network_bridge =` to an empty value in /etc/neutron/l3_agent.ini enables the use of external provider networks; in that case external network traffic goes through br-int. Otherwise you get normal bridged external networking via br-ex. Details here:
http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html
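
A minimal sketch of the relevant l3_agent.ini settings for the non-bridged (external provider network) case, following the config-file comment quoted above:

```
# /etc/neutron/l3_agent.ini -- non-bridged external networking
[DEFAULT]
# Leave both empty so one L3 agent can serve multiple external
# networks and external traffic goes through br-int rather than
# a dedicated br-ex.
external_network_bridge =
gateway_external_network_id =
```

Restart neutron-l3-agent on the Network Node after changing these values.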
When non-bridged external networking comes into play, neutron-openvswitch-agent (running on the Network Node) places the external interfaces of the corresponding neutron routers into the corresponding OVS bridges (in this particular case br-eth3, br-eth4, br-ex); see the link above.

None of br-eth3, br-eth4, br-ex has an IP address; they all work via br-int, as described in the link posted above.
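
You can verify this on the Network Node: the router's gateway port is plugged into br-int, the external bridges carry no IP, and the gateway IP lives inside the router's namespace (the router UUID below is a placeholder):

```shell
# The router's qg-XXXX gateway port is attached to the integration bridge
ovs-vsctl list-ports br-int | grep '^qg-'
# The external bridge itself has no IP address assigned
ip addr show br-ex
# The gateway IP is configured inside the qrouter namespace instead
ip netns exec qrouter-<router-uuid> ip addr show
```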
