[Linuxbridge] How to set up instances with layer-3 (external) networking?

asked 2019-09-18 16:17:36 -0500

dabovard

updated 2019-09-18 16:35:57 -0500

Hello everyone, I am very new to OpenStack, so sorry if this seems like a stupid question (and a bit lengthy). I have been following this guide to set up my network: https://docs.openstack.org/neutron/rocky/install/controller-install-rdo.html. I followed the self-service networking route when setting everything up.

Some context on my OpenStack architecture:

I have set up an architecture with 3 nodes: a controller and 2 compute nodes. All 3 nodes run as virtual machines under Virtual Machine Manager on a CentOS 7 physical host. The management interface is attached to virtual bridge virbr0 on the physical machine and appears as eth0 on the nodes, while the provider network is attached to virbr1 on the physical machine and appears as eth1 on the nodes.

Some context on my OpenStack network configuration:

The controller node runs the DHCP, layer-3, metadata, and linuxbridge agents. The two compute nodes run the linuxbridge agent. All of my instances get assigned an IP address on a private network I have called 'Network1' in the private address space 192.168.14.0/24, and they are able to ping each other. I am having trouble configuring and establishing a public-facing network that gives my instances internet access.
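A quick way to confirm the agent layout described above is to list the agents Neutron knows about (host names here are from this setup; a healthy deployment shows each agent as alive):

```shell
# Expect dhcp, l3, metadata, and linuxbridge agents on "controller",
# and a linuxbridge agent on each of the two compute nodes.
openstack network agent list
```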

I created an external flat network attached to the physical network 'provider', which is mapped to interface eth1; eth1 does not have an IP address assigned to it, as stated in the tutorial. I named the network 'public' and created a subnet on it called 'publicSub'. I set the 'public' network as the gateway of Router1, which is connected to Network1. I allocated a floating IP from publicSub to one of my instances, which now has 2 IP addresses (private and floating). I am able to ping the public network gateway and can still ping other instances inside the cloud as well as Router1, but I am not able to ping any external addresses or the DNS servers.
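For reference, the steps described above correspond roughly to these CLI commands (the network, subnet, and router names are from the post; the CIDR, allocation pool, gateway address, and instance name are illustrative assumptions, not the poster's actual values):

```shell
# Flat external network on the "provider" physical network (mapped to eth1)
openstack network create public --external \
    --provider-network-type flat --provider-physical-network provider

# Subnet on the external network; Neutron allocates gateway and floating
# IPs from the pool, so no DHCP is needed here
openstack subnet create publicSub --network public \
    --subnet-range 203.0.113.0/24 \
    --allocation-pool start=203.0.113.101,end=203.0.113.200 \
    --gateway 203.0.113.1 --no-dhcp

# Set the external network as the router's gateway
openstack router set Router1 --external-gateway public

# Allocate a floating IP and attach it to an instance
openstack floating ip create public
openstack server add floating ip my-instance 203.0.113.105
```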

I followed the tutorial exactly; here is what I put in my configuration files:

Controller Node:

neutron.conf:
[database]
connection = mysql+pymysql://neutron:<my_password>@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:<my_password>@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = <my_password>
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = <my_password>
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

ml2_conf.ini:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:eth1
[vxlan]
enable_vxlan = true
local_ip = 192.168.122.49
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

l3_agent.ini:
[DEFAULT]
interface_driver = linuxbridge

dhcp_agent.ini:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

metadata_agent.ini:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = <my_secret>

Compute Nodes:

neutron.conf:
[DEFAULT]
transport_url = rabbit://openstack:<my_password>@controller ...

Comments

To understand how instances are connected, study https://docs.openstack.org/neutron/la...

Bernd Bausch ( 2019-09-18 17:08:15 -0500 )
A dedicated network node is not needed. It's useful when the controller is overloaded.

Bernd Bausch ( 2019-09-18 17:09:32 -0500 )

Thanks Bernd, I will check it out and report back. Is there really supposed to be no IP address on the provider network interface mapped to eth1? It seems very odd to me.

dabovard ( 2019-09-18 17:15:30 -0500 )
The absence of an IP address puzzled me as well initially. It turns out that eth1 is plugged into a bridge. The router's network namespace contains an interface that is also plugged into the bridge, and the router's gateway address as well as any floating IPs are set on that interface.
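The namespace arrangement described in this comment can be inspected directly on the node running the L3 agent (a sketch; the router name "Router1" is from the post, and the qrouter- namespace naming is standard Neutron behavior):

```shell
# Look up the router's UUID, then enter its network namespace.
ROUTER_ID=$(openstack router show Router1 -f value -c id)

# The qg- interface inside the namespace carries the external gateway
# address and any floating IPs, which is why eth1 itself stays unaddressed.
sudo ip netns exec "qrouter-${ROUTER_ID}" ip addr show

# Test north-south connectivity from the router's own vantage point;
# if this fails, the problem is upstream of the cloud (e.g. virbr1).
sudo ip netns exec "qrouter-${ROUTER_ID}" ping -c 3 8.8.8.8
```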

Bernd Bausch ( 2019-09-18 18:55:29 -0500 )

Thank you for the explanation, and thank you for all your swift replies!!! Will pick it back up in the morning and study north-south networking.

dabovard ( 2019-09-18 19:43:32 -0500 )