# Openstack Manual Installation (CentOS/RHEL) Network Questions

I have been using packstack to set up and test different OpenStack environment types, including all-in-one, separate controller and compute nodes, and all-in-one with a separate storage appliance (Nexenta). The goal now is to build a production-ready OpenStack private cloud for testing, so I would like to get away from using packstack (unless you can convince me that packstack is production ready). I have been following the Installation Guide for CentOS/RHEL for Liberty, and I have become stuck at the networking portion. I want to use OpenvSwitch as the ML2 mechanism, but the documents don't seem to have any information about the process of creating the br-int, br-tun, and br-ex bridges that packstack would have created. I have already checked the Networking Guide and the Operations Guide with no luck. It is also very important to me to have different networks on different interfaces, such as management traffic on eth0, the external network on eth1, and any other networks as needed. I am open to using a provider network or allowing self-service networks (without FWaaS and LBaaS). I plan on converting this all into Chef for my specific environment.

My questions are:

1) How can I configure OpenvSwitch during a manual installation of Openstack without packstack?

2) Do I need to create these br-* bridges or am I seeing this wrong?

3) How can I set this up across different interfaces instead of all on the "management" network like in the Install Guide?

4) Is there a situation when I would not need one of the br-ex, br-int, br-tun interfaces?



Questions
1) How can I configure OpenvSwitch during a manual installation of Openstack without packstack?
For instance, like this (the case of an ML2&OVS&VXLAN setup):

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent vxlan_udp_port 4789
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 192.169.142.137
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings physnet1:br-eth0
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs network_vlan_ranges physnet1
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population False
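The commands above simply write the corresponding keys into the file, so the end state can be checked directly; the resulting openvswitch_agent.ini should look along these lines:

```ini
[ovs]
local_ip = 192.169.142.137
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
bridge_mappings = physnet1:br-eth0
network_vlan_ranges = physnet1

[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = False

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```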


Next, start and enable the agent services:

systemctl start neutron-openvswitch-agent
systemctl enable neutron-ovs-cleanup
systemctl enable neutron-openvswitch-agent
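Once the agent is running, it creates the integration and tunnel bridges itself; a quick way to check:

```shell
# list the OVS bridges; br-int and br-tun should appear once
# neutron-openvswitch-agent has started with tunneling enabled
ovs-vsctl list-br
```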


2) Do I need to create these br-* bridges or am I seeing this wrong?

No. The neutron-openvswitch-agent creates br-int (and br-tun, when tunneling is enabled) automatically when it starts. Update /etc/neutron/plugins/ml2/openvswitch_agent.ini as needed, then restart the agent:

# systemctl restart neutron-openvswitch-agent
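Bridges referenced in bridge_mappings (for example a classic br-ex, or the br-eth0 above) are the exception and must be created by hand before the agent can use them. A sketch, assuming eth1 is the NIC carrying external traffic:

```shell
# create the provider/external bridge and enslave the physical NIC
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
# restart the agent so it patches the new bridge into br-int
systemctl restart neutron-openvswitch-agent
```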


3) How can I set this up across different interfaces instead of all on the "management" network like in the Install Guide?

 I don't follow you here
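(If the question is how to split traffic types across NICs: each type follows whichever address or bridge you point its option at. A sketch, with eth0 = management, eth1 = external, eth2 = tunnels; all addresses here are made-up examples:)

```shell
# tunnels ride eth2: give the OVS agent eth2's IP address
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.2.11

# external traffic rides eth1: put eth1 into the provider bridge
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings physnet1:br-ex

# management stays on eth0: bind the API services to eth0's IP,
# e.g. bind_host = 10.0.0.11 in neutron.conf
```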


4) Is there a situation when I would not need one of the br-ex, br-int, br-tun interfaces?

You might not need br-ex if you use an external provider network of type flat or VLAN:

$ neutron net-create external1 -- --router:external=true \
    --provider:network_type=flat \
    --provider:physical_network=physnet1
$ neutron subnet-create --disable-dhcp external1 10.1.0.0/24


You might not need br-tun when doing an ML2&OVS&VLAN deployment.
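A VLAN-only agent configuration, for instance, drops the tunnel options entirely, so the agent never builds br-tun (a sketch; the physnet name, bridge, and VLAN range are assumptions):

```ini
[ovs]
integration_bridge = br-int
bridge_mappings = physnet1:br-eth1
network_vlan_ranges = physnet1:100:200

[agent]
# no tunnel_types set -> no tunneling, no br-tun
```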


The funny thing here is that an ML2&OVS&VXLAN deployment is done by packstack perfectly well. ML2&OVS&VLAN comes out with minor errors on the Compute nodes, which can be fixed fairly quickly using the correct templates from the Controller/Network (or Network) node. An initial packstack deployment also has no problems with switching to an external provider network. The cases it cannot handle are an HA 3-node Controller setup and a production Swift Nodes configuration. I am not sure about Swift, but an HA 3-Controller setup can be performed quickly via TripleO; otherwise it is going to be hard manual configuration requiring a clear understanding of MariaDB Galera multi-master synchronous replication, RabbitMQ queues, and so on.


So are you using the openvswitch_agent.ini file to create the bridges for you? When you restart the service, does it create the bridges? Also, what I meant was: what is the process for having the management network and external network on different NICs?

( 2016-03-13 10:27:31 -0500 )

Of course, it is not a problem to configure the external bridges manually, but due to my laziness I have packstack responsible for all network configs. No complaints so far. I don't mean in any way that packstack is ready for production, just that I am. Kudos again to Red Hat's great work.

( 2016-03-13 12:13:06 -0500 )

This answer helped me a lot. I had neither the openvswitch service nor the neutron-ovs-cleanup service started. I was able to get this working in my test lab. Thank you.

( 2016-04-26 18:42:20 -0500 )

I agree with Mohit. On each node with access to the flat network, you need to create an OVS bridge. This must be done outside of OpenStack with the ovs-vsctl command:

ovs-vsctl add-br br252


Then add a port on the bridge to a PHYSICAL NIC. Make sure you turn off system configuration on the interface, because OVS needs complete control of the hardware on a bridge. Bonding (LACP) and VLAN tagging (802.1q) WILL NOT WORK on an OVS bridge; it needs a physical port:

ovs-vsctl add-port br252 eno2
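On CentOS/RHEL, turning off system configuration for the enslaved NIC usually means an ifcfg file with no IP settings (a sketch for the eno2 above):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eno2
DEVICE=eno2
ONBOOT=yes
BOOTPROTO=none
```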


Then configure your ML2 openvswitch_agent.ini for the VLANs. This worked for me on CentOS 7 with DVR.

cat /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.0.14
enable_tunneling = True
bridge_mappings = ext252:br252
[agent]
l2_population = True
tunnel_types = vxlan,gre
enable_distributed_routing = True
arp_responder = True
[ml2_type_flat]
flat_networks = ext252
[ml2_type_vlan]
network_vlan_ranges = ext250,ext251,ext252
[ml2_type_gre]
tunnel_id_ranges = 1000:2000
[ml2_type_vxlan]
vni_ranges = 1000:2000
vxlan_group = 239.1.1.1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True


Then restart neutron and nova agents.
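On CentOS 7 that is roughly the following; exact service names depend on which agents run on the node:

```shell
systemctl restart neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
systemctl restart openstack-nova-compute
```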

You will see your new bridge and ports in OVS.

ovs-vsctl show
.....
    Bridge "br252"
        Port "phy-br252"
            Interface "phy-br252"
                type: patch
                options: {peer="int-br252"}
        Port "eno2"
            Interface "eno2"
        Port "br252"
            Interface "br252"
                type: internal


Thank you for the detailed reply. If I wanted to use LACP, would I have to set that up in the system configuration file (ifcfg-eth0), or is it simply not possible with OVS?

( 2016-03-13 10:32:10 -0500 )
1. http://www.opencloudblog.com/?p=460

2. Yes, you need to create the bridges.

3. You need all 3 of them.


Thank you for the informative links.

( 2016-03-13 10:28:20 -0500 )

You wrote "unless you can convince me that packstack is production ready".
That is correct: packstack is not (officially) recommended for production deployments.
TripleO is what is officially recommended for RDO deployments in production. Please follow
http://docs.openstack.org/developer/t...

See also
https://raw.githubusercontent.com/ape...
RDO Manager is now known upstream as TripleO.
