Revision history

Packstack won't work for you until you switch both boxes to eth0,eth1 naming.
On CentOS 7.2 this procedure seems a bit easier to me than on CentOS 7.0.
Just add "net.ifnames=0" to the kernel command line in /boot/grub2/grub.cfg.
Reboot the box and run ifconfig; updating the udev rules does not seem to be needed.
Binding eth0 (eth1) to the corresponding MACs is done for you by the system (at least in my experience).
Then rename the ifcfg-* files to ifcfg-eth0, ifcfg-eth1 (updating the device names inside accordingly).
You don't need 2 NICs on Compute. I forget when I last did an ML2&OVS&VLAN setup (just ML2&OVS&VXLAN),
but it should not matter unless you intend to run DVR (then "yes": one NIC for the data/vm VLAN connection or VXLAN tunnel to the Controller, another as the port for BR-EX on the Compute node). Otherwise you get standard Neutron routing through a single Neutron L3 point of failure on the Controller/Network node.
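The renaming steps above can be sketched as follows. This runs against a scratch copy so it is safe to try anywhere; the old NIC name (enp2s0) is only an example — on a real box DIR is /etc/sysconfig/network-scripts and the old name is whatever "ip link" shows:

```shell
# Step 1 (real box only): switch the kernel to legacy ethX names, then reboot:
#   grubby --update-kernel=ALL --args="net.ifnames=0"

# Step 2: rename the ifcfg file and the device name inside it.
# Done here in a scratch directory with an example NIC name (enp2s0).
DIR=$(mktemp -d)                       # real box: /etc/sysconfig/network-scripts
OLD=enp2s0                             # real box: take this from "ip link"
printf 'DEVICE=%s\nONBOOT=yes\n' "$OLD" > "$DIR/ifcfg-$OLD"

mv "$DIR/ifcfg-$OLD" "$DIR/ifcfg-eth0"
sed -i "s/$OLD/eth0/g" "$DIR/ifcfg-eth0"

cat "$DIR/ifcfg-eth0"                  # now contains DEVICE=eth0
```

Repeat the mv/sed pair for the second NIC (ifcfg-eth1) before rebooting.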

See also http://bderzhavets.blogspot.com/2014/11/rdo-setup-two-real-node.html (skipping the Gluster 3.5.2 details and everything related to multiple Cinder backends).

NIC eth0 from http://www.tuxfixer.com/install-openstack-on-centos-7-rhel-7/ is not needed on the Compute node; it has nothing to do there. Bridge br-eth1 with port eth1 is responsible for the VLAN data/vm connection between Compute and Controller.

I would use the eth1 NICs as VTEPs for VXLAN (this works through any cheap dumb switch, with no VLAN-tagging headaches on switch trunk ports).
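For the record, a minimal ifcfg-eth1 for such a VTEP could look like the sketch below. The 10.0.0.0/24 addressing is just an example chosen to match the local_ip in the sample agent config; use whatever subnet your second NICs share, and NM_CONTROLLED=no keeps NetworkManager off the tunnel NIC:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- example values only
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.147
NETMASK=255.255.255.0
NM_CONTROLLED=no
```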

UPDATE 03/04/2016
A sample openvswitch_agent.ini for a VXLAN tunnel looks like this:

cat openvswitch_agent.ini | grep -v '^#' | grep -v '^$'
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.0.147
bridge_mappings = physnet1:br-ex
enable_tunneling = True
[agent]
polling_interval = 2
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = True
arp_responder = True
enable_distributed_routing = True
drop_flows_on_start = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

END UPDATE
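A quick way to sanity-check the agent file on each node is to pull out the tunnel settings. This is sketched here against a scratch copy of the sample above so the commands are safe to run anywhere; on a real node F would be /etc/neutron/plugins/ml2/openvswitch_agent.ini, and remember that local_ip must be each node's own VTEP address, so it differs per node:

```shell
F=$(mktemp)          # real node: /etc/neutron/plugins/ml2/openvswitch_agent.ini
cat > "$F" <<'EOF'
[ovs]
tunnel_bridge = br-tun
local_ip = 10.0.0.147
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
EOF

# extract the VTEP address and confirm VXLAN is the tunnel type
awk -F' *= *' '$1=="local_ip"{print $2}' "$F"
grep -q 'tunnel_types *= *vxlan' "$F" && echo "vxlan enabled"
```

Once the agent restarts on both nodes, "ovs-vsctl show" should report a vxlan port on br-tun whose remote_ip is the other node's local_ip.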
UPDATE 03/04/2016
As of now I see 2 mistakes in the sample above:

[ovs]
enable_tunneling=True
tunnel_bridge = br-tun

A corrected sample for a GRE tunnel looks like this:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.1.127
[agent]
polling_interval = 2
tunnel_types = gre
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

END UPDATE