
Network Problem Creating Instance with Single NIC Compute Node

asked 2016-02-16 11:11:43 -0600 by jmhal, updated 2016-02-16 11:14:32 -0600

Hello Everyone,

I'm trying to install OpenStack 1.7.2 from RDO on CentOS 7. I started following this tutorial, but since I don't have two NICs on the Compute Node, I devised this architecture:

[architecture diagram omitted]

On the Controller node, I installed via packstack all the management services, including neutron-server. On the Compute Node, there are only the nova services and the neutron openvswitch agent.

The Controller Node has the following configurations:

/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges =physnet1:1000:2000
[ml2_type_gre]
[ml2_type_vxlan]
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

/etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
integration_bridge = br-int
bridge_mappings =physnet1:br-ens1f1
enable_tunneling=False
[agent]
polling_interval = 2
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

/etc/sysconfig/network-scripts/ifcfg-ens1f0:

NAME="ens1f0"
DEVICE="ens1f0"
HWADDR="XX:XX:XX:XX:XX:XX"
ONBOOT="yes"

/etc/sysconfig/network-scripts/ifcfg-ens1f1:

NAME="ens1f1"
DEVICE="ens1f1"
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-br-ex:

NAME="br-ex"
DEVICE="br-ex"
ONBOOT="yes"
NETBOOT="yes"
IPV6INIT="yes"
BOOTPROTO="static"
IPADDR="172.10.10.1"
PREFIX="24"
GATEWAY="172.10.10.193"
DNS1="172.10.10.192"
TYPE="OVSBridge"
DEVICETYPE="ovs"
DEFROUTE="yes"

/etc/sysconfig/network-scripts/ifcfg-br-ens1f1:

NAME="br-ens1f1"
DEVICE="br-ens1f1"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=yes
BOOTPROTO=static
IPADDR="192.168.0.1"
NETMASK="255.255.255.0"
DNS1="192.168.0.1"
DNS2="200.17.41.36"
TYPE="OVSBridge"
DEVICETYPE="ovs"

On the controller, result of ovs-vsctl show:

54dc7bbf-c954-4d81-b121-96d0d0866e08
    Bridge br-ex
        Port "ens1f0"
            Interface "ens1f0"
        Port "qg-f42cded3-35"
            Interface "qg-f42cded3-35"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-ens1f1"
        Port "ens1f1"
            Interface "ens1f1"
        Port "br-ens1f1"
            Interface "br-ens1f1"
                type: internal
        Port "phy-br-ens1f1"
            Interface "phy-br-ens1f1"
                type: patch
                options: {peer="int-br-ens1f1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-ens1f1"
            Interface "int-br-ens1f1"
                type: patch
                options: {peer="phy-br-ens1f1"}
        Port "tap9ea1a119-58"
            tag: 1
            Interface "tap9ea1a119-58"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-7022af0e-4a"
            tag: 1
            Interface "qr-7022af0e-4a"
                type: internal
    ovs_version: "2.4.0"

On the Compute Node, I have the following settings:

/etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
integration_bridge = br-int
enable_tunneling=False
[agent]
polling_interval = 2
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

/etc/sysconfig/network-scripts/ifcfg-eno1:

NAME="eno1"
DEVICE="eno1"
HWADDR="XX:XX:XX:XX:XX:XX"
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-br-eno1:

NAME="br-eno1"
DEVICE="br-eno1"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=yes
BOOTPROTO=static
IPADDR="192.168.0.2"
PREFIX="24"
GATEWAY="192.168.0.1"
DNS1="192.168.0.1"
DNS2="8.8.8.8"
TYPE="OVSBridge"
DEVICETYPE="ovs"

On the Compute Node, result of ovs-vsctl show:

b9dfa0a7-191c-42ab-9730-ed2e17e57943
    Bridge "br-eno1"
        Port "eno1"
            Interface "eno1"
        Port "phy-br-eno1"
            Interface "phy-br-eno1"
                type: patch
                options: {peer="int-br-eno1"}
        Port "br-eno1"
            Interface "br-eno1"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "int-br-eno1"
            Interface "int-br-eno1"
                type: patch
                options: {peer="phy-br-eno1"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.4.0"

I uploaded the cirros image and created the following networks/subnet:

# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name ...
(more)

2 answers


answered 2016-03-03 11:52:11 -0600 by jmhal, updated 2016-03-04 09:20:09 -0600

Found a workaround that is working for me:

First of all, I enabled GRE for Neutron. On the controller, /etc/neutron/plugins/ml2/ml2_conf.ini now looks like this:

[ml2]
type_drivers = flat,vlan,gre
tenant_network_types = vlan,gre
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
flat_networks = external 
[ml2_type_vlan]
#network_vlan_ranges =physnet1:1000:2000
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[ml2_type_geneve]

And the file /etc/neutron/plugins/ml2/openvswitch_agent.ini looks like this:

[ovs]
integration_bridge = br-int
local_ip = 192.168.0.1
bridge_mappings = external:br-ex
[agent]
polling_interval = 2
tunnel_types = gre
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True

On the compute nodes, /etc/neutron/plugins/ml2/openvswitch_agent.ini looks like this:

[ml2]
type_drivers = flat,vlan,gre
tenant_network_types = vlan,gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
integration_bridge = br-int
local_ip = 192.168.0.13
enable_tunneling=False
[agent]
polling_interval = 2
tunnel_types = gre
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Then I proceeded to delete all existing bridges on the nodes, including br-int, br-tun, etc., using the ovs-vsctl command. After that, I restarted the whole system and the bridges were recreated. Somehow, I could then create instances with no trouble. The output of ovs-vsctl show on the controller:

54dc7bbf-c954-4d81-b121-96d0d0866e08
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qg-23f7c43c-6e"
            tag: 2
            Interface "qg-23f7c43c-6e"
                type: internal
        Port "qr-c1456155-03"
            tag: 1
            Interface "qr-c1456155-03"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapc3f44a81-1b"
            tag: 1
            Interface "tapc3f44a81-1b"
                type: internal
    Bridge br-tun
        Port "gre-c0a80010"
            Interface "gre-c0a80010"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.16"}
        Port "gre-c0a8000b"
            Interface "gre-c0a8000b"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.11"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-c0a80011"
            Interface "gre-c0a80011"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.17"}
        Port "gre-c0a8000e"
            Interface "gre-c0a8000e"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.14"}
        Port "gre-c0a8000f"
            Interface "gre-c0a8000f"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.15"}
        Port "gre-c0a80012"
            Interface "gre-c0a80012"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.18"}
        Port "gre-c0a8000d"
            Interface "gre-c0a8000d"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.13"}
        Port "gre-c0a8000c"
            Interface "gre-c0a8000c"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.0.1", out_key=flow, remote_ip="192.168.0.12"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "ens1f0"
            Interface "ens1f0"
        Port br-ex
            Interface br-ex
                type: internal ...
(more)
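The bridge cleanup described above can be sketched as a small script. This is a dry-run sketch of my own (with run=echo each command is only printed; set run to empty to actually execute it on a node, and adjust the bridge list to what exists on each box):

```shell
#!/bin/sh
# Dry-run sketch: with run=echo each command is printed, not executed.
# Set run= (empty) to really delete the bridges on a node.
cleanup_bridges() {
    run=echo
    # neutron-openvswitch-agent recreates these on its next start,
    # based on what openvswitch_agent.ini mentions.
    for br in br-int br-tun br-ens1f1; do
        $run ovs-vsctl --if-exists del-br "$br"
    done
    $run systemctl restart neutron-openvswitch-agent
}

cleanup_bridges
```

On the compute nodes the physical bridge would be br-eno1 instead of br-ens1f1.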

Comments

Sounds like you worked with RDO Kilo. What kind of problems do the Neutron server and the neutron agent daemons (services) have with GRE? Could you please elaborate on what "enabling GRE for Neutron" means?
I believe it's just editing ml2_conf.ini on the Controller; what else?

dbaxps (2016-03-03 14:10:31 -0600)

I forgot: also enabling tunneling in the [ovs] section of openvswitch_agent.ini.

dbaxps (2016-03-03 14:14:54 -0600)

You are correct that openvswitch_agent.ini needs more tuning. Packstack spoiled me ;)
The configuration supported by the Neutron server should also be provided to neutron-openvswitch-agent.

dbaxps (2016-03-03 15:04:07 -0600)

The bridges were recreated by neutron-openvswitch-agent: when it starts, it reads openvswitch_agent.ini and recreates every bridge mentioned in that file.

dbaxps (2016-03-03 15:10:41 -0600)

Oops! I don't see "tunnel_bridge = br-tun" in your openvswitch_agent.ini.
Please repost ovs-vsctl show from all nodes.

dbaxps (2016-03-03 15:20:18 -0600)

answered 2016-02-16 15:24:51 -0600 by dbaxps, updated 2016-03-03 15:58:33 -0600

UPDATE 03/04/2016
As of now I see two mistakes: your [ovs] section is missing

[ovs]
enable_tunneling=True
tunnel_bridge = br-tun

A sample configuration for a GRE tunnel looks like this:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.1.127
[agent]
polling_interval = 2
tunnel_types =gre
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

END UPDATE
Packstack won't work for you until you switch both boxes to eth0/eth1 naming.
On CentOS 7.2 this procedure seems a bit easier to me than on CentOS 7.0:
Just add "net.ifnames=0" to the kernel command line in /boot/grub2/grub.cfg.
Reboot the box and run ifconfig; updating the udev rules seems not to be needed.
Binding eth0 (eth1) to the corresponding MACs is done for you by the system (at least in my experience).
Then rename the ifcfg-* files to ifcfg-eth0 and ifcfg-eth1 (updating the device names accordingly).
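The renaming steps can be sketched as a dry-run script (it only echoes each step; grubby is used here as a convenience instead of hand-editing /boot/grub2/grub.cfg, and ens1f0 is just the example interface from this thread):

```shell
#!/bin/sh
# Dry-run sketch: with run=echo each step is printed, not executed.
# Set run= (empty) to really apply it on a CentOS 7 box.
rename_to_legacy_names() {
    run=echo
    scripts=/etc/sysconfig/network-scripts
    # Put net.ifnames=0 on the kernel command line for all kernels.
    $run grubby --update-kernel=ALL --args="net.ifnames=0"
    $run reboot
    # After the reboot the NICs come up as eth0/eth1: rename the
    # ifcfg file and fix the NAME=/DEVICE= lines inside it to match.
    $run mv "$scripts/ifcfg-ens1f0" "$scripts/ifcfg-eth0"
    $run sed -i 's/ens1f0/eth0/g' "$scripts/ifcfg-eth0"
}

rename_to_legacy_names
```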
You don't need 2 NICs on the Compute node. I forget when I last did an ML2&OVS&VLAN setup (just ML2&OVS&VXLAN recently), but it should not matter, unless you intend to run DVR (then yes: one NIC for the data/VM VLAN connection or VXLAN tunnel to the Controller, and another as a port for br-ex on the Compute node). Otherwise you get standard Neutron routing through a single Neutron L3 point of failure on the Controller/Network node.
See also http://bderzhavets.blogspot.com/2014/... (skipping the Gluster 3.5.2 details and everything related to multiple Cinder backends).

NIC eth0 from http://www.tuxfixer.com/install-openstack-on-centos-7-rhel-7/ is not needed on the Compute node; it has nothing to do there. Bridge br-eth1 with port eth1 is responsible for the VLAN data/VM connection between Compute and Controller.

I would use the eth1 NICs as VTEPs for VXLAN (it would work through any cheap dumb switch, with no VLAN tagging headaches on switch trunk ports).
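Since I've only done ML2&OVS&VXLAN recently, here is a minimal sketch of that variant (the VNI range and addresses are placeholders, not values from this thread; local_ip would be each node's eth1 address):

```
# ml2_conf.ini on the Controller -- VXLAN sketch, placeholder values
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_vxlan]
vni_ranges = 1001:2000

# openvswitch_agent.ini on every node
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = <this node's eth1 address>
[agent]
tunnel_types = vxlan
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```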


Comments

Thank you for the answer. I will rename the interfaces and try again. If it works out, I'll reply here.

jmhal (2016-02-16 16:48:23 -0600)

Any update @jmhal?

Eduardo Gonzalez (2016-02-29 15:25:38 -0600)

Can you please create an answer from your comment and mark it as the correct answer? It will help other people with similar issues in the future.

Regards

Eduardo Gonzalez (2016-03-02 15:41:38 -0600)

@Eduardo Gonzalez, are you sure you've got the desired feed?

dbaxps (2016-03-03 15:42:55 -0600)
