
VXLAN tunnels messed up after l2population enabled - RDO Juno

asked 2014-11-28 22:00:18 -0500 by Anil

Hi All,

Four-node setup: 2 compute, 1 controller, 1 network node.

Is anyone aware of any bug/issue with ML2 l2population + VXLAN in Juno? My setup was working fine before l2population was enabled to test DVR in the lab. All VXLAN tunnels disappear after l2pop is enabled and I get the error below. I tried to clear the endpoints in the database, but no luck; the tunnels are visible again once l2pop is disabled. Output below for reference.

 [root@controller ~]# tail -f /var/log/neutron/server.log
        2014-11-28 22:17:34.051 3815 WARNING neutron.plugins.ml2.drivers.type_vxlan [req-bf4907c5-76bf-42f8-b2f9-31c317e060bc None] Vxlan endpoint with ip 10.0.0.222 already exists
        2014-11-28 22:18:15.670 3816 WARNING neutron.plugins.ml2.drivers.type_vxlan [req-212adc5f-10b4-4b18-93a1-7e81f8fa3f11 None] Vxlan endpoint with ip 10.0.0.223 already exists
        2014-11-28 22:18:33.785 3816 WARNING neutron.plugins.ml2.drivers.type_vxlan [req-957ab4bf-68a4-42b6-8252-71a431740d4b None] Vxlan endpoint with ip 10.0.0.221 already exists

        [root@compute1 ~]# ovs-vsctl show
        ce4a8de9-3d33-443e-9e48-7a590305af7a
            Bridge br-ex
                Port "eth0"
                    Interface "eth0"
                Port br-ex
                    Interface br-ex
                        type: internal
            Bridge br-int
                fail_mode: secure
                Port patch-tun
                    Interface patch-tun
                        type: patch
                        options: {peer=patch-int}
                Port br-int
                    Interface br-int
                        type: internal
            Bridge br-tun
                Port patch-int
                    Interface patch-int
                        type: patch
                        options: {peer=patch-tun}
                Port br-tun
                    Interface br-tun
                        type: internal
            ovs_version: "2.1.3"

        [root@compute1 ~]# 

        MariaDB [neutron]> select * from ml2_vxlan_endpoints ;
        +------------+----------+
        | ip_address | udp_port |
        +------------+----------+
        | 10.0.0.221 |     4789 |
        | 10.0.0.222 |     4789 |
        | 10.0.0.223 |     4789 |
        +------------+----------+
        3 rows in set (0.00 sec)

        MariaDB [neutron]> 

        MariaDB [neutron]> delete from ml2_vxlan_endpoints where udp_port = 4789 ;
        Query OK, 3 rows affected (0.01 sec)

        MariaDB [neutron]> select * from ml2_vxlan_endpoints ;
        Empty set (0.00 sec)

        MariaDB [neutron]> 


I tried restarting neutron-openvswitch-agent on the compute and network nodes, but the same warning ("endpoint with <ip> already exists") appears again in the neutron server log, and the tunnels are still not visible.
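
For reference, the restart sequence was roughly the following (RDO/systemd service names; adjust for your distro). The agents appear to re-register their tunnel endpoints over RPC on startup, which is presumably why the deleted rows come back:

# on the controller
systemctl restart neutron-server

# on each compute / network node
systemctl restart neutron-openvswitch-agent

# the deleted endpoint rows reappear (DB credentials assumed configured)
mysql -e 'select * from ml2_vxlan_endpoints;' neutron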

Controller node ML2 config:
    [root@controller ~]# grep -v ^# /etc/neutron/plugin.ini | grep -v ^$
    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population
    [ml2_type_flat]
    [ml2_type_vlan]
    [ml2_type_gre]
    [ml2_type_vxlan]
    vni_ranges =10:100
    vxlan_group =224.0.0.1
    [agent]
    tunnel_types = vxlan
    l2_population = True
    enable_distributed_routing = True
    [securitygroup]
    enable_security_group = True

Other nodes:

# grep -v ^# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini | grep -v ^$
    [ovs]
    enable_tunneling = True
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 10.0.0.222
    [agent]
    polling_interval = 2
    tunnel_types = vxlan
    vxlan_udp_port = 4789
    l2_population = True
    arp_responder = False
    enable_distributed_routing = True
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Comments

Assuming ML2 + OVS >= 2.1. On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

[agent]
arp_responder = True
dbaxps ( 2014-11-29 00:19:45 -0500 )

3 answers

answered 2014-11-29 00:39:20 -0500 by dbaxps (updated 2014-11-30 09:07:20 -0500)

You need to enable the arp_responder flag in the [agent] section of the conf file you pass to the OVS agent on each compute node, assuming ML2 + OVS >= 2.1:

[agent]
arp_responder = True
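
Then restart the OVS agent on each node so the change takes effect, and check that the tunnel ports come back; a minimal sketch, assuming RDO service naming:

systemctl restart neutron-openvswitch-agent
ovs-vsctl show | grep vxlan    # vxlan-XXXXXXXX ports should reappear under br-tun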

On Network Node

[root@juno1 ~(keystone_admin)]# cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True

On each Compute Node

[root@juno2 openvswitch]# cat ovs_neutron_plugin.ini
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.168.0.137
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

See OVS ARP Responder – Theory and Practice

Verifying the status of neutron-openvswitch-agent on a particular compute node:

[root@juno1 ~(keystone_admin)]# neutron agent-list | grep juno2.localdomain
| cca01c72-975c-40f7-a4e8-c9d258fced3c | Open vSwitch agent | juno2.localdomain | :-)   | True           | neutron-openvswitch-agent |
[root@juno1 ~(keystone_admin)]# neutron agent-show  cca01c72-975c-40f7-a4e8-c9d258fced3c
+---------------------+--------------------------------------------+
| Field               | Value                                      |
+---------------------+--------------------------------------------+
| admin_state_up      | True                                       |
| agent_type          | Open vSwitch agent                         |
| alive               | True                                       |
| binary              | neutron-openvswitch-agent                  |
| configurations      | {                                          |
|                     |      "arp_responder_enabled": true,        |
|                     |      "tunneling_ip": "192.168.0.137",      |
|                     |      "devices": 4,                         |
|                     |      "l2_population": true,                |
|                     |      "tunnel_types": [                     |
|                     |           "vxlan"                          |
|                     |      ],                                    |
|                     |      "enable_distributed_routing": false,  |
|                     |      "bridge_mappings": {                  |
|                     |           "physnet1": "br-ex"              |
|                     |      }                                     |
|                     | }                                          |
| created_at          | 2014-11-01 12:52:46                        |
| description         |                                            |
| heartbeat_timestamp | 2014-11-30 08:48:51                        |
| host                | juno2.localdomain                          |
| id                  | cca01c72-975c-40f7-a4e8-c9d258fced3c       |
| started_at          | 2014-11-30 05:59:50                        |
| topic               | N/A                                        |
+---------------------+--------------------------------------------+
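
You can also check that l2population actually programmed the tunnel bridge. On the Juno OVS agent the per-MAC unicast entries land in table 20 of br-tun and the flood entries in table 22 (table numbers per the upstream agent constants):

ovs-ofctl dump-flows br-tun table=20    # unicast: known MAC -> specific tunnel port
ovs-ofctl dump-flows br-tun table=22    # flood: broadcast/multicast/unknown unicast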

Comments

Thanks, that solved it for me!

rodr1go ( 2015-08-04 02:29:14 -0500 )
answered 2015-02-04 22:29:25 -0500

Hello,

I'm hitting the same problem, but the difference is that I use GRE tunnels.

Once l2population is enabled, all of the nodes lose the tunnel connection information on the br-tun OVS bridge, but when l2population is disabled all of the tunnel info comes back. What am I missing?

I have added arp_responder = True but it still doesn't work.

[agent]
arp_responder = True
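
A quick way to check whether the tunnel info is back (a sketch; with GRE the tunnel ports are named gre-XXXXXXXX rather than vxlan-XXXXXXXX):

ovs-vsctl show | grep gre-              # tunnel ports appear once endpoints are synced
ovs-ofctl dump-flows br-tun table=22    # flood entries that l2population should install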

[root@compute1 ~]# ovs-vsctl show

4e2b9403-3251-481a-86eb-2a9272c9cf09
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port "fg-a67ad606-43"
            Interface "fg-a67ad606-43"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-85097111-fe"
            tag: 1
            Interface "qr-85097111-fe"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvofb7c2370-ee"
            tag: 1
            Interface "qvofb7c2370-ee"
    ovs_version: "2.3.1-git3282e51"

[root@network0 ~]# ovs-vsctl show

d21e3790-d015-44e4-ab97-8ee91beda7b2
    Bridge br-ex
        Port "qg-4cf25f12-ed"
            Interface "qg-4cf25f12-ed"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-85097111-fe"
            tag: 1
            Interface "qr-85097111-fe"
                type: internal
        Port "tap6dac23c1-16"
            tag: 1
            Interface "tap6dac23c1-16"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "sg-faa9cf1a-54"
            tag: 1
            Interface "sg-faa9cf1a-54"
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.3.1-git3282e51"

[root@compute1 ~]# cat /etc/neutron/neutron.conf |grep -v ^$|grep -v ^#

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller0
rabbit_password = openstack
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
dvr_base_mac = fa:16:3f:00:00:00
router_distributed = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller0:5000/v2.0
identity_uri = http://controller0:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
[database]
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

[root@compute1 ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -v ^$|grep -v ^#

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.31
enable_tunneling = True
[agent]
tunnel_types = gre
l2_population = True
enable_distributed_routing = True
arp_responder = True

[root@compute1 ~]# cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini |grep -v ^$|grep -v ^#

[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.1.31
[agent]
enable_distributed_routing = True
arp_responder = True
l2_population = True
[securitygroup]

[root@compute1 ~]# cat /etc/neutron/l3_agent.ini |grep -v ^$|grep -v ^#

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
verbose = True
debug = True
agent_mode = dvr

[root@network0 ~]# cat /etc/neutron/neutron.conf |grep -v ^$|grep -v ^#

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller0
rabbit_password = openstack
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
dvr_base_mac = fa:16:3f:00:00:00
router_distributed = True
debug = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller0:5000/v2 ...

Comments

Thank you so much for helping me solve my problem. I was puzzled why my VXLAN tunnel endpoint was missing, and your insight about them coming and going with l2_population being set to False/True led me to a solution! (I ended up setting l2_population=False)

m01 ( 2015-05-13 02:00:00 -0500 )
answered 2014-11-30 20:47:15 -0500 by Anil (updated 2014-11-30 21:52:11 -0500)

Thanks man, it works like a charm. Just a single mistake, and the whole day gone on troubleshooting.
But now the bridge below (br-tun) is down on all compute and network nodes:

[root@network-7 ~]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000f260ef3bd14f
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(patch-int): addr:12:88:04:dd:52:4c
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(vxlan-0a0000df): addr:a6:42:b1:61:f5:4c
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(vxlan-0a0000de): addr:6a:0d:56:15:f7:c5
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:f2:60:ef:3b:d1:4f
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

[root@network-7 ~]# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000d617f453b447
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(tapc42108d6-e1): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 3(tap0d2994d8-64): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 4(patch-tun): addr:6e:7f:7b:5d:c7:22
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 5(qr-4c3aca29-b5): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 6(sg-82f37f66-a0): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 7(qr-d780e02e-3b): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 8(sg-99ca382c-fb): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:d6:17:f4:53:b4:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
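
A quick way to check whether traffic still flows despite the PORT_DOWN state on the internal ports (a sketch):

ip link show br-tun            # the OVS internal port; admin DOWN here is common
ovs-ofctl dump-flows br-tun    # non-zero n_packets counters => the datapath is forwarding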

Comments

Disable l2population and verify with ovs-ofctl show br-tun.

dbaxps ( 2014-11-30 23:23:57 -0500 )

Same even after l2pop is disabled. Looks like the problem is somewhere else. Is it supposed to be down, or is something wrong with the config? As per the link below, it should be down: http://techbackground.blogspot.sg/201...

Anil ( 2014-12-01 01:56:30 -0500 )

I checked my previous Icehouse setup, and both br-int and br-tun are up.

Anil ( 2014-12-01 01:57:40 -0500 )

