VXLAN tunnels on br-tun by Mgmt and Data Interfaces [closed]

asked 2015-05-06 08:59:03 -0500

updated 2015-05-08 17:34:55 -0500


Hi all,

We have 1 controller node and 2 compute nodes; one of the compute nodes also acts as the network node. On the compute nodes we originally had only 1 mgmt interface each, with 10.50.0.23X addressing, and the br-tun VXLAN tunnels were formed over these interfaces.

We just added a new interface on both compute nodes, with 10.13.111.12X IP addresses. We then changed the br-tun VXLAN local_ip to the new addresses, but the br-tun VXLAN tunnels are still also pointing to the old interfaces.

Here is the OVS plugin .ini config file of the compute-02 node (all the outputs below are from this node):

[15:46:46][root@ostack-02 ~]$  cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini | grep -v -P "#|^$"
[ovs]
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.13.111.122
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[15:47:10][root@ostack-02 ~]$
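Side note: after local_ip is changed, the agent reports the new address to the Neutron server in its next state report. If it's useful, the server-side view can be checked with the neutron CLI (the agent ID below is a placeholder, not from our setup):

# Find this host's OVS agent, then inspect its configuration;
# the "tunneling_ip" field should show the new address, 10.13.111.122.
neutron agent-list | grep "Open vSwitch agent"
neutron agent-show <agent-id>   # <agent-id> is a placeholder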

In the following output you can see a sample of the ifconfig, with eth2.50 used for mgmt and eth4.917 used for data:

eth2.50: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.50.0.232  netmask 255.255.255.0  broadcast 10.50.0.255
        ether f8:bc:12:b2:d8:90  txqueuelen 0  (Ethernet)
eth4.917: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.13.111.122  netmask 255.255.255.0  broadcast 10.13.111.255
        ether f8:bc:12:b2:de:42  txqueuelen 0  (Ethernet)

Here you can see what br-tun looks like, with 3 different VXLAN tunnels:

[15:25:07][root@ostack-02 ~]$ ovs-vsctl show
d9e769f8-103e-4ccc-9714-025a1181fd0e
    Bridge br-tun
        Port "vxlan-0a0d6f79"
            Interface "vxlan-0a0d6f79"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.111.122", out_key=flow, remote_ip="10.13.111.121"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a3200e8"
            Interface "vxlan-0a3200e8"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.111.122", out_key=flow, remote_ip="10.50.0.232"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a3200e7"
            Interface "vxlan-0a3200e7"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.111.122", out_key=flow, remote_ip="10.50.0.231"}
    Bridge br-int
        fail_mode: secure
        Port "qvo6307173b-dd"
            tag: 1
            Interface "qvo6307173b-dd"
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.3"
[15:25:07][root@ostack-02 ~]$
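A side note that helps with reading the output above: the OVS agent names each tunnel port after the hex encoding of the remote endpoint's IP address, so the port names can be decoded back to their peers. A minimal sketch in plain shell, using the port names from the output above:

# vxlan-<hex> decodes byte-by-byte into the remote_ip:
printf '%d.%d.%d.%d\n' 0x0a 0x0d 0x6f 0x79   # vxlan-0a0d6f79 -> 10.13.111.121 (the peer's new data IP)
printf '%d.%d.%d.%d\n' 0x0a 0x32 0x00 0xe7   # vxlan-0a3200e7 -> 10.50.0.231  (old mgmt IP of the peer)
printf '%d.%d.%d.%d\n' 0x0a 0x32 0x00 0xe8   # vxlan-0a3200e8 -> 10.50.0.232  (this host's own old mgmt IP)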

We've set up two VMs on different nodes and they can ping each other, so the br-tun and VXLAN communication between the compute hosts is working.

So my questions are:

  • As far as I understand, there should be only one VXLAN interface under br-tun: the one over the eth4.917 data interface (local_ip="10.13.111.122", remote_ip="10.13.111.121"). Am I right?
  • If the above is right, why do the other two interfaces exist, and what is their purpose? Is it due to a misconfiguration? (See the note right after this list on why removing them by hand doesn't stick.)
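
Note on the above: as far as I can tell, deleting the stale ports by hand is only a temporary fix, because the OVS agent re-creates tunnel ports from the endpoint list it receives from the Neutron server. A sketch of that (ineffective) cleanup, using port names from the ovs-vsctl output above:

# Removing a stale tunnel port works only until the next tunnel sync:
ovs-vsctl del-port br-tun vxlan-0a3200e7
ovs-vsctl del-port br-tun vxlan-0a3200e8
# The ports reappear if the old endpoints are still registered
# on the server side:
ovs-vsctl list-ports br-tun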

We've tried to ...


Closed for the following reason: the question is answered, right answer was accepted by smaffulli
close date 2015-05-08 17:35:26.117271


1 answer


answered 2015-05-08 04:05:02 -0500

Thanks for the bug link! That was it :).

Had to change the table to "ml2_vxlan_endpoints" because we use VXLAN tunnels instead of GRE tunnels.

[10:45:12][root@ostack-02 ~]$ mysql -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 6143
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use neutron;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [neutron]> select * from ml2_vxlan_endpoints ;
+---------------+----------+
| ip_address    | udp_port |
+---------------+----------+
| 10.13.111.121 |     4789 |
| 10.13.111.122 |     4789 |
| 10.50.0.231   |     4789 |
| 10.50.0.232   |     4789 |
+---------------+----------+
4 rows in set (0.00 sec)

MariaDB [neutron]>
MariaDB [neutron]> delete from ml2_vxlan_endpoints where ip_address = "10.50.0.231";
Query OK, 1 row affected (0.00 sec)

MariaDB [neutron]> 
MariaDB [neutron]> delete from ml2_vxlan_endpoints where ip_address = "10.50.0.232";
Query OK, 1 row affected (0.00 sec)

MariaDB [neutron]> select * from ml2_vxlan_endpoints ;
+---------------+----------+
| ip_address    | udp_port |
+---------------+----------+
| 10.13.111.121 |     4789 |
| 10.13.111.122 |     4789 |
+---------------+----------+
2 rows in set (0.00 sec)

MariaDB [neutron]>

Also performed a "systemctl restart neutron-openvswitch-agent" on the compute nodes and we're done!
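
In case it helps others, a quick way to verify that the cleanup stuck after the agent restart (only the bridge name is specific to this setup):

# After the restart the agent re-syncs its endpoint list; only the
# 10.13.111.x tunnels should remain on the tunnel bridge:
ovs-vsctl list-ports br-tun
ovs-vsctl show | grep remote_ip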

Thanks!

