[SOLVED] [OVS] VXLAN tunnels on br-tun by Mgmt and Data Interfaces

Hi all,

we have 1 controller node + 2 compute nodes, one of which also acts as the network node. On the compute nodes we originally had only 1 mgmt interface, with 10.50.0.23X addressing, and the br-tun VXLAN tunnels were formed over these interfaces.

We just added a new interface on both compute nodes with a 10.13.111.12X IP address. We then changed the br-tun VXLAN local_ip to the new addresses, but the br-tun VXLAN tunnels still point to the old interfaces as well.

Here is the OVS plugin .ini config file of the compute-02 node (all the outputs below are from this node):

[15:46:46][root@ostack-02 ~]$  cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini | grep -v -P "#|^$"
[ovs]
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.13.111.122
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[15:47:10][root@ostack-02 ~]$
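
Since local_ip is the only [ovs] option that steers the tunnel source address, one quick way to double-check which local_ip/remote_ip pair each br-tun port actually carries is sketched below (just a sketch with standard ovs-vsctl; the loop is only illustrative):

# Print the type and options (local_ip/remote_ip) of every port on br-tun
for p in $(ovs-vsctl list-ports br-tun); do
    echo "== $p"
    ovs-vsctl get interface "$p" type options
done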

In the following ifconfig sample you can see eth2.50, used for mgmt, and eth4.917, used for data:

eth2.50: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.50.0.232  netmask 255.255.255.0  broadcast 10.50.0.255
        ether f8:bc:12:b2:d8:90  txqueuelen 0  (Ethernet)
eth4.917: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.13.111.122  netmask 255.255.255.0  broadcast 10.13.111.255
        ether f8:bc:12:b2:de:42  txqueuelen 0  (Ethernet)

Here you can see what br-tun looks like, with 3 different VXLAN tunnels:

[15:25:07][root@ostack-02 ~]$ ovs-vsctl show
d9e769f8-103e-4ccc-9714-025a1181fd0e
    Bridge br-tun
        Port "vxlan-0a0d6f79"
            Interface "vxlan-0a0d6f79"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.111.122", out_key=flow, remote_ip="10.13.111.121"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a3200e8"
            Interface "vxlan-0a3200e8"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.111.122", out_key=flow, remote_ip="10.50.0.232"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a3200e7"
            Interface "vxlan-0a3200e7"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.111.122", out_key=flow, remote_ip="10.50.0.231"}
    Bridge br-int
        fail_mode: secure
        Port "qvo6307173b-dd"
            tag: 1
            Interface "qvo6307173b-dd"
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.3"
[15:25:07][root@ostack-02 ~]$
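
If it helps to read the output above: the agent apparently names each tunnel port "vxlan-" followed by the remote IP in hex, so the remote endpoint can be decoded straight from the port name (a small printf sketch, only for illustration):

printf '%d.%d.%d.%d\n' 0x0a 0x0d 0x6f 0x79   # vxlan-0a0d6f79 -> 10.13.111.121 (new data IP of the other compute node)
printf '%d.%d.%d.%d\n' 0x0a 0x32 0x00 0xe8   # vxlan-0a3200e8 -> 10.50.0.232  (old mgmt IP of this very node)
printf '%d.%d.%d.%d\n' 0x0a 0x32 0x00 0xe7   # vxlan-0a3200e7 -> 10.50.0.231  (old mgmt IP of the other compute node)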

We've set up two VMs on different nodes and they can ping each other, so the br-tun and VXLAN communication between the compute hosts is working.
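
To confirm which underlay interface that traffic actually rides on, one option (a sketch, assuming tcpdump is installed on the hosts) is to capture VXLAN traffic on each candidate interface while the VMs ping each other:

# Run on a compute node while the VMs ping each other; packets should show up
# on the data interface (eth4.917) and not on the mgmt interface (eth2.50)
tcpdump -n -i eth4.917 udp port 4789
# ...and for comparison:
tcpdump -n -i eth2.50 udp port 4789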

So my questions are:

  • As far as I understand, there should be only one VXLAN interface under br-tun, the one going over the eth4.917 data interface (local_ip="10.13.111.122", remote_ip="10.13.111.121"). Am I right?
  • If the above is right, what is the purpose of the other two interfaces and why are they there? Are they due to a misconfiguration?

We've tried restarting the OVS agents and rebooting all of the servers, but the 3 VXLAN tunnels are still shown.
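
For reference, a possible way to clean this up by hand (just a sketch, not something we have verified) would be to drop the stale ports locally and then check whether the old 10.50.0.23X addresses are still registered as tunnel endpoints on the Neutron server, since the agent presumably rebuilds its tunnel list from that registry on every sync. The table name below assumes the ML2 plugin and may differ by release:

# On the compute node: drop the two tunnels that point at the old mgmt IPs
# (the agent may recreate them if the old endpoints are still registered)
ovs-vsctl del-port br-tun vxlan-0a3200e8
ovs-vsctl del-port br-tun vxlan-0a3200e7

# On the controller: see whether the old endpoints are still in the database
mysql -e "SELECT * FROM neutron.ml2_vxlan_endpoints;"
# If 10.50.0.231/10.50.0.232 are listed there, deleting those rows and
# restarting the OVS agents should stop the stale tunnels from coming back.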

If you need any additional info, please let me know.

Thanks in advance, Carlos
