
vxlan peers not being created (mitaka openvswitch agent native driver)

asked 2016-05-26 22:53:21 -0500 by don (updated 2019-01-07 15:10:00 -0500)

My vxlan peers are not coming up (not being created). Below is the config of a compute node. There are no errors; it just doesn't create the vxlan tunnel interfaces. Can you suggest where to look?

(This is on Ubuntu 16.04 with Mitaka.)

If I look in the database:

mysql> select * from ml2_vxlan_endpoints;
+---------------+----------+---------+
| ip_address    | udp_port | host    |
+---------------+----------+---------+
| 10.100.10.122 |     4789 | nubo-19 |
| 10.100.10.123 |     4789 | nubo-20 |
| 10.100.10.124 |     4789 | nubo-21 |
| 10.100.10.126 |     4789 | nubo-23 |
| 10.100.10.64  |     4789 | nubo-5  |
| 10.100.10.65  |     4789 | nubo-6  |
| 10.100.10.67  |     4789 | nubo-8  |
| 10.100.10.68  |     4789 | nubo-9  |
| 10.100.10.75  |     4789 | nubo-16 |
| 167.88.32.134 |     4789 | nubo-4  |
+---------------+----------+---------+
10 rows in set (0.00 sec)

The node with the config below is nubo-23 (they are all configured the same).

I don't see any errors in the logs. Not a single one.

I'm expecting to see an entry in br-tun like:


Bridge br-tun
    Port "vxlan-0a640a97"
        Interface "vxlan-0a640a97"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="xx.xx.xx.xx", out_key=flow, remote_ip="10.100.10.151"}

but I have no entries.
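
A quick way to confirm is to list br-tun's ports; on this node that returns only the patch port, with no vxlan-* entries:

ovs-vsctl list-ports br-tun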

Only one VNI is allocated:

mysql> select * from ml2_vxlan_allocations where allocated = 1;
+-----------+-----------+
| vxlan_vni | allocated |
+-----------+-----------+
|        63 |         1 |
+-----------+-----------+
1 row in set (0.00 sec)

ml2_conf.ini::

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = *

[ml2_type_vxlan]
vni_ranges = 1:32000
#vxlan_group = 239.1.1.2

[securitygroup]
enable_security_group = false

openvswitch_agent.ini::

[agent]
arp_responder = True
l2_population = True
minimize_polling = True
ovsdb_monitor_respawn_interval = 30
polling_interval = 5
prevent_arp_spoofing = False
tunnel_types = vxlan

[ovs]
bridge_mappings = physnet1:br-ex
enable_tunneling=True
local_ip = 10.100.10.126
of_interface = native
ovsdb_connection = tcp:127.0.0.1:6640
ovsdb_interface = native
tenant_network_type = vxlan
tunnel_id_ranges = 1:32000
tunnel_type = vxlan

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
#firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = False

ovs-vsctl output::

1b3be5ce-7ae1-41b1-9bc0-7c4e2e088414
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "enp3s0f1"
            Interface "enp3s0f1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.5.0"

ifconfig::

br-ex     Link encap:Ethernet  HWaddr d0:43:1e:d6:03:35 
          inet addr:10.100.10.126  Bcast:10.100.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:295129 errors:0 dropped:4 overruns:0 frame:0
          TX packets:45436 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:27115572 (27.1 MB)  TX bytes:9312921 (9.3 MB)

br-int    Link encap:Ethernet  HWaddr b2:e7:33:0a:8c:4f 
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric ...
(output truncated)

Comments

The tunnel interfaces are created when needed. Did you try starting a VM? There are no VM ports in your br-int.

darragh-oreilly ( 2016-05-27 08:18:12 -0500 )
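
To test that, a minimal sketch (the image, flavor, and net id below are placeholders, not values from this thread):

# Boot a throwaway instance on this compute node, then re-check the bridges.
nova boot --flavor m1.tiny --image cirros --nic net-id=<tenant-net-uuid> test-vm
# A tap device should appear in br-int, and the agent should then create
# vxlan-* tunnel ports in br-tun:
ovs-vsctl list-ports br-int
ovs-vsctl list-ports br-tun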

Good point. I just tried spinning up an instance; now I have a tap#### port in br-int, but still no vxlan tunnel in br-tun.

don ( 2016-05-27 16:51:32 -0500 )

3 answers


answered 2016-05-29 14:51:08 -0500 by don

OK, so the problem was kind of stupid :) When neutron-server was started, it was missing --config=...ml2 --config=...openvswitch, i.e. it had /etc/neutron/neutron.conf but not the ml2/openvswitch ini files. This causes no errors anywhere; the server just doesn't know what to do, so it cowardly gives up :)
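
For reference, a sketch of what the corrected invocation should look like (the exact paths were elided above; these assume the stock Ubuntu/Mitaka package layout):

neutron-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
    --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini

On Ubuntu these arguments normally come from the packaged service definition, so it is worth checking how the service was actually launched.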


Comments

Hi Don, I'm using Mitaka with OVS and VXLAN: 2 physical servers, controller+network+compute on one server and compute on the other. Booting a VM on the first server works perfectly, but on the compute node the VM gets no IP. When I execute "ovs-vsctl show" there are no vxlan peers. Did you finally solve it?

feyman2009 ( 2017-02-03 20:20:42 -0500 )
answered 2016-05-27 02:42:24 -0500 by MIB

Try the config below to enable L2 population.

ml2_conf.ini::

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
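
Note that for l2population to take effect, the agent side needs the matching settings as well (your openvswitch_agent.ini above already has them):

[agent]
l2_population = True
arp_responder = True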


Comments

OK, I tried that, but no difference.

don ( 2016-05-27 07:18:03 -0500 )

Hope you tried the command below after making changes in /etc/..../ml2_conf.ini:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

MIB ( 2016-05-30 02:24:37 -0500 )
answered 2016-05-27 00:39:41 -0500 by dbaxps (updated 2019-01-07 15:11:14 -0500 by don)

You wrote:

Bridge br-tun
    Port "vxlan-0a640a97"
        Interface "vxlan-0a640a97"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="xx.xx.xx.xx", out_key=flow, remote_ip="10.100.10.151"}

VTEPs across the whole landscape are usually located on the same network, say 10.100.10.0/24. If, for instance, eth1 is supposed to carry the VTEPs, the corresponding IPs are supposed to be assigned before deployment starts (at least on RDO).
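
For example, a minimal sketch (eth1 and the address are illustrative; whatever address is used must match local_ip in openvswitch_agent.ini, and a real deployment would make it persistent in the distro's network config rather than adding it by hand):

ip addr add 10.100.10.126/24 dev eth1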

Comments

This example is from my working system, where the host has 2 IPs and it's picking the other one for the endpoint. But that system is working. There is no requirement for vxlan peers to be on a single subnet.

don ( 2016-05-27 07:16:27 -0500 )
