
Neutron: VXLAN in Havana

asked 2013-11-14 20:58:14 -0600 by slogan621

updated 2013-11-15 16:36:17 -0600

Has anyone been able to successfully run either OVS or ML2 with VXLAN networking and can help me understand what pitfalls I might run into in doing so?

I have a stable/havana cluster that works great with GRE running under both OVS and ML2 (I've since switched to ML2; chasing issues in a deprecated plugin just seems like a fool's endeavour :-)

I'm devstack-based and basically follow the directions on the ML2 wiki.
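For reference, the relevant localrc settings for ML2 with VXLAN looked roughly like the sketch below. The variable names are from the Havana-era devstack as best I recall, so treat them as assumptions and verify against the ML2 wiki:

# localrc sketch for ML2 + VXLAN (Havana-era devstack; variable names are assumptions)
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ENABLE_TENANT_TUNNELS=True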

I see the usual br-tun, br-int, and associated vxlan port on both controller and compute node, e.g.,

sudo ovs-vsctl show
a426636d-4a2d-4a25-bc5b-8945409e1cab
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-192.168.3.3"
            Interface "vxlan-192.168.3.3"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.2", out_key=flow, remote_ip="192.168.3.3"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvob86969c3-c8"
            tag: 1
            Interface "qvob86969c3-c8"
        Port "qvo9da1c5b9-44"
            tag: 1
            Interface "qvo9da1c5b9-44"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvo9c11361a-7f"
            tag: 1
            Interface "qvo9c11361a-7f"
        Port "qvo483db35f-ca"
            tag: 1
            Interface "qvo483db35f-ca"
    ovs_version: "1.10.2"

This mirrors what I would see for GRE (which works), except of course the interface type on the tunnel port would be "gre". As you can see, I'm using OVS 1.10.2, which has VXLAN support (bigger picture: I'm on Ubuntu 13.10 saucy, which supplies that version of OVS).

I can fire up VMs and see, via the console log, VMs on the compute node try and fail to get DHCP, and I also see VXLAN-encapsulated packets arriving on the controller from the compute node during the process. After poking around on Google for discussions of VXLAN and openvswitch/ml2/neutron, I thought there might be an MTU issue, but inspecting interfaces (both OVS and physical) on the compute and controller nodes I see no indication of packet drops (at least, no reports of packet loss using ifconfig or ovs-dpctl), and I'm guessing that if MTU were an issue, I'd be seeing drops (right?).
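For anyone retracing this, the drop counters were checked with commands along these lines (the NIC name eth1 is a placeholder for whichever interface carries the 192.168.3.x tunnel traffic):

# per-interface RX/TX and drop counters on the physical NIC (eth1 is a placeholder)
ip -s link show eth1
# OVS datapath statistics: flow lookup hits/misses/lost
sudo ovs-dpctl show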

I'm still getting my feet wet with VXLAN and still learning about Linux networking, but for the better part of the year I've had little problem with GRE in folsom, grizzly, and (now) havana, so I'm thinking I'm just missing something that may be obvious to others regarding VXLAN. Clearly, just telling ML2 that I want to use VXLAN isn't sufficient.

ovs-vsctl from the controller:

5ee4d85c-f0c9-4ccc-be1a-a4ea685c1c8e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvoc7ea9e28-d8"
            tag: 1
            Interface "qvoc7ea9e28-d8"
        Port "qvoe6f6e5fa-31"
            tag: 1
            Interface "qvoe6f6e5fa-31"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo24371d66-7a"
            tag: 1
            Interface "qvo24371d66-7a"
        Port "qvoe9f82b47-b3"
            tag: 1
            Interface "qvoe9f82b47-b3"
        Port "tapbbdf3a9d-c9"
            tag: 1
            Interface "tapbbdf3a9d-c9"
                type: internal
    Bridge br-tun
        Port "vxlan-192.168.3.2"
            Interface "vxlan-192.168.3.2"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.3", out_key=flow, remote_ip="192.168 ...

5 answers


answered 2013-11-19 05:41:47 -0600 by Jakub

updated 2013-11-20 07:09:17 -0600

I installed the OVS plugin with both VXLAN and GRE. Both work similarly; switching between GRE and VXLAN is done only in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. The [agent] part is important, because it is missing from the OpenStack documentation.

[ovs]
tunnel_type = vxlan            # or gre
tenant_network_type = vxlan    # or gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.13.18

[agent]
tunnel_types = vxlan           # or gre

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
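The changes only take effect once the services are restarted. On Ubuntu the service names are typically the following (verify the exact names for your packages):

# restart the OVS agent on every node running it, and neutron-server on the controller
sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-server restart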

Today we successfully ran the ML2 plugin with VXLAN on Ubuntu. There is no additional package to install, because ML2 is integrated into neutron-server. We have 1 CTL, 1 NTW, and 2 CP nodes, and for the ML2 plugin only the CTL node, where neutron-server is installed, must be configured.

1) Edit /etc/default/neutron-server

# defaults for neutron-server
# path to config file corresponding to the core_plugin specified in
# neutron.conf
#NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"

2) Set up service_plugins inside /etc/neutron/neutron.conf

# Advanced service modules
# service_plugins =
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

3) Create the file /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan

type_drivers = vxlan,local

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
tenant_network_types = vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =
#tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =
#vni_ranges=400:500

vni_ranges=1:1000

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

After these settings, everything should work!
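One step worth making explicit: neutron-server has to be restarted so it picks up ml2_conf.ini, and the agents can then be checked from the CLI (service name as on Ubuntu-era packages):

sudo service neutron-server restart
# each OVS agent should then report as alive with a :-) in the output
neutron agent-list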


answered 2013-11-17 01:21:51 -0600

We had almost exactly the same issue; it turned out that networks created through the GUI defaulted to the "local" type and not gre. In your case I expect they were gre and not vxlan. A quick look in the DB will show you.
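If you want to check, the network type bindings can be read straight from the neutron database. The table names below are from my reading of the Havana schema (ml2_network_segments for ML2, ovs_network_bindings for the OVS plugin), so treat them as assumptions:

# one row per network segment, with its type and VNI/tunnel id (ML2 schema assumed)
mysql -u root -p neutron -e "SELECT network_id, network_type, segmentation_id FROM ml2_network_segments;"
# OVS plugin equivalent (also an assumption): SELECT * FROM ovs_network_bindings;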


Comments

Thanks. In this case, I wasn't using a GUI to install and configure. Rather, I was using devstack, a project that provides shell scripts designed to install and configure OpenStack nodes. Turns out my bug was a bad OVS install, or an issue fixed by pulling the latest havana.

slogan621 (2013-11-18 12:03:09 -0600)

answered 2013-11-15 21:17:27 -0600 by slogan621

It's working :-) The lowdown: either the OVS installation on the compute node was messed up somehow, or switching to stable/havana fixed the problem. I re-imaged my blade with Ubuntu 13.10 and re-ran stack using the same localrc I was already using, and now my VMs are able to DHCP and get IP addresses.


answered 2013-11-15 04:20:19 -0600 by darragh-oreilly

updated 2013-11-16 02:39:46 -0600

I think with Havana and OVS (ML2 or OVS plugin), VXLAN works the same way as GRE. VXLAN is just being used as a different framing protocol - IP/UDP vs IP/GRE.

It can't be anything to do with MTU, as DHCP packets are very small. Also, OVS 1.10.0+ now does IP fragmentation, and that problem only affected some Internet sites that ignored the ICMP errors.

I guess it's something to do with the flows on the controller's br-tun. Can you append the output of sudo ovs-ofctl dump-flows br-tun to the question?

Also try running that command, get the instance to do more DHCP requests (from the VNC console), re-run the command, and compare the statistics to see which flows are being hit, as sketched below.
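Something like this makes the comparison easy (a sketch; trigger the DHCP requests between the two snapshots):

sudo ovs-ofctl dump-flows br-tun > /tmp/flows.before
# ... retry DHCP from the instance's VNC console ...
sudo ovs-ofctl dump-flows br-tun > /tmp/flows.after
# flows whose n_packets counters changed are the ones being hit
diff /tmp/flows.before /tmp/flows.after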

Also, can you include sudo ovs-vsctl show from the controller?

Update: the second flow in table 0 (the one that matches port 2, the VXLAN port) is not being hit. Linux does not seem to be delivering the incoming VXLAN packets to it. VXLAN uses UDP port 4789 by default, so this should show something: netstat -an | grep 4789
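To double-check that the encapsulated packets actually reach the node, a capture on the tunnel NIC helps (eth1 below is a placeholder for the interface holding 192.168.3.3):

# VXLAN rides on UDP 4789 by default; you should see packets here during DHCP retries
sudo tcpdump -ni eth1 udp port 4789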

Ubuntu 13.10 was probably not used by Neutron developers during the Havana cycle. What kernel is it? You might try some different kernel versions or upgrade OVS. I think 13.10 uses the 3.11 kernel, and the OVS FAQ says it requires OVS 2.1.x.
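The versions in play can be confirmed quickly (as far as I know, modinfo only reports a version for the out-of-tree dkms datapath module):

uname -r                                # kernel version
sudo ovs-vsctl --version                # OVS userspace version
modinfo openvswitch | grep -i version   # kernel datapath module version, if dkms-built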


Comments

Thanks for the response. Updated the original question with the info you requested. Note again the ovs-vsctl show output is similar to what I get for GRE except for the different interface type on the GRE/VXLAN port. I'm going to rerun GRE and look at flows for diffs. What do you expect to see?

slogan621 (2013-11-15 12:57:26 -0600)

answered 2014-01-10 08:12:52 -0600 by ben_tremblay

I tried a similar devstack configuration using ML2, OVS, and VXLAN on Ubuntu 13.10. I had some problems with OVS not creating the tunnels properly. After some troubleshooting without success, I finally updated Open vSwitch to a more recent version. The key package to update was openvswitch-datapath-dkms, which seemed to be faulty on my Ubuntu distribution. Now using OVS 1.10.2, and everything works smoothly.
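For anyone hitting the same thing, the update amounted to something like the following (package name as shipped on Ubuntu 13.10):

# rebuild/refresh the kernel datapath module and confirm package versions match
sudo apt-get install --reinstall openvswitch-datapath-dkms
dpkg -l | grep openvswitch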

