
ML2 neutron plugin installation and configuration

asked 2013-11-05 16:09:23 -0500

Jakub


Has anyone ever tried to install and configure the ML2 plugin with OVS GRE/VXLAN/VLAN? Can you please provide some install instructions, because there is nothing in the documentation?

I tested the OVS plugin with GRE and VXLAN and it works! Now I would like to use the ML2 plugin.



Can you please elaborate on how you tested the OVS plugin with GRE and VXLAN?

Amitabh Sinha ( 2014-08-21 12:33:26 -0500 )

2 answers


answered 2013-11-19 05:13:30 -0500

Jakub

updated 2013-11-20 07:00:31 -0500

Hi, today we successfully ran the ML2 plugin with VXLAN on Ubuntu. There is no additional package to install, because it is integrated into neutron-server. We have 1 CTL, 1 NTW and 2 CP nodes, and for the ML2 plugin only the CTL node, where neutron-server is installed, must be configured.

1) Edit /etc/default/neutron-server

# defaults for neutron-server
# path to config file corresponding to the core_plugin specified in
# neutron.conf
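(The actual setting appears to have been stripped by the forum's markup; on Ubuntu this file sets `NEUTRON_PLUGIN_CONFIG`, and given step 3 below it was presumably the following, where the path is an assumption:)

```sh
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
```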

2) Setup service_plugins inside of /etc/neutron/neutron.conf

# Advanced service modules
# service_plugins =
service_plugins =
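(The value for `service_plugins` appears to have been eaten by the forum's markup. With ML2, L3 routing moves out of the core plugin, so on Havana the value was most likely the L3 router service plugin; this is an assumption based on the ML2 setups of that era:)

```ini
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
```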

3) Create file /etc/neutron/plugins/ml2/ml2_conf.ini

# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan

type_drivers = vxlan,local

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
tenant_network_types = vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch

# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =
#tunnel_id_ranges = 1:1000

# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
# vni_ranges =


# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
# vxlan_group =
# Example: vxlan_group =
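Note that for VXLAN tenant networks, `vni_ranges` above cannot stay commented out: without at least one range, no VNIs are available for tenant network allocation. A likely working value (an assumption, mirroring the commented GRE example) is:

```ini
vni_ranges = 1:1000
```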

After these settings, everything should work!
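As a quick sanity check before restarting neutron-server, a minimal sketch (using only Python's stdlib `configparser`; the section names follow the standard ml2_conf.ini layout) that verifies the options above are mutually consistent:

```python
# Sanity-check an ml2_conf.ini-style configuration, stdlib only.
import configparser

SAMPLE = """
[ml2]
type_drivers = vxlan,local
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

type_drivers = cfg.get("ml2", "type_drivers").split(",")
tenant_types = cfg.get("ml2", "tenant_network_types").split(",")

# Every tenant network type must also be loaded as a type driver.
missing = [t for t in tenant_types if t not in type_drivers]
assert not missing, f"tenant types without a type driver: {missing}"

# VXLAN tenant networks need at least one VNI range to allocate from.
if "vxlan" in tenant_types:
    assert cfg.get("ml2_type_vxlan", "vni_ranges").strip(), "vni_ranges must be set"

print("ml2 config looks consistent")
```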



Hi, I tried it on CentOS, but if I configure neutron with ML2, neutron-server stops working. I have a workaround, but it is not a permanent solution. You can check it on the openstack mailing list.

RomilGupta ( 2013-11-19 06:02:46 -0500 )

answered 2013-11-14 20:37:20 -0500

slogan621

updated 2013-11-14 20:40:28 -0500

With devstack, it was pretty easy (and since it worked, I feel justified in posting the answer for those using devstack). Just added


to the localrc, replacing the previous value of Q_PLUGIN (for me it was openvswitch).
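(The localrc line itself appears to have been stripped by the forum's markup; given that it replaces the previous value of `Q_PLUGIN`, it was presumably the following, where `ml2` is the assumed value:)

```sh
Q_PLUGIN=ml2
```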

GRE worked with only that change. VXLAN is not working for me, after setting:
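(The setting below was also stripped by the forum. For devstack of that era the relevant variable was presumably `Q_ML2_TENANT_NETWORK_TYPE`; this is an assumption based on devstack's ML2 support, not a quote from the answer:)

```sh
Q_ML2_TENANT_NETWORK_TYPE=vxlan
```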


I'm still trying to track that down. It could be some issue with fragmentation/MTUs. I see VXLAN packets flying around, but my guest VMs can't do DHCP and thus come up with no IP address on the veth.
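The MTU suspicion is plausible: VXLAN over IPv4 adds roughly 50 bytes of encapsulation overhead relative to the inner IP packet, so guests on a 1500-byte physical network typically need their MTU lowered to 1450. A quick sketch of the arithmetic:

```python
# VXLAN over IPv4 overhead, as seen by the inner IP packet:
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel

overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH  # 50 bytes

physical_mtu = 1500
guest_mtu = physical_mtu - overhead
print(guest_mtu)  # 1450
```

If the guest (or the DHCP agent's `dnsmasq`) keeps the default 1500-byte MTU, encapsulated packets exceed the physical MTU and get dropped, which matches the "DHCP never completes" symptom.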

More info on this page:



VXLAN works great. One or both of the following was needed to fix it: 1) Make sure OVS 1.10.2 or newer is installed; I fixed this by reimaging with Ubuntu 13.10. 2) I recloned devstack and switched to the stable/havana branch (git checkout stable/havana) before running devstack.

slogan621 ( 2013-11-18 12:08:49 -0500 )

Hi Slogan, can you please shed some light on a multi-node devstack setup, with NN and CN nodes? If possible, could you please share your localrc file?

RomilGupta ( 2013-11-19 04:14:55 -0500 )

Hi Slogan, is it possible to apply our own custom configurations, like ml2, the ovs plugin, etc., through the localrc file? I'm generalising the scope of the localrc file to nova, cinder, etc. Where can I find that doc?

dilip ( 2016-11-29 11:51:40 -0500 )





Seen: 6,856 times

Last updated: Dec 04 '13