Ask Your Question

Compute nodes requirements for ML2 Flat Network?

asked 2014-03-31 01:31:23 -0500 by ThiagoCMC

updated 2014-03-31 03:45:31 -0500 by darragh-oreilly


I'm trying to install ML2 with a flat network (the simplest topology). What are the requirements for it?

At my "controller / network" node, I have the following in ml2_conf.ini:

    type_drivers = flat,local
    mechanism_drivers = openvswitch
    flat_networks = physnet1
    sql_connection = mysql://neutronUser:neutronPass@
    enable_tunneling = False
    local_ip =
    network_vlan_ranges = physnet1
    bridge_mappings = physnet1:br-eth0

But what should I install on the compute node? I mean, should I still use the Ubuntu package "neutron-plugin-openvswitch-agent" on the compute node even after migrating to ML2? Or is there no need for an "ML2 agent" on the compute node itself?

I'm trying this with Icehouse on Ubuntu 14.04, where ML2 already seems to be enabled by default everywhere... I just need to know how to configure the compute node (the controller / network node is ready, I think)...

Tks! Thiago


1 answer


answered 2014-03-31 03:50:46 -0500 by darragh-oreilly

Yes, you will need to install neutron-plugin-openvswitch-agent on the compute nodes and on any nodes running agents such as the DHCP and L3 agents.

There is no such thing as an ML2 agent. ML2 is a server-side ("plugin") thing only, and it works with the existing OVS and Linux bridge agents, among some others.
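So on a compute node, the OVS agent keeps its own local settings. A minimal sketch of what those agent-side settings could look like, assuming the bridge name br-eth0 and physnet1 label from the question (the exact file path depends on your packaging):

```ini
# Agent-side settings on a compute node, in whatever file the agent
# loads (e.g. /etc/neutron/plugins/ml2/ml2_conf.ini on Icehouse/Ubuntu 14.04):
[ovs]
enable_tunneling = False
bridge_mappings = physnet1:br-eth0
```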



Awesome! Tks!

One more question:

  • Is the package "neutron-plugin-openvswitch" also required on a compute node? I'm trying a flat network (no L3, no GRE, for instance)...

BTW, the package "neutron-plugin-openvswitch-agent" doesn't depend on "neutron-plugin-openvswitch" in Icehouse/Ubuntu 14.04, so I think it isn't a requirement on the compute node anymore. But then, where should I put the bridge_mappings setup for the compute nodes (which maps to my OVS bridge "br-eth0") if there is no "ovs_neutron_plugin.ini" installed on it? In a new [ovs] section directly in ml2_conf.ini? Or will it somehow come from the neutron-server itself?


ThiagoCMC ( 2014-04-01 00:42:08 -0500 )

I haven't tried Icehouse on Ubuntu 14.04, so I'm not familiar with the packaging. I'm surprised there is no ovs_neutron_plugin.ini. Have a look at the upstart job for the agent to see the path/name of the config file(s) it loads. You should be able to put the bridge mapping in one of those.
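For reference, the exec line in such an upstart job typically looks something like the sketch below; the job name and file paths here are assumptions, so check under /etc/init/ on your own system:

```
# Hypothetical exec line from /etc/init/neutron-plugin-openvswitch-agent.conf;
# the files passed via --config-file are the ones where bridge_mappings can go.
exec /usr/bin/neutron-openvswitch-agent \
    --config-file=/etc/neutron/neutron.conf \
    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini
```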

darragh-oreilly ( 2014-04-01 00:58:38 -0500 )

Thank you! I can see that neutron-openvswitch-agent is started with ml2_conf.ini by its upstart job...

Yep, ovs_neutron_plugin.ini comes with the package neutron-plugin-openvswitch, but the new neutron-plugin-openvswitch-agent doesn't depend on it anymore... Maybe that's the way it should be from now on... I'll try it!


ThiagoCMC ( 2014-04-01 01:06:38 -0500 )

No, bridge_mappings does not come from the server, and it can actually be different on each node - i.e. node1 might map physnet1 to br-physnet1 while node2 might map physnet1 to br-xyz. ML2 doesn't care; it will work as long as br-physnet1 and br-xyz are connected to the same physical link.
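In other words, each node's local agent config can name a different bridge for the same physical network label. A sketch of the two-node example above (bridge names are just illustrative):

```ini
# node1 -- agent config (e.g. ml2_conf.ini or ovs_neutron_plugin.ini)
[ovs]
bridge_mappings = physnet1:br-physnet1

# node2 -- same physnet label, different local bridge name
[ovs]
bridge_mappings = physnet1:br-xyz
```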

darragh-oreilly ( 2014-04-01 01:10:21 -0500 )

Got it! Makes sense... :-)

ThiagoCMC ( 2014-04-01 01:11:50 -0500 )


Asked: 2014-03-31 01:31:23 -0500

Seen: 1,727 times

Last updated: Mar 31 '14