# Compute node requirements for an ML2 flat network?

Stackers!

I'm trying to set up ML2 on a flat network (the simplest topology), but what are the requirements for it?

On my "controller / network" node, I have this ml2_conf.ini:

```ini
[ml2]
type_drivers = flat,local
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vlan]

[ml2_type_gre]

[ml2_type_vxlan]

[database]
sql_connection = mysql://neutronUser:neutronPass@192.168.1.10/neutron

[ovs]
enable_tunneling = False
local_ip = 192.168.1.10
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
```


But what should I install on the compute node? I mean, should I still use the Ubuntu package "neutron-plugin-openvswitch-agent" on the compute node even after migrating to ML2? Or is there no need for an "ML2 agent" on the compute node itself?

I'm trying this with Icehouse on Ubuntu 14.04, so ML2 seems to be activated by default everywhere... I just need to know how to configure the compute node (the controller / network node is ready, I think)...

Tks! Thiago



Yes, you will need to install neutron-plugin-openvswitch-agent on the compute nodes and on any nodes running agents such as DHCP, L3, etc.

There is no such thing as an ML2 agent. ML2 is a server-side ("plugin") component only, and it works with the existing OVS, Linux bridge, and some other agents.
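If it helps, here's a minimal sketch of what the agent's OVS settings on a compute node might look like for a flat network. The file path and bridge name (br-eth0) here are assumptions, not confirmed for your packaging; check which config file your agent's init job actually loads.

```ini
# Sketch of the [ovs] settings a compute-node agent needs for a flat
# network. File location and bridge name are assumptions; adjust to
# whatever config file your neutron-openvswitch-agent actually reads.
[ovs]
enable_tunneling = False
bridge_mappings = physnet1:br-eth0
```

You would also need the br-eth0 OVS bridge to exist on the compute node, with the physical NIC plugged into it (e.g. via `ovs-vsctl add-br br-eth0` and `ovs-vsctl add-port br-eth0 eth0`, assuming eth0 is the interface attached to physnet1).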


Awesome! Tks!

One more question:

• Is the package "neutron-plugin-openvswitch" also required on a compute node? I'm trying a flat network (no L3, no GRE, for instance)...

BTW, the package "neutron-plugin-openvswitch-agent" doesn't depend on "neutron-plugin-openvswitch" in Icehouse/Ubuntu 14.04, so I think it isn't a requirement on the compute node anymore. But then, where should I put the bridge_mappings setting for the compute nodes (which maps to my OVS bridge "br-eth0") if there is no "ovs_neutron_plugin.ini" installed there? In a new [ovs] section directly in ml2_conf.ini? Or will it somehow come from the neutron-server itself?

Cheers!

(2014-04-01 00:42:08 -0500)

I haven't tried Icehouse on Ubuntu 14.04, so I'm not familiar with the packaging. I'm surprised there is no ovs_neutron_plugin.ini. Have a look at the upstart job for the agent to see the path/name of the config file(s) it loads. You should be able to put the bridge mapping in one of those.
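For example, something along these lines should show which config files the agent loads. The upstart job path is an assumption for Ubuntu 14.04, so the job body is simulated here just to make the grep pattern clear:

```shell
# On a real compute node the file would be something like
# /etc/init/neutron-plugin-openvswitch-agent.conf (path is an assumption).
# Simulate its contents here to show what to look for:
cat > /tmp/sample-upstart.conf <<'EOF'
exec start-stop-daemon --start --chuid neutron \
  --exec /usr/bin/neutron-openvswitch-agent -- \
  --config-file=/etc/neutron/neutron.conf \
  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini
EOF

# Extract the --config-file arguments the agent is started with:
grep -o -- '--config-file=[^ \\]*' /tmp/sample-upstart.conf
```

Whatever files show up there are where the agent will look for its [ovs] settings.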

(2014-04-01 00:58:38 -0500)

Thank you! I see that neutron-openvswitch-agent is called with ml2_conf.ini from its upstart job...

Yep, ovs_neutron_plugin.ini comes with the package neutron-plugin-openvswitch, but the new neutron-plugin-openvswitch-agent doesn't depend on it anymore... Maybe that's the way it should be from now on... I'll try it!

Best!

(2014-04-01 01:06:38 -0500)

No, bridge_mappings does not come from the server. And it can actually be different on each node, i.e., node1 might map physnet1 to br-physnet1 while node2 maps physnet1 to br-xyz. ML2 doesn't care. It will work as long as br-physnet1 and br-xyz are connected to the same physical link.
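To illustrate the format: the agent reads bridge_mappings as a comma-separated list of physnet:bridge pairs, so each node declares its own local mapping. A rough Python sketch of that parsing (just the idea, not Neutron's actual implementation):

```python
def parse_bridge_mappings(value):
    """Parse 'physnet1:br-eth0,physnet2:br-ex' into a dict.

    Rough sketch of how the OVS agent interprets its bridge_mappings
    option; not Neutron's actual code.
    """
    mappings = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue
        physnet, bridge = pair.split(":")
        mappings[physnet] = bridge
    return mappings

# Two nodes can map the same physnet to different local bridges:
node1 = parse_bridge_mappings("physnet1:br-physnet1")
node2 = parse_bridge_mappings("physnet1:br-xyz")
print(node1)  # {'physnet1': 'br-physnet1'}
print(node2)  # {'physnet1': 'br-xyz'}
```

The physnet name is what ties the network definition on the server to a concrete bridge on each node; the bridge name itself is purely local.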

(2014-04-01 01:10:21 -0500)

Got it! Makes sense... :-)

(2014-04-01 01:11:50 -0500)
