
neutron vxlan and discovery

asked 2015-10-18 08:04:47 -0500

highland

Hello Team

One tenant, 10 compute nodes with multiple VMs on each, all of those VMs on a single network with network_type = vxlan.

a. If I understand correctly, we need to build a VXLAN tunnel between each pair of nodes? (45 tunnels in total)

b. How does node1 know which node to send a packet to (i.e. via which VXLAN tunnel)? Does it ask all other nodes for the MAC addresses of all VMs and then send unicast in the data plane to the specific VTEP? Or does it use multicast for that?
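(For reference, the 45 in (a) is just the full-mesh pair count n(n-1)/2; a quick sanity check:)

```python
# Number of point-to-point tunnels in a full mesh of n VTEPs:
# each unordered pair of nodes gets exactly one tunnel.
def full_mesh_tunnels(n: int) -> int:
    return n * (n - 1) // 2

print(full_mesh_tunnels(10))  # 45
```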



1 answer


answered 2015-10-18 08:12:41 -0500

dbaxps

updated 2015-10-18 10:37:47 -0500

ML2 – Address Population
Per [1]:

When using the ML2 plugin with tunnels and a new port goes up, ML2 sends an update_port_postcommit notification, which is picked up and processed by the l2pop mechanism driver. l2pop then gathers the IP and MAC of the port, as well as the host that the port was scheduled on, and sends an RPC notification to all layer 2 agents. The agents use the notification to solve the three issues detailed in [1].
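As a rough illustration of what that RPC notification carries: l2pop distributes, per network, a mapping from each hosting agent's tunnel endpoint (VTEP) to the MAC/IP pairs of the ports on that host. The structure below is a hedged sketch (field names and shape are illustrative, not the authoritative RPC schema of any particular release):

```python
# Hedged sketch of an l2pop-style fdb_entries payload: for each network,
# map the hosting node's VTEP IP to the (MAC, IP) pairs of its local ports,
# so remote agents can pre-populate forwarding and ARP tables.
fdb_entries = {
    "net-uuid-1234": {                      # illustrative network id
        "network_type": "vxlan",
        "segment_id": 1001,                 # VNI
        "ports": {
            "192.0.2.11": [                 # VTEP IP of a compute node
                ("fa:16:3e:aa:bb:cc", "10.0.0.5"),  # (MAC, IP) of a VM port
            ],
        },
    },
}

# A receiving agent would walk this and install one unicast entry per MAC:
for net, info in fdb_entries.items():
    for vtep, ports in info["ports"].items():
        for mac, ip in ports:
            print(f"net={net} vni={info['segment_id']} {mac}/{ip} -> vtep {vtep}")
```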

OVS ARP Responder – Theory and Practice
Per [2]:

With l2pop disabled, when VM 1 sends an initial message to VM 2, compute node A won’t know the MAC address of VM 2 and will be forced to flood the message out all tunnels, to all compute nodes. When the reply is received, node A would learn the MAC address of VM 2 along with the remote node and tunnel ID. This way, future floods are prevented. L2pop prevents even the initial flood by pre-populating the tables, as the Neutron service is aware of VM MAC addresses, scheduling, and tunnel IDs. More information may be found in the dedicated L2pop post.
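The flood-then-learn behaviour described above can be sketched as a toy forwarding table (a hypothetical model for illustration, not Neutron or OVS code): an unknown destination MAC forces a flood to every tunnel peer, while a learned (or l2pop-pre-populated) MAC gets a single unicast tunnel.

```python
# Toy model of a compute node's forwarding table: unknown destination MAC
# => flood to all tunnel peers; known MAC => unicast to the learned VTEP.
class ToyFdb:
    def __init__(self, peers):
        self.peers = peers          # VTEP IPs of all remote nodes
        self.table = {}             # mac -> vtep

    def learn(self, mac, vtep):
        # Learned from a reply, or pre-populated by l2pop.
        self.table[mac] = vtep

    def egress(self, dst_mac):
        if dst_mac in self.table:
            return [self.table[dst_mac]]    # unicast to one tunnel
        return list(self.peers)             # flood out all tunnels

fdb = ToyFdb(["192.0.2.11", "192.0.2.12", "192.0.2.13"])
print(fdb.egress("fa:16:3e:00:00:02"))   # unknown: floods to all 3 peers
fdb.learn("fa:16:3e:00:00:02", "192.0.2.12")
print(fdb.egress("fa:16:3e:00:00:02"))   # known: unicast to one VTEP
```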

Turning it on

Assuming ML2 + OVS >= 2.1:
    Turn on GRE or VXLAN tenant networks as you normally would.
    Enable l2pop. On the Neutron API node, in the conf file you pass to the Neutron service (plugin.ini / ml2_conf.ini):

mechanism_drivers = openvswitch,l2population

On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

l2_population = True

Enable the ARP responder: On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

arp_responder = True

To summarize, you must use VXLAN or GRE tenant networks, you must enable l2pop, and finally you need to enable the arp_responder flag in the [agent] section of the conf file you pass to the OVS agent on each compute node.
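The settings above can be read back to verify they took effect; a minimal sketch using Python's configparser, with an inline sample standing in for a real ml2_conf.ini (section and option names as in the post, file contents illustrative):

```python
import configparser

# Hedged sketch: parse an ml2_conf.ini-style file and verify the l2pop
# mechanism driver plus the agent-side flags are all enabled.
sample = """
[ml2]
mechanism_drivers = openvswitch,l2population

[agent]
l2_population = True
arp_responder = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)   # with a real file: cfg.read("/path/to/ml2_conf.ini")

assert "l2population" in cfg.get("ml2", "mechanism_drivers").split(",")
assert cfg.getboolean("agent", "l2_population")
assert cfg.getboolean("agent", "arp_responder")
print("l2pop + ARP responder enabled")
```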

See also

Two topics are implemented with this mechanism driver:
1. populating the ARP proxy, so that ARP broadcasts are no longer broadcast into the network fabric (only available for Linuxbridge in havana-1, but the OVS implementation should land soon with an ebtables driver);
2. populating forwarding tables, because when the ARP proxy is used, virtual switch forwarding tables no longer learn where to send packets.
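The ARP-proxy idea in point 1 can be sketched as a purely local lookup: the virtual switch answers ARP requests from a pre-populated IP-to-MAC map instead of broadcasting them into the fabric (a toy model, not the actual ebtables/OVS implementation):

```python
# Toy ARP responder: answer ARP requests from a pre-populated IP -> MAC map
# (filled by l2pop); only fall back to broadcasting for unknown IPs.
arp_table = {"10.0.0.5": "fa:16:3e:aa:bb:cc"}   # illustrative entry

def handle_arp_request(target_ip):
    mac = arp_table.get(target_ip)
    if mac is not None:
        return ("reply", mac)        # answered locally, nothing hits the fabric
    return ("broadcast", None)       # unknown: old-style ARP broadcast

print(handle_arp_request("10.0.0.5"))
```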


Seen: 506 times