
neutron vxlan and discovery

asked 2015-10-18 08:04:47 -0500

highland

Hello Team

One tenant, 10 compute nodes with multiple VMs on each, and all of those VMs on a single network with network_type = vxlan.

a. If I understand correctly, a VXLAN tunnel needs to be built between each pair of nodes? (45 tunnels in total)
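A quick check of that arithmetic in Python, assuming a full mesh (one point-to-point tunnel per unordered pair of VTEPs):

# Full-mesh tunnel count: one tunnel per unordered pair of nodes.
def full_mesh_tunnels(n_nodes):
    return n_nodes * (n_nodes - 1) // 2

print(full_mesh_tunnels(10))  # 45 tunnels for 10 compute nodes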

b. How does node1 know to which node to send a packet, i.e. via which VXLAN tunnel? Does it ask all the other nodes for the MAC addresses of their VMs and then send unicast in the data plane to the specific VTEP? Or does it use multicast for that?

Thanks


1 answer


answered 2015-10-18 08:12:41 -0500

dbaxps

updated 2015-10-18 10:37:47 -0500

See:
1. ML2 – Address Population: http://assafmuller.com/2014/02/23/ml2...
Per [1]:

When using the ML2 plugin with tunnels and a new port goes up, ML2 sends an update_port_postcommit notification, which is picked up and processed by the l2pop mechanism driver. l2pop then gathers the IP and MAC of the port, as well as the host that the port was scheduled on; it then sends an RPC notification to all layer 2 agents. The agents use the notification to solve the three issues detailed above (in the linked post).
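To make that flow concrete, here is a minimal conceptual sketch in Python (not Neutron code; the AgentForwardingTable class and the fdb_add name are made up for illustration) of how a single port-up notification lets every agent pre-populate its MAC-to-VTEP mapping without any flooding or multicast:

from dataclasses import dataclass, field

# Conceptual sketch only -- this is not Neutron's actual RPC API.
# Idea: the Neutron server already knows (MAC, hosting VTEP) for every port,
# so when a port goes up it can push that mapping to all L2 agents at once.

@dataclass
class AgentForwardingTable:
    """Per-compute-node table: remote VM MAC -> remote VTEP IP (hypothetical)."""
    local_vtep: str
    mac_to_vtep: dict = field(default_factory=dict)

    def fdb_add(self, mac, remote_vtep):
        # Entries for VMs hosted on this node are handled locally; skip them.
        if remote_vtep != self.local_vtep:
            self.mac_to_vtep[mac] = remote_vtep

# "Server side": a new port comes up on the node whose VTEP is 10.0.0.2 ...
agents = [AgentForwardingTable("10.0.0.1"), AgentForwardingTable("10.0.0.2")]
new_port = {"mac": "fa:16:3e:aa:bb:cc", "vtep": "10.0.0.2"}

# ... and the notification fans out to every agent (an RPC call in real Neutron).
for agent in agents:
    agent.fdb_add(new_port["mac"], new_port["vtep"])

print(agents[0].mac_to_vtep)  # {'fa:16:3e:aa:bb:cc': '10.0.0.2'}

The real driver also carries the port's IP address, which is what lets the ARP responder described below answer locally.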

2. OVS ARP Responder – Theory and Practice: http://assafmuller.com/2014/05/21/ovs...
Per [2]:

With l2pop disabled, when VM 1 sends an initial message to VM 2, compute node A won’t know the MAC address of VM 2 and will be forced to flood the message out all tunnels, to all compute nodes. When the reply is received, node A would learn the MAC address of VM 2 along with the remote node and tunnel ID. This way, future floods are prevented. L2pop prevents even the initial flood by pre-populating the tables, as the Neutron service is aware of VM MAC addresses, scheduling, and tunnel IDs. More information may be found in the dedicated L2pop post.
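And a rough Python sketch of the data-plane decision that paragraph describes (hypothetical names, not OVS or Neutron code): with an empty table the first packet is flooded out every tunnel, while a pre-populated or learned entry turns it into a single unicast to the right VTEP, which also answers question (b) above:

# Conceptual sketch of the forwarding decision on compute node A.
# mac_to_vtep would be pre-populated by l2pop, or learned from return traffic.

ALL_REMOTE_VTEPS = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]  # hypothetical peers

def choose_output_tunnels(dst_mac, mac_to_vtep):
    vtep = mac_to_vtep.get(dst_mac)
    if vtep is not None:
        return [vtep]              # known MAC: unicast to a single VTEP
    return list(ALL_REMOTE_VTEPS)  # unknown MAC: flood out all tunnels

# Without l2pop the first packet to an unknown MAC floods ...
print(choose_output_tunnels("fa:16:3e:aa:bb:cc", {}))
# ... with l2pop the entry already exists, so only one tunnel is used.
print(choose_output_tunnels("fa:16:3e:aa:bb:cc", {"fa:16:3e:aa:bb:cc": "10.0.0.3"}))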

Turning it on

Assuming ML2 + OVS >= 2.1:
    Turn on GRE or VXLAN tenant networks as you normally would
    Enable l2pop
    On the Neutron API node, in the conf file you pass to the Neutron service (plugin.ini / ml2_conf.ini):

[ml2]
mechanism_drivers = openvswitch,l2population

On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

[agent]
l2_population = True

Enable the ARP responder: On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

[agent]
arp_responder = True

To summarize, you must use VXLAN or GRE tenant networks, you must enable l2pop, 
and finally you need to enable the arp_responder flag in the [agent] section 
in the conf file you pass to the OVS agent on each compute node.
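If it helps, a small stand-alone sanity check in Python (the path is an assumption; point it at whatever conf file you actually pass to the OVS agent on each compute node) that just prints the two [agent] flags discussed above:

import configparser

# Assumed path -- adjust to the conf file you pass to the OVS agent.
CONF = "/etc/neutron/plugins/ml2/ml2_conf.ini"

cfg = configparser.ConfigParser()
cfg.read(CONF)

for option in ("l2_population", "arp_responder"):
    # fallback avoids an exception if the section or option is missing
    print(option, "=", cfg.get("agent", option, fallback="<not set>"))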

See also https://wiki.openstack.org/wiki/L2pop...

Two topics are implemented with this mechanism driver:
1. populating an ARP proxy so that ARP broadcasts are no longer flooded across the network fabric (only available for Linuxbridge in havana-1, but an OVS implementation should land soon with an ebtables driver); see the sketch after this list;
2. populating forwarding tables, because when the ARP proxy is used, virtual switch forwarding tables no longer learn where to send packets.
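For topic 1, a minimal conceptual sketch in Python (again, not the actual ebtables/OVS implementation) of what proxying the ARP request locally means: the hypervisor answers from the table l2pop populated, so the broadcast never has to cross the fabric:

# Conceptual ARP-responder sketch: answer locally instead of broadcasting.
# ip_to_mac is the table l2pop would have pre-populated on this node.

def local_arp_reply(target_ip, ip_to_mac):
    """Return (ip, mac) for a local ARP reply if known, else None (would flood)."""
    mac = ip_to_mac.get(target_ip)
    if mac is None:
        return None
    return (target_ip, mac)

table = {"192.168.1.12": "fa:16:3e:aa:bb:cc"}
print(local_arp_reply("192.168.1.12", table))  # answered locally, no flood needed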
