@kumar lakshman kumar wrote:
if you boot both VMs on different networks, it should go to the Neutron controller to resolve the VM location/address
In the case of a VXLAN tunneling deployment, I would follow https://assafmuller.com/2014/05/21/ov...
Assuming ML2 + OVS >= 2.1:
Turn on GRE or VXLAN tenant networks as you normally would (a sample config sketch is shown after these steps)
Enable l2pop
On the Neutron API node, in the conf file you pass to the Neutron service (plugin.ini / ml2_conf.ini):
[ml2]
mechanism_drivers = openvswitch,l2population
On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):
[agent]
l2_population = True
Enable the ARP responder: On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):
[agent]
arp_responder = True
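For reference, here is a minimal sketch of the tenant-network/tunnel side of the configuration that the first step refers to. This is only an assumption about a typical VXLAN layout: exact option names and file locations vary slightly between releases, and the VNI range and local_ip values are placeholders you need to adapt:
On the Neutron API node (plugin.ini / ml2_conf.ini):
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
[ml2_type_vxlan]
# placeholder VNI range for tenant VXLAN networks
vni_ranges = 1:1000
On each compute node, in the conf file you pass to the OVS agent:
[ovs]
# placeholder: IP of this node's tunnel (VXLAN endpoint) interface
local_ip = <tunnel interface IP>
[agent]
tunnel_types = vxlan
l2_population = True
arp_responder = True
After changing these files, restart neutron-server on the controller and neutron-openvswitch-agent on the compute nodes so the new mechanism drivers and agent flags take effect.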
As far as I understand, the configuration suggested above for the Neutron (API) server (on the controller) and for the neutron-openvswitch-agent running on each compute node will result in the ARP broadcast request issued by VM1 being caught and answered locally, from ARP entries pre-populated on the same compute node, thanks to the L2POP and ARP responder architecture design.
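One rough way to check that this is what actually happens (a sketch based on the standard ML2/OVS agent layout, where the ARP responder flows are programmed on br-tun; the table number may differ between releases):
# list the ARP responder flows pre-populated by l2pop on the compute node
ovs-ofctl dump-flows br-tun table=21
Each entry there should match arp plus the IP of a known neighbour VM and craft the reply locally; if you then ping VM2 from VM1, the n_packets counter of the matching entry should increase, showing the ARP request was answered on the node instead of being flooded over the tunnels.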