What happens if two VMs in the same host communicate via network? Will it consume network bandwidth?

Hi,

If two VMs running on the same host communicate over the network (e.g. sending large files), does it actually consume physical network bandwidth?



Hi Skyo,

I feel it depends entirely on your configuration.

1. If you booted both VMs on the same network, they are connected to the same software L2 switch, so no physical network bandwidth is consumed.
2. If you booted the VMs on different networks, the traffic should go through Neutron to resolve the VM location/address; but if Neutron and the host (and the VMs on it) are in the same location and network, I don't think any bandwidth is consumed.
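The two cases above can be sketched as a toy decision function (hypothetical Python names, not OpenStack code; `router_on_host` stands in for setups like DVR where routing happens locally):

```python
# Toy model of the two cases described above (not real Neutron code).
# Traffic between two VMs on one host leaves the physical NIC only if
# it has to be switched or routed off-host.

def uses_physical_bandwidth(vm1_network, vm2_network, same_host=True,
                            router_on_host=False):
    """Return True if traffic between the VMs crosses the physical NIC."""
    if vm1_network == vm2_network:
        # Case 1: same network, same host -> switched locally on br-int.
        return not same_host
    # Case 2: different networks -> traffic must be routed; it stays
    # local only if the router also lives on this host.
    return not (same_host and router_on_host)

# Same network, same host: no physical bandwidth consumed.
print(uses_physical_bandwidth("net-a", "net-a"))  # False
# Different networks, router on another node: traffic leaves the host.
print(uses_physical_bandwidth("net-a", "net-b"))  # True
```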

@kumar lakshman kumar, thanks for your answer. In case 1 (both VMs booted on the same network), are they affected by the quota (e.g. quota:vif_inbound_average)?

( 2016-04-17 00:11:05 -0500 )
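For reference, those quotas are Nova flavor extra specs that the libvirt driver turns into traffic shaping on the instance's vNIC (tap device), so as far as I know the limit should apply to the vNIC regardless of where the peer is, including same-host traffic. A hedged sketch ("m1.small" and the values are just examples):

```shell
# Set libvirt vNIC bandwidth shaping via Nova flavor extra specs
# (average values are in KB/s; flavor name is an example).
nova flavor-key m1.small set quota:vif_inbound_average=10240
nova flavor-key m1.small set quota:vif_outbound_average=10240
# Instances booted from this flavor get the limits applied on their
# tap device by libvirt.
```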

It depends on your configuration. In a VLAN tagging deployment: if you have two VMs that belong to the same tenant network and are on the same physical host, all packets are tagged with a host-local VLAN tag (e.g. 2), so when the packets reach br-int they are simply forwarded to the right port without leaving the physical host. I am pretty sure something similar happens if you have GRE or VXLAN. I don't know about FlatDHCP configurations.
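A minimal sketch of that local-VLAN idea (toy Python for illustration; real forwarding is done by OVS flow rules, and the MACs/port names here are made up): br-int maps each tenant network to a host-local VLAN id, and a frame whose destination MAC sits on a port with the same local VLAN never leaves the host.

```python
# Toy model of br-int local-VLAN forwarding (illustration only).

# Host-local VLAN tags: tenant network -> local tag (e.g. 2).
local_vlan = {"tenant-net-1": 2, "tenant-net-2": 3}

# Ports on this host's br-int: MAC -> (port name, tenant network).
ports = {
    "fa:16:3e:00:00:01": ("tap-vm1", "tenant-net-1"),
    "fa:16:3e:00:00:02": ("tap-vm2", "tenant-net-1"),
}

def forward(src_mac, dst_mac):
    """Return the local output port, or 'phy-uplink' if the frame
    must leave the physical host."""
    src_net = ports[src_mac][1]
    dst = ports.get(dst_mac)
    if dst and local_vlan[dst[1]] == local_vlan[src_net]:
        return dst[0]          # same local VLAN: stays on br-int
    return "phy-uplink"        # unknown or other VLAN: out the uplink

print(forward("fa:16:3e:00:00:01", "fa:16:3e:00:00:02"))  # tap-vm2
```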


@kumar lakshman kumar wrote: "if you booted both the VMs on different network, it should go to neutron controller to resolve the VM location/address"
In case of a VXLAN tunneling deployment I would follow https://assafmuller.com/2014/05/21/ov...

Assuming ML2 + OVS >= 2.1:

1. Turn on GRE or VXLAN tenant networks as you normally would.
2. Enable l2pop. On the Neutron API node, in the conf file you pass to the Neutron service (plugin.ini / ml2_conf.ini):

       [ml2]
       mechanism_drivers = openvswitch,l2population

   On each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

       [agent]
       l2_population = True

3. Enable the ARP responder: on each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):

       [agent]
       arp_responder = True


As far as I understand, the configuration suggested above for the Neutron (API) server on the controller and the neutron-openvswitch-agent running on each compute node will result in an ARP broadcast request issued by VM1 being caught and answered locally, via the pre-populated ARP tables on the same compute node, due to the L2POP && ARP responder architecture design.
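That behaviour can be sketched as a toy lookup (hypothetical Python; the real responder is an OVS flow installed by the agent from Neutron's port data, and the addresses below are made up):

```python
# Toy sketch of the ARP-responder idea: l2pop pre-populates an
# IP -> MAC table from Neutron's port database, and the compute node
# answers ARP requests locally instead of broadcasting them over the
# VXLAN fabric. (Illustration only, not the real agent code.)

arp_table = {  # pre-populated by l2pop from Neutron's data
    "10.0.0.5": "fa:16:3e:aa:bb:01",
    "10.0.0.6": "fa:16:3e:aa:bb:02",
}

def handle_arp_request(target_ip):
    """Answer locally if the mapping is known; otherwise the request
    would have to be flooded to the tunnel endpoints."""
    mac = arp_table.get(target_ip)
    if mac:
        return ("reply", mac)   # answered on this compute node
    return ("flood", None)      # unknown: fall back to flooding

print(handle_arp_request("10.0.0.5"))  # ('reply', 'fa:16:3e:aa:bb:01')
```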
