# What's the neutron traffic flow between VMs?

I have recently been looking into neutron traffic flow between VMs, and I am confused by this part of the OpenStack docs: https://docs.openstack.org/newton/networking-guide/deploy-ovs-selfservice.html (East-west scenario 1: Instances on the same network)

I ran the following tests in my local environment:

Environment

1. CentOS7
2. OpenStack Newton

Deployment

1. One Controller node, one Compute node, one Network node
2. Two VMs on the Compute node, both in the same self-service subnet, each with a floating IP

Action

SSH into one VM and send ICMP echo requests (ping) to the other.

Result

Interestingly, packets are forwarded through the overlay interface of the Network node, and also through the tap interface of the l3-agent router.

But according to the OpenStack docs linked above, VMs on the same network should communicate directly.

Questions:

1. When the docs say VMs on the same network communicate directly, does that apply to floating IPs or only fixed IPs? I think there is a difference.

2. Why do VMs on the same network in my deployment communicate via the Network node? Is there a mistake in my neutron configuration?

Configuration that may be relevant:

/etc/neutron/neutron.conf

```ini
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
```

/etc/neutron/plugin.ini

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_vxlan]
vni_ranges = 1:1000

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true
```


/etc/neutron/plugins/ml2/openvswitch.ini

```ini
[agent]
tunnel_types = vxlan
l2_population = true

[ovs]
local_ip = **
bridge_mappings = provider:br-provider

[securitygroup]
firewall_driver = iptables_hybrid
```
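One quick way to sanity-check that the agent options affecting east-west traffic are consistent across nodes is to parse the agent ini. This is just a sketch: the ini content is inlined here (with a placeholder `local_ip`, since the real value is redacted above); on a real node you would read `/etc/neutron/plugins/ml2/openvswitch.ini` instead.

```python
# Sketch: verify the OVS agent options that affect east-west traffic.
# The ini content is inlined with a placeholder local_ip; on a real node
# you would read /etc/neutron/plugins/ml2/openvswitch.ini instead.
import configparser

SAMPLE = """
[agent]
tunnel_types = vxlan
l2_population = true

[ovs]
local_ip = 192.0.2.11
bridge_mappings = provider:br-provider
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

assert cfg.get("agent", "tunnel_types") == "vxlan"
# l2_population should be enabled (or disabled) on every node consistently,
# and it must match the l2population mechanism driver in plugin.ini.
assert cfg.getboolean("agent", "l2_population") is True
print("local_ip:", cfg.get("ovs", "local_ip"))
```

Running the same check on each node makes mismatched `tunnel_types` or `l2_population` values easy to spot.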


With Open vSwitch, the following communication path applies (internal VLAN tags are used on br-int, with VXLAN tunnels between nodes):

• The VM is connected to a tap device, which is attached to a Linux bridge (both are defined in the VM's libvirt XML file).
• A veth pair connects the Linux bridge to the Open vSwitch integration bridge (br-int).
• br-int assigns an internal VLAN ID to the veth port and tags the packets coming from the VM.
• It follows the flows (visible with `ovs-ofctl dump-flows br-int`) to decide whether the packets need to go out on VLAN/flat networks, if any are defined. If not, it forwards them across br-int as a normal bridge would.
• br-int usually has an Open vSwitch patch port connecting it to br-tun, so the traffic enters br-tun next.
• br-tun will usually catch the VXLAN multicast/unicast traffic, strip off the internal VLAN ID, and send the traffic out through one of the several vxlan-XXX ports on br-tun.
• Each of these ports represents a tunnel connection to a peer such as the network node or another compute node.
• Each peer has a `local_ip` setting, and Open vSwitch sends the encapsulated traffic to that address. This is the `local_ip` setting in your openvswitch.ini file.
• VXLAN communication between VMs is carried by this tunnel traffic.
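To see which tunnel ports traffic for a given network leaves through, inspect `ovs-ofctl dump-flows br-tun` on the compute node. A small sketch that pulls the tunnel ID and output ports out of a flow line (the flow text below is a made-up sample in the usual dump-flows format, not output from the asker's node):

```python
import re

# A made-up sample line in the usual `ovs-ofctl dump-flows br-tun` format;
# on a real node you would capture this from the command's output.
flow = ("table=22, priority=1,dl_vlan=1 actions=strip_vlan,"
        "load:0x15->NXM_NX_TUN_ID[],output:2,output:3")

tun_id = re.search(r"load:(0x[0-9a-f]+)->NXM_NX_TUN_ID", flow)
outputs = re.findall(r"output:(\d+)", flow)

print("VNI:", int(tun_id.group(1), 16))   # -> VNI: 21
print("tunnel ports:", outputs)           # OVS port numbers of vxlan-XXX ports
```

Matching the port numbers against `ovs-ofctl show br-tun` tells you which vxlan-XXX peer (and therefore which remote node) the traffic is flooded or forwarded to.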

You will want to make sure that whatever address you use for `local_ip` can reach the `local_ip` of every other node (including via multicast, if used), so that VM communication is possible. Assigning a static IP on a local interface of each node, with all of those interfaces connected to the same network, ensures this.
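What actually travels between those `local_ip` endpoints is VXLAN-encapsulated traffic over UDP port 4789. As a purely illustrative sketch (per RFC 7348, not how neutron itself builds packets), here is the 8-byte VXLAN header carrying the 24-bit VNI that `vni_ranges` allocates from:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte 0x08 marks
    a valid VNI; the VNI occupies the top 24 bits of the second word."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    flags_word, vni_word = struct.unpack("!II", header)
    assert flags_word >> 24 == 0x08, "VNI flag not set"
    return vni_word >> 8

hdr = vxlan_header(42)   # e.g. a VNI from vni_ranges = 1:1000
print(parse_vni(hdr))    # -> 42
```

This is why a firewall between the nodes must allow UDP 4789 between the `local_ip` addresses, in addition to any multicast requirements.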
