With Open vSwitch and VXLAN tenant networks, traffic takes the following path.

  • The VM is connected to a tap device, which is attached to a Linux bridge (both defined in the VM's libvirt XML file)
  • A veth pair connects the Linux bridge to the Open vSwitch integration bridge (br-int)
  • br-int assigns an internal VLAN ID to the veth port and tags the packets coming from the VM
  • The flows on br-int (visible with `ovs-ofctl dump-flows br-int`) determine whether the traffic should leave via VLAN/flat provider networks, if any are defined. Otherwise br-int forwards it as a normal bridge would.
  • br-int usually has an Open vSwitch patch port connecting it to br-tun; the traffic enters br-tun next
  • br-tun matches the VXLAN multicast/unicast traffic, strips off the internal VLAN ID, and sends the traffic outbound through one of the several vxlan-XXX ports on br-tun
  • Each of these ports represents a multicast connection to peers such as the Neutron network node, the other compute nodes, and so on
  • Each of these has a 'local_ip' setting: the local tunnel endpoint address Open vSwitch uses when sending this multicast traffic. It corresponds to the 'local_ip' setting in your openvswitch.ini file
  • VXLAN communication between VMs is carried by this multicast traffic
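The path above can be inspected on a compute node. This is a sketch: the `qvoXXXXXXXX` and `vxlan-XXXX` port names are placeholders for deployment-specific names (run `ovs-vsctl show` first to find yours), and the bridge names assume the common br-int/br-tun defaults:

```shell
# List the OVS bridges and their ports; the patch ports between br-int and
# br-tun show up here, as do the vxlan-XXX tunnel ports on br-tun
sudo ovs-vsctl show

# Show the internal VLAN tag assigned to a VM's port on br-int
# (qvoXXXXXXXX is a placeholder for the OVS side of the veth pair)
sudo ovs-vsctl get Port qvoXXXXXXXX tag

# Flows that decide whether traffic exits via provider networks or tunnels
sudo ovs-ofctl dump-flows br-int
sudo ovs-ofctl dump-flows br-tun

# Tunnel endpoint addresses configured on a vxlan port
# (vxlan-XXXX is a placeholder; look for local_ip/remote_ip in the options)
sudo ovs-vsctl list Interface vxlan-XXXX
```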

You will want to make sure that whatever address you use for local_ip is allowed to communicate across your network via multicast, or VM-to-VM communication will fail. Setting local_ip to a static IP that is configured on a local interface of each host, with all of those interfaces on the same network, ensures this.
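As a minimal sketch, the setting lives in the `[ovs]` section of the plugin configuration on each node; the address below is an example value, and the exact file path varies between OpenStack releases:

```
[ovs]
# Local VXLAN tunnel endpoint for this host. Must be an address that is
# configured on a local interface and reachable (including via multicast)
# from every other node on the same network.
local_ip = 192.0.2.10
```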