# VXLAN Overlay Networking with Ubuntu OpenStack Mitaka

Dear audience,

I would be interested to hear about any experience with VXLAN Overlay Networking in Mitaka from a successful deployment. In my setup the VXLAN Network is not working: VMs can be launched, but there is no access to the VMs, even with Security Groups disabled.

I am using a multi-node environment with one Controller Node that also acts as the Network Node, and two Compute Nodes. As the core plugin I am using ML2 with the Open vSwitch and L2Population mechanism drivers. I am following the installation procedure described in the official OpenStack documentation at "http://docs.openstack.org/mitaka/install-guide-ubuntu/neutron-controller-install.html" and "http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html". Network Option 2 (Self-Service Networks) is used in my setup.

My questions are as follows:

1) Is it required to manually add the physical Overlay Network port to the OVS Bridge br-tun (e.g. `ovs-vsctl add-port br-tun eth2`)?

2) When the physical port (in my case eth2) is added to the OVS Bridge br-tun on all Nodes, connectivity on the Overlay Network is lost at the host (Linux) level. For example, when I ping from the Network Node to the VTEP IP on the Compute Node (from 172.18.0.100 on the Network Node to 172.18.0.101 on the Compute Node), ARP Requests and ICMP Echo Requests are simply not answered. Tests show that this only happens when the physical Overlay Network ports (eth2 in my case) are added manually to br-tun. As soon as I remove those ports from br-tun, connectivity between the Nodes on the Overlay Network is fine again.
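For reference, these are the kinds of host-level checks I run when testing this (a sketch only; interface name and IPs match my setup above, and the commands assume standard ovs-vsctl/ovs-ofctl/tcpdump tooling):

```shell
# List the ports attached to the tunnel bridge. Normally this should show
# only patch-int plus the vxlan-/gre- tunnel ports the agent creates itself.
ovs-vsctl list-ports br-tun

# Dump the OpenFlow rules on br-tun to see how traffic is being handled.
ovs-ofctl dump-flows br-tun

# From the Network Node, test VTEP reachability at host level.
ping -c 3 172.18.0.101

# Watch for VXLAN traffic (UDP port 4789) on the overlay interface.
tcpdump -ni eth2 udp port 4789
```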

Is this normal behaviour? Could it prevent VXLAN Networking from functioning correctly? Please also see my configuration below. Do you see any issue in it?

> The physical Interfaces for the Overlay Network (eth2) on all Nodes are configured with a static IP in /etc/network/interfaces: Controller/Network Node: 172.18.0.100, Compute1: 172.18.0.101, Compute2: 172.18.0.102
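The eth2 stanza on the Controller/Network Node looks roughly like this (a sketch; the /24 netmask is an assumption on my part, the address is as stated above):

```
auto eth2
iface eth2 inet static
    address 172.18.0.100
    netmask 255.255.255.0
```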

> The OVS Agent is configured as follows (openvswitch_agent.ini):

On Controller/Network Node:

```ini
[agent]
tunnel_types = gre,vxlan
l2_population = True

[ovs]
tunnel_bridge = br-tun
int_peer_patch_port = patch-tun
tun_peer_patch_port = patch-int
local_ip = 172.18.0.100
bridge_mappings = vlan:br-vlan,external:br-ex,vxlan:br-tun
enable_tunneling = true
tunnel_type = gre,vxlan

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true
enable_ipset = true
```

On Compute Node (e.g. Compute1):

```ini
[agent]
tunnel_types = gre,vxlan
l2_population = True

[ovs]
tunnel_bridge = br-tun
int_peer_patch_port = patch-tun
tun_peer_patch_port = patch-int
local_ip = 172.18.0.101
bridge_mappings = vlan:br-vlan,vxlan:br-tun
enable_tunneling = true
tunnel_type = gre,vxlan

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true
enable_ipset = true
```

> The ML2 configuration file exists only on the Controller/Network Node and is set as follows:

```ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vlan,vxlan,gre ...
```
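For the VXLAN type driver I followed the install guide; a typical fragment would look like this (the VNI range shown is the guide's example value, not necessarily my exact setting):

```ini
[ml2_type_vxlan]
vni_ranges = 1:1000
```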