
ThomasHammann's profile - activity

2019-03-05 15:03:22 -0500 received badge  Famous Question (source)
2016-12-15 10:31:55 -0500 received badge  Notable Question (source)
2016-12-15 10:31:55 -0500 received badge  Popular Question (source)
2016-10-27 10:47:26 -0500 received badge  Famous Question (source)
2016-10-04 14:52:02 -0500 received badge  Notable Question (source)
2016-09-16 02:54:06 -0500 received badge  Popular Question (source)
2016-09-15 03:19:26 -0500 asked a question VXLAN Overlay Networking with Ubuntu OpenStack Mitaka

Dear audience,

I am interested in hearing about any experience with VXLAN overlay networking in Mitaka from a successful deployment. In my setup the VXLAN network is not working: VMs can be launched, but there is no access to the VMs, even without Security Groups.

I am using a multi-node environment with one Controller node that also acts as Network node, and two Compute nodes. As core plugin I am using ML2 with the Open vSwitch and L2Population mechanism drivers. I am following the installation procedure as described in the official OpenStack documentation at "http://docs.openstack.org/mitaka/install-guide-ubuntu/neutron-controller-install.html" and "http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html". Network Option 2 (Self-Service Networks) is used in my setup.

My questions are as follows:

1) Is it required to manually add the physical overlay network port to the OVS bridge br-tun? (e.g. ovs-vsctl add-port br-tun eth2)

2) When the physical port (in my case eth2) is added to the OVS bridge br-tun on all nodes, connectivity on the overlay network no longer exists at host (Linux) level. When I ping, for example, from the network node to the VTEP IP on a compute node (from 172.18.0.100 on the network node to 172.18.0.101 on the compute node), ARP requests and ICMP echo requests at host level are simply not answered by the port. Tests show this only happens when the physical overlay network ports (eth2 in my case) are added manually to the OVS bridge br-tun. When I remove the physical overlay network ports from br-tun, connectivity between the nodes on the overlay network is fine; as soon as the ports are added to br-tun again, host-level connectivity on the overlay network is gone.
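As a side note, the behaviour can be checked with a short diagnostic sketch (interface names and VTEP IPs taken from this post; run on the network node). With `local_ip` set, the OVS agent builds the vxlan/gre tunnel ports on br-tun itself, so the physical NIC carrying the VTEP IP normally stays out of br-tun:

```shell
# Diagnostic sketch, assuming the eth2/172.18.0.x layout described above.
ovs-vsctl list-ports br-tun   # expect patch-int plus agent-created vxlan-*/gre-* ports
ip addr show dev eth2         # the VTEP IP (e.g. 172.18.0.100) stays on the host interface
ping -c 3 172.18.0.101        # VTEP-to-VTEP reachability toward Compute1
```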

Is this normal behaviour? Can this be an issue for normal VXLAN networking functionality? Please also see my configuration. Do you see any issue in the configuration as follows?

> The physical interfaces for the overlay network (eth2) on all nodes are configured with a static
  IP in /etc/network/interfaces // Controller/Network node: 172.18.0.100 // Compute1: 172.18.0.101 // Compute2: 172.18.0.102

> The OVS agent is configured as follows (openvswitch_agent.ini)

       On Controller/Network Node:

        [agent]
        tunnel_types = gre,vxlan
        l2_population = True

        [ovs]
        tunnel_bridge = br-tun
        int_peer_patch_port = patch-tun
        tun_peer_patch_port = patch-int
        local_ip = 172.18.0.100
        bridge_mappings = vlan:br-vlan,external:br-ex,vxlan:br-tun
        enable_tunneling = true
        tunnel_type = gre,vxlan

        [securitygroup]
        firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
        enable_security_group = true
        enable_ipset = true

       On Compute Node (e.g. Compute1):

        [agent]
        tunnel_types = gre,vxlan
        l2_population = True

        [ovs]
        tunnel_bridge = br-tun
        int_peer_patch_port = patch-tun
        tun_peer_patch_port = patch-int
        local_ip = 172.18.0.101
        bridge_mappings = vlan:br-vlan,vxlan:br-tun
        enable_tunneling = true
        tunnel_type = gre,vxlan

        [securitygroup]
        firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
        enable_security_group = true
        enable_ipset = true

> The ML2 configuration file exists only on the Controller/Network node and is set as follows

        [ml2]
        type_drivers = flat,vlan,gre,vxlan
        tenant_network_types = vlan,vxlan,gre ...
2016-09-15 01:18:51 -0500 received badge  Famous Question (source)
2016-09-14 09:11:15 -0500 received badge  Notable Question (source)
2016-09-14 02:36:07 -0500 received badge  Popular Question (source)
2016-09-14 01:12:10 -0500 answered a question MTU size of VXLAN Tunnel in OpenStack Mitaka

Hello everyone,

thanks a lot for your support. I am not using the option "dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf" in the dhcp_agent.ini file, because the value in "dhcp-option=26,1400" is used during Path MTU Discovery, where the DHCP server assigns an MTU value to instances, and my instances (CirrOS) do not consider PMTU.

Be aware, my issue is not the MTU value on any instance or subnet, but only the MTU size of the physical network created with the neutron command "neutron net-create..........." with admin privileges.

When I use the value path_mtu=segment_mtu=1604 bytes in the ml2 file, Neutron reduces that value by 50 bytes, finally resulting in the expected 1554 bytes, which are required for VXLAN-based overlay networks. Please see the output of the "net-create ..." command executed with admin rights and path_mtu=segment_mtu=1604 bytes:

        root@controller01:~# neutron net-create TENANT_NET100 --tenant-id fe55df4eac9a4f7b9460877e5c30a923 --provider:network_type vxlan --provider:segmentation_id 100
        Created a new network:
        +---------------------------+--------------------------------------+
        | Field                     | Value                                |
        +---------------------------+--------------------------------------+
        | admin_state_up            | True                                 |
        | availability_zone_hints   |                                      |
        | availability_zones        |                                      |
        | created_at                | 2016-09-08T14:07:52                  |
        | description               |                                      |
        | id                        | d2a33db5-c466-4d6e-a22a-eb1e70905007 |
        | ipv4_address_scope        |                                      |
        | ipv6_address_scope        |                                      |
        | mtu                       | 1554                                 |
        | name                      | TENANT_NET100                        |
        | port_security_enabled     | True                                 |
        | provider:network_type     | vxlan                                |
        | provider:physical_network |                                      |
        | provider:segmentation_id  | 100                                  |
        | router:external           | False                                |
        | shared                    | False                                |
        | status                    | ACTIVE                               |
        | subnets                   |                                      |
        | tags                      |                                      |
        | tenant_id                 | fe55df4eac9a4f7b9460877e5c30a923     |
        | updated_at                | 2016-09-08T14:07:52                  |
        +---------------------------+--------------------------------------+

When I instead use the normally expected value of 1554 bytes (path_mtu=segment_mtu=1554 bytes) in the ml2 config, the same output shows 1504 bytes for the physical network, which is insufficient for VXLAN overlay networks.

My question is: why does Neutron, when creating physical networks as the admin user, subtract 50 bytes from the configured path_mtu/segment_mtu? This problem happens not only on Ubuntu OpenStack Mitaka, but also on the Ubuntu OpenStack Liberty release.

As a workaround we of course just use the value path_mtu=segment_mtu=1604 bytes in the ml2 config file.
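The workaround arithmetic can be sketched as follows (values taken from this thread):

```shell
# Neutron subtracts the 50-byte VXLAN encapsulation overhead from the
# configured path_mtu/segment_mtu, so to end up with a 1554-byte network
# MTU you configure 1604.
DESIRED_NET_MTU=1554
VXLAN_OVERHEAD=50
CONFIGURED_PATH_MTU=$((DESIRED_NET_MTU + VXLAN_OVERHEAD))
echo "$CONFIGURED_PATH_MTU"   # prints 1604
```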

Greetings Thomas

2016-09-13 11:02:03 -0500 asked a question TaaS with OpenStack Mitaka

Dear community,

Is there any way to test Tap-as-a-Service (TaaS) on the Ubuntu OpenStack Mitaka release (Ubuntu 14.04)? I have installed the OpenStack environment as described in "http://docs.openstack.org/mitaka/install-guide-ubuntu/". I am not sure if TaaS is supported in that release.

Are you aware of any installation guide or instructions for installing TaaS on Ubuntu OpenStack Mitaka? Is it supported? Do you know from which OpenStack release TaaS is officially available?

Thanks for your support
Thomas

2016-09-13 11:02:00 -0500 asked a question MTU size of VXLAN Tunnel in OpenStack Mitaka
Dear support,
I am using OpenStack Mitaka running on Ubuntu 14.04 LTS Server. ML2 is configured as the core plugin, using the Open vSwitch and L2Population mechanism drivers. Installation was done as described in "http://docs.openstack.org/mitaka/install-guide-ubuntu/".

In the ML2 configuration file the MTU size values are configured as follows:

        path_mtu = 1554
        segment_mtu = 1554
        physical_network_mtus = vlan:1500,external:1500

When creating a physical network of type vxlan (neutron net-create ...), Neutron calculates the MTU size as the minimum of path_mtu and segment_mtu and finally subtracts 50 bytes. The 50 bytes are the overhead of the outer Ethernet header (14 bytes), outer IP header (20 bytes), UDP header (8 bytes), and VXLAN header (8 bytes).

Explanation in the ML2 configuration file:

        # Maximum size of an IP packet (MTU) that can traverse the underlying physical
        # network infrastructure without fragmentation when using an overlay/tunnel
        # protocol. Either set this to the same value as the global_physnet_mtu value
        # or use it to explicitly specify a physical network MTU value that differs
        # from the default global_physnet_mtu value. (integer value)
        path_mtu = 1554
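The calculation can be checked with a quick sketch, using the header sizes listed in the question:

```shell
# VXLAN encapsulation overhead, term by term:
OUTER_ETH=14   # outer Ethernet header
OUTER_IP=20    # outer IPv4 header
UDP=8          # UDP header
VXLAN=8        # VXLAN header
OVERHEAD=$((OUTER_ETH + OUTER_IP + UDP + VXLAN))

# Neutron takes min(path_mtu, segment_mtu) and subtracts the overhead:
PATH_MTU=1554
SEGMENT_MTU=1554
MIN=$(( PATH_MTU < SEGMENT_MTU ? PATH_MTU : SEGMENT_MTU ))
NET_MTU=$(( MIN - OVERHEAD ))
echo "$OVERHEAD $NET_MTU"   # prints "50 1504"
```

This reproduces the observed result: with 1554 configured, the created network gets an MTU of 1504.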

Questions: 

1.) Why does Neutron not use the configured 1554-byte MTU size from the ML2 configuration file, but subtracts 50 bytes, resulting in an insufficient MTU size for VXLAN networks (in my case 1504 bytes)? Why is it not mentioned in the ML2 configuration file that Neutron subtracts 50 bytes from the configured path_mtu/segment_mtu value, to inform users about this important aspect? What is the idea behind reducing the path_mtu value by 50 bytes?

2.) When updating the path_mtu and segment_mtu values in the ML2 configuration file, tests show that the new values have no effect on already created networks; they only take effect when creating new networks.
I tried to update an already created network, but the MTU size is not offered as an option. Is there any procedure to apply the new path_mtu/segment_mtu values from the ML2 configuration to already created networks? Restarting the cluster or any individual process does not help. Is it really required to delete and re-create a physical network to adapt it to the new values from the ml2 configuration?
Thanks a lot for your support
Thomas