Questions asked

rlr's profile - activity

2020-02-13 03:32:55 -0500 received badge  Famous Question
2019-08-20 15:07:04 -0500 received badge  Supporter
2019-08-16 06:34:29 -0500 received badge  Notable Question
2019-08-16 06:34:29 -0500 received badge  Popular Question
2019-08-15 16:38:59 -0500 commented question How do I debug OVS/VXLAN tenant network issues?

After some investigation, I see that the ARP requests 1) are sent on the provider network instead of the tenant network, and 2) request resolution of IP addresses in the tenant network. I've added the actual ARP message as seen on the controller, from the provider NIC.
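
A capture like the one described can be reproduced directly on the provider NIC; tcpdump decodes the inner Ethernet/ARP frames when the UDP port matches the VXLAN port (a sketch using em2 and port 4789 from the config quoted below):

    # show VXLAN-encapsulated frames on the provider NIC, with the inner
    # ARP messages decoded (4789 matches vxlan_udp_port in the agent config)
    tcpdump -i em2 -nn -e udp port 4789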

2019-08-15 16:08:58 -0500 received badge  Editor
2019-08-14 14:21:31 -0500 commented question How do I debug OVS/VXLAN tenant network issues?

Using tcpdump on both poc1 and poc2, I can see that ARP requests for IP addresses on the tunneled (VXLAN tenant) network are not getting responses. I'll be spending some time figuring out how this is intended to work, but any advice to get me moving down the right path would be appreciated.
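
One way to see whether l2pop has programmed anything at all is to dump the tunnel-bridge flow tables (a sketch; in the stock neutron OVS agent, table 20 holds the per-MAC unicast-to-tunnel entries and table 21 the ARP responder entries, the latter only when arp_responder is enabled):

    # per-remote-MAC forwarding entries that l2pop should have installed
    ovs-ofctl dump-flows br-tun table=20
    # local ARP responder entries (empty unless arp_responder is enabled)
    ovs-ofctl dump-flows br-tun table=21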

2019-08-14 09:48:54 -0500 commented question How do I debug OVS/VXLAN tenant network issues?

Thanks Bernd. I've updated my question to include ml2_conf.ini (I don't have enough ask.openstack points yet to upload files). Thanks for pointing out the misspelling. I'll make the fix and report back.

2019-08-13 17:37:17 -0500 asked a question How do I debug OVS/VXLAN tenant network issues?

I'm working through a proof of concept with Stein. I have two physical servers, configured as 1) a controller/network/compute node (host=poc2) and 2) a compute-only node (host=poc1).

There are two physical Ethernet ports on each server, and I have successfully set up a provider network on an isolated flat LAN (192.168.0.0/24) that gives VM instances network access. My OpenStack management traffic is on the other LAN. I'm using openvswitch as the ML2 mechanism driver. Connectivity passes validation: 4 VM instances split across the two compute nodes (2 each) can ping each other in a full mesh.

The next step is to create an overlay tenant network using VXLAN over this flat provider network. I appear to have it configured correctly: the tap ports are set up, and ovs-vsctl shows br-tun with the appropriate ports. VM instances co-hosted on the same node (poc1 or poc2) can exchange ping traffic via their tap ports, but any attempt to ping across the overlay/tenant network between the two servers fails.
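
A quick sanity check for the tunnel side is to confirm each host has a VXLAN port on br-tun pointing at the other host's VTEP address (a sketch; the port name shown is hypothetical, substitute one reported by the first command):

    # neutron names tunnel ports vxlan-<hex of the remote VTEP IP>
    ovs-vsctl list-ports br-tun
    # inspect the local_ip/remote_ip options on a specific tunnel port
    # (example name is hypothetical)
    ovs-vsctl list interface vxlan-c0a8020b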

I do have the l2population driver loaded, which I understand is supposed to handle the ARP updates. I suspect something is wrong there, but it is certainly possible something else is going on.
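
Worth noting: in the OVS agent, l2_population distributes the forwarding entries, while answering ARP locally is a separate [agent] flag. A minimal sketch of the pair (arp_responder does not appear in the config quoted below, so enabling it is an assumption about intent, not a confirmed fix):

    [agent]
    l2_population = True
    # separate knob; without it, ARP requests are flooded out to the remote
    # VTEPs rather than answered locally from the l2pop-populated table
    arp_responder = True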

How do I attack this problem? I can't seem to find any log files that show ARP exchanges or l2pop activity.
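
ARP traffic itself is not logged, but the agents do log the l2pop fdb updates at debug level; a sketch of turning that on, assuming default RDO/Packstack file locations:

    # in /etc/neutron/neutron.conf (and the agent's own config) on each node
    [DEFAULT]
    debug = True

    # then watch for fdb add/remove entries as ports come and go
    tail -f /var/log/neutron/openvswitch-agent.log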

Thanks for any guidance. Relevant config follows...

An ARP capture via tcpdump on the controller's physical NIC (em2). Note that no messages come into the tunnel bridge.  In this case, the tenant network is the IP address space 192.168.2.128/25, and the provider network is 192.168.2.0/25.

192.168.2.12.53375 > 192.168.2.11.4789: [no cksum] VXLAN, flags [I] (0x08), vni 83
fa:16:3e:d5:bb:0d > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.203 tell 192.168.2.212, length 28


My openvswitch agent configuration:

        [ovs]
        vxlan_udp_port=4789
        tunnel_type=vxlan
        tunnel_id_ranges=1001:2000
        tenant_network_type=vxlan
        local_ip=192.168.2.12
        enalbe_tunneling=True
        bridge_mappings=provider:br-provider
        integration_bridge=br-int
        tunnel_bridge=br-tun

        [agent]
        l2_population=True
        drop_flows_on_start=False
        tunnel_types=vxlan
        vxlan_udp_port=4789
        polling_interval=2

        [securitygroup]
        firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
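
As the comment of 2019-08-14 notes, one [ovs] key above is misspelled; the recognized spelling is shown below (on recent releases tunneling is driven by tunnel_types in [agent], so this legacy flag may be ignored either way):

    [ovs]
    # corrected spelling of the flag flagged in the comments; newer agents
    # rely on tunnel_types in [agent] instead
    enable_tunneling = True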


My ml2_conf.ini:

    [ml2]
    type_drivers=flat,vxlan
    tenant_network_types=vxlan
    mechanism_drivers=openvswitch,l2population
    path_mtu=0
    extension_drivers=port_security,qos

    [securitygroup]
    enable_security_group=True
    firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

    [ml2_type_geneve]
    #max_header_size=38
    #vni_ranges=10:100

    [ml2_type_flat]
    flat_networks=provider

    [ml2_type_vxlan]
    vxlan_group=224.0.0.1
    vni_ranges=10:100

    #[ovs]
    bridge_mappings = provider:br-provider
    integration_bridge = br-int
    tenant_network_type = vxlan
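
One detail in the file as posted: the [ovs] section header is commented out, so an INI parser attributes the keys after it to the preceding [ml2_type_vxlan] section. If those keys are intended to take effect, the header has to be active (a sketch):

    # uncommented, so the keys land in the section they were meant for;
    # tenant network type belongs in tenant_network_types under [ml2]
    [ovs]
    bridge_mappings = provider:br-provider
    integration_bridge = br-int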


My poc2 (controller/network/compute) server with two VM instances, and em2 as the physical NIC to the provider network:

        [root@poc2 ~(keystone_demo)]# ovs-vsctl show
        954225e6-4e81-42e1-ae90-8bac02f38e9d
            Manager "ptcp:6640:127.0.0.1"
                is_connected: true
            Bridge br-int
                Controller "tcp:127.0.0.1:6633"
                    is_connected: true
                fail_mode: secure
                Port "tap6c607995-02"
                    tag: 1
                    Interface "tap6c607995-02"
                        type: internal
                Port "ovn-89320c-0"
                    Interface ...
2019-08-07 17:01:36 -0500 commented answer Issue with spawning vm on public network

Thanks Bernd! This "little detail" has been a thorn for me for too long. I thought the best incremental way forward was to create a compute node and just use the default public network from Packstack for hosted VMs. This port binding issue wouldn't go away. A private net + router works great!
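
For anyone landing here later, the pattern described amounts to roughly the following CLI sequence (a sketch with hypothetical names; "public" is assumed to be the Packstack-provided external network):

    # hypothetical names throughout
    openstack network create private
    openstack subnet create --network private --subnet-range 10.0.0.0/24 private-subnet
    openstack router create r1
    openstack router set --external-gateway public r1
    openstack router add subnet r1 private-subnet
    # instances attach to 'private' and reach outside via the router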