kolla-ansible config adds l3-agent qg-interface to br-int (==> no Internet access for VMs)

I am running kolla-ansible (installed via pip) with Pike on CentOS 7, all updated, in a multinode deployment with 2 nodes each for control, neutron, storage and compute. All nodes have four IP-configured interfaces (eth0-eth3), and the neutron nodes have an additional un-numbered interface (eth4) for external-network traffic.

globals.yml contains:

enable_neutron_provider_networks: "yes"
network_interface: "eth1"
kolla_external_vip_interface: "eth0"
api_interface: "eth1"
storage_interface: "eth3"
tunnel_interface: "eth2"
neutron_external_interface: "eth4"

I can create VMs, networks, routers, FIPs and volumes, and everything looks quite good, including the provider network for external access.

# openstack network show PubInet
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2018-04-11T19:49:51Z                 |
| description               | Public Internet Access               |
| dns_domain                | None                                 |
| id                        | cfb385b8-1908-4fcc-8944-81792481012f |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | PubInet                              |
| port_security_enabled     | True                                 |
| project_id                | e410ae95250046ea8fa123e0231a0b01     |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 4                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 2d8c854b-5b77-437e-9b9d-33c9f86d2ebc |
| tags                      |                                      |
| updated_at                | 2018-04-11T19:49:55Z                 |
+---------------------------+--------------------------------------+
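For reference, the provider network was created roughly like this (a sketch reconstructed from the output above; the subnet name, CIDR and gateway are placeholders, not my real values):

openstack network create --external \
    --provider-network-type flat --provider-physical-network physnet1 \
    PubInet
openstack subnet create --network PubInet --no-dhcp \
    --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 \
    PubInet-subnet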

BUT, the OVS configuration attaches the router's gateway/external (qg-) interface to the wrong bridge. The br-ex bridge itself is properly configured with the chosen eth4, as can be seen here:

docker exec -it openvswitch_vswitchd bash
(openvswitch-vswitchd)[root@neutron1 /]# ovs-vsctl show
34ccd256-f498-4c06-aa06-7fa22441c5a9
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth4"
            Interface "eth4"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qr-28133bb7-98"
            tag: 6
            Interface "qr-28133bb7-98"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qg-5debe8d7-06"
            tag: 7
            Interface "qg-5debe8d7-06"
                type: internal
        Port "tap624dda98-5d"
            tag: 6
            Interface "tap624dda98-5d"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-ac146566"
            Interface "vxlan-ac146566"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.20.101.104", out_key=flow, remote_ip="172.20.101.102"}
        Port br-tun
            Interface br-tun
                type: internal
(openvswitch-vswitchd)[root@neutron1 /]# exit

The l3-agent's qg-interface should have been added to the br-ex bridge, since that is where the flat provider network is reachable.
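
To double-check which bridge the port ended up on, these OVS commands (run against the same openvswitch_vswitchd container, with the port and bridge names taken from the output above) should show it:

docker exec openvswitch_vswitchd ovs-vsctl port-to-br qg-5debe8d7-06   # prints br-int here
docker exec openvswitch_vswitchd ovs-vsctl list-ports br-ex            # only phy-br-ex and eth4
docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-ex            # flows on the external bridge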

Somewhat unusually, the OVS configuration does not have an "openvswitch_agent.ini"; instead the same settings (the [ovs] section) are supplied to the neutron_openvswitch_agent container via /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini, where it says:

[ovs]
bridge_mappings = physnet1:br-ex
datapath_type = system
ovsdb_connection = tcp:127.0.0.1:6640
local_ip = 172.20.101.104
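
(As an aside: if the bridge_mappings value itself ever needed changing, my understanding is that kolla-ansible merges operator overrides from the node_custom_config directory, /etc/kolla/config by default, so a file along these lines should be picked up on the next deploy/reconfigure; the exact path is an assumption on my part:)

# /etc/kolla/config/neutron/ml2_conf.ini  -- merged into the container's ml2_conf.ini
[ovs]
bridge_mappings = physnet1:br-ex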

So, why is the qg-interface not added correctly? What is missing? My memory tells me that I ran into a similar issue in my RDO Packstack attempts, but in those scenarios the OVS bridge contained additional virtual links that, oddly enough, got the traffic through anyway. I THINK I solved it back then with the "external_network_bridge" parameter in neutron.conf, but:

 external_network_bridge
    Type:    string
    Default:

    Name of bridge used for external network traffic. When this parameter is set, the L3 agent will plug an interface directly into an external bridge which will not allow any wiring by the L2 agent. Using this will result in incorrect port statuses. This option is deprecated and will be removed in Ocata.

This seems to imply that there SHOULD be "additional wiring by the L2 agent". 1) Why would you want that? The external router interface SHOULD be connected to the external network, shouldn't it? 2) And why ISN'T it, if that is the case?
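
Just to be explicit, the deprecated workaround I remember would look roughly like this in the L3 agent's configuration (shown only for comparison, since it is deprecated and apparently no longer the intended way):

[DEFAULT]
# deprecated: makes the L3 agent plug qg- ports straight into br-ex,
# bypassing the wiring normally done by the L2 agent via bridge_mappings
external_network_bridge = br-ex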

Please help!

TQ