
kolla-ansible config adds l3-agent qg-interface in br-int (==> no inet access for VMs)

asked 2018-04-13 03:27:33 -0500

theque42

I am running kolla-ansible (pip) with Pike, on CentOS 7, all updated, in a multinode deploy with 2 nodes each for control, neutron, storage, and compute. All nodes have four IP-configured interfaces (eth0-eth3), and the neutron nodes have an un-numbered interface for external-network traffic.

globals.yml contains:

enable_neutron_provider_networks: "yes"
network_interface: "eth1"
kolla_external_vip_interface: "eth0"
api_interface: "eth1"
storage_interface: "eth3"
tunnel_interface: "eth2"
neutron_external_interface: "eth4"
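For reference, kolla-ansible's Open vSwitch bootstrap effectively does the equivalent of the following with neutron_external_interface (a dry-run sketch, not kolla's actual code; the real work happens inside the openvswitch_db container):

```shell
# Hedged sketch: how neutron_external_interface ends up as a port on br-ex.
# DRY_RUN=1 only prints the ovs-vsctl commands, so the sketch runs without
# an Open vSwitch installation.
EXT_IF="eth4"        # neutron_external_interface from globals.yml above
EXT_BRIDGE="br-ex"   # bridge name kolla-ansible uses by default

run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

DRY_RUN=1
run ovs-vsctl --may-exist add-br "$EXT_BRIDGE"
run ovs-vsctl --may-exist add-port "$EXT_BRIDGE" "$EXT_IF"
```

Note that nothing in this step touches the link state of eth4 itself.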

I can create VMs, networks, routers, FIPs, and volumes, and it all looks quite nice, including the provider network for external access.

# openstack network show PubInet
| Field                     | Value                                |
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2018-04-11T19:49:51Z                 |
| description               | Public Internet Access               |
| dns_domain                | None                                 |
| id                        | cfb385b8-1908-4fcc-8944-81792481012f |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | PubInet                              |
| port_security_enabled     | True                                 |
| project_id                | e410ae95250046ea8fa123e0231a0b01     |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 4                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 2d8c854b-5b77-437e-9b9d-33c9f86d2ebc |
| tags                      |                                      |
| updated_at                | 2018-04-11T19:49:55Z                 |

BUT, the OVS configuration attaches the router's global/external (qg-) interface to the wrong bridge. The br-ex bridge itself is properly configured with the chosen eth4, as can be seen:

docker exec -it openvswitch_vswitchd bash
(openvswitch-vswitchd)[root@neutron1 /]# ovs-vsctl show
    Manager "ptcp:6640:"
        is_connected: true
    Bridge br-ex
        Controller "tcp:"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth4"
            Interface "eth4"
    Bridge br-int
        Controller "tcp:"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qr-28133bb7-98"
            tag: 6
            Interface "qr-28133bb7-98"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qg-5debe8d7-06"
            tag: 7
            Interface "qg-5debe8d7-06"
                type: internal
        Port "tap624dda98-5d"
            tag: 6
            Interface "tap624dda98-5d"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-ac146566"
            Interface "vxlan-ac146566"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="", out_key=flow, remote_ip=""}
        Port br-tun
            Interface br-tun
                type: internal
(openvswitch-vswitchd)[root@neutron1 /]# exit

The l3-agent's qg- interface should have been added to the br-ex bridge, since that is where the flat provider network is accessible.

Somewhat unusually, the OVS configuration doesn't have an "openvswitch_agent.ini", but the same configuration (the [ovs] section) is supplied to the neutron_openvswitch_agent container via /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini, where it says:

bridge_mappings = physnet1:br-ex
datapath_type = system
ovsdb_connection = tcp:
local_ip =
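The bridge_mappings line is what ties a network's provider:physical_network to an OVS bridge. A minimal sketch of how the mapping resolves (the parsing here is plain shell over the literal config value; in reality the neutron OVS agent does this internally):

```shell
# Sketch: resolve which OVS bridge the physical_network of PubInet maps to,
# using the bridge_mappings value from ml2_conf.ini above.
bridge_mappings="physnet1:br-ex"   # from the config above
physnet="physnet1"                 # provider:physical_network of PubInet

bridge=""
# bridge_mappings may contain several comma-separated physnet:bridge pairs.
old_ifs=$IFS; IFS=','
for pair in $bridge_mappings; do
    case "$pair" in
        "$physnet":*) bridge="${pair#*:}" ;;
    esac
done
IFS=$old_ifs

echo "physnet '$physnet' -> bridge '${bridge:-<unmapped>}'"
```

For PubInet this resolves physnet1 to br-ex, which is exactly why traffic for the flat provider network is expected to leave via br-ex.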

So, why is the qg- interface not added correctly? What is missing? My memory tells me that I ran into a similar issue in my RDO Packstack attempts, but in those scenarios the OVS bridge contained additional virtual links that, weirdly, got the traffic through anyway. I THINK I solved it back then with the "external_network_bridge" parameter for the L3 agent, but:

    Type:   string

    Name of bridge used for external network traffic. When this parameter is set, the L3 agent will plug an interface directly ...


"The l3-agent's qg- interface should have been added to the br-ex bridge": no, qg- belongs in br-int, which is peered to br-ex through the int-br-ex/phy-br-ex patch ports. The reason is that router ports live in br-int, so an interface cannot be attached directly from br-ex to a router in br-int.

Eduardo Gonzalez ( 2018-04-13 05:13:01 -0500 )
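Eduardo's point can be read straight out of the ovs-vsctl dump in the question: the qg- port sits in br-int, and br-int is patched to br-ex. A small sketch that extracts the port-to-bridge placement (the dump is inlined and abridged from the output above):

```shell
# Sketch: map each Port in an `ovs-vsctl show` dump to its bridge.
dump='Bridge br-ex
    Port phy-br-ex
    Port "eth4"
Bridge br-int
    Port int-br-ex
    Port "qg-5debe8d7-06"'

placement=$(printf '%s\n' "$dump" | awk '
    /^Bridge/     { br = $2 }
    /^[ \t]+Port/ { gsub(/"/, "", $2); print $2, "->", br }')
printf '%s\n' "$placement"
```

qg-5debe8d7-06 lands in br-int by design; the int-br-ex/phy-br-ex patch pair is its path to br-ex and out through eth4.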

1 answer


answered 2018-04-13 03:38:04 -0500

theque42

updated 2018-04-13 03:40:37 -0500

Hello TheQue42!

If you use your eyes (the ones just above your nose) and check carefully in the ovs-vsctl output, you will see that there IS a peer=phy-br-ex patch port in the br-int bridge!

So, in other words, traffic should be able to reach the br-ex bridge after all. Why is it not working?

Well, have you checked the standard, normal interface statuses?

[root@neutron1 neutron-openvswitch-agent(admin)]# ip link show dev eth4
7: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master ovs-system state ***DOWN*** mode DEFAULT qlen 1000
    link/ether 52:54:de:a1:c1:44 brd ff:ff:ff:ff:ff:ff
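The clue is the state field in that ip link output. A tiny sketch of the check (link_line is copied from the output above; normally you would feed it from `ip -o link show dev eth4`):

```shell
# Sketch: decide whether a NIC is up from one line of `ip link` output.
# Normally: link_line=$(ip -o link show dev eth4)
link_line='7: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master ovs-system state DOWN mode DEFAULT qlen 1000'

case "$link_line" in
    *"state UP"*)   state=up ;;
    *"state DOWN"*) state=down ;;
    *)              state=unknown ;;
esac
echo "eth4 is $state"   # here: down -> fix with `ip link set dev eth4 up`
```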

Do you think there might be a clue there somewhere?

[root@neutron1 neutron-openvswitch-agent(admin)]# ifconfig eth4 up
[root@neutron1 neutron-openvswitch-agent(admin)]# qrouter ping
Running cmd: [ping] in namespace: {qrouter-0b582c5f-d010-4bb1-8b33-ce22437297c0}
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=56 time=5.37 ms
64 bytes from icmp_seq=2 ttl=56 time=3.72 ms
64 bytes from icmp_seq=3 ttl=56 time=3.70 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 3.700/4.267/5.378/0.787 ms

Well, whaddaya know... :-|
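Bringing eth4 up by hand fixes it only until the next reboot. On CentOS 7 the usual way to make the link come up at boot is an ifcfg file for the interface; a hypothetical sketch for this host (the file name and contents are assumptions, not taken from the question):

```
# /etc/sysconfig/network-scripts/ifcfg-eth4  (hypothetical)
DEVICE=eth4
TYPE=Ethernet
ONBOOT=yes        # bring the link up at boot
BOOTPROTO=none    # un-numbered interface: no IP address on eth4 itself
```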


