
Mitaka neutron-openvswitch-agent down on compute node

asked 2016-06-04 23:51:25 -0500 by Yufeng

Hi, I have a two-node OpenStack Mitaka testbed installed. One node acts as an all-in-one node called controller; the other acts as a compute node called computer. The neutron-openvswitch-agent on controller is up, while on computer it is down. The error log shows the following:

2016-06-05 00:40:10.211 29604 INFO neutron.common.config [-] Logging enabled!
2016-06-05 00:40:10.211 29604 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 8.1.0
2016-06-05 00:40:10.296 29604 INFO oslo_rootwrap.client [-] Spawned new rootwrap daemon process with pid=29614
2016-06-05 00:40:10.350 29604 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-c898b853-3cce-4622-b504-0b92d481128d - - - - -] Mapping physical network provider to bridge br-ex
2016-06-05 00:40:10.351 29604 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-c898b853-3cce-4622-b504-0b92d481128d - - - - -] Bridge br-ex for physical network provider does not exist. Agent terminated!
2016-06-05 00:40:10.352 29604 INFO oslo_rootwrap.client [req-c898b853-3cce-4622-b504-0b92d481128d - - - - -] Stopping rootwrap daemon process with pid=29614
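A quick way to confirm what the agent is complaining about is to ask Open vSwitch directly on the compute node whether the bridge exists (this is a generic check using standard ovs-vsctl subcommands, not something from the original post):

```shell
# On the compute node: check whether the bridge the agent expects is present.
# "ovs-vsctl br-exists" exits 0 if the bridge exists, non-zero otherwise.
ovs-vsctl br-exists br-ex && echo "br-ex present" || echo "br-ex missing"

# List all bridges known to the local Open vSwitch for comparison:
ovs-vsctl list-br
```

On the compute node described here this would report br-ex as missing, matching the agent's "Bridge br-ex for physical network provider does not exist" error.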

As the log shows, the bridge br-ex does not exist on the compute node, while on the network node (installed on controller) br-ex does exist:

[root@controller01 nova]# ovs-vsctl show
    Bridge br-int
        fail_mode: secure
        Port "qg-a33ecc48-f6"
            tag: 2
            Interface "qg-a33ecc48-f6"
                type: internal
        Port "qvo64cb3b30-ef"
            tag: 4095
            Interface "qvo64cb3b30-ef"
        Port "qr-0134dc62-0a"
            tag: 1
            Interface "qr-0134dc62-0a"
                type: internal
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
        Port "qvo0f8fa29e-84"
            tag: 2
            Interface "qvo0f8fa29e-84"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo91c2946e-6f"
            tag: 1
            Interface "qvo91c2946e-6f"
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-cad29764-ed"
            Interface "qg-cad29764-ed"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eno49"
            Interface "eno49"
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a8d43d"
            Interface "vxlan-c0a8d43d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="", out_key=flow, remote_ip=""}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"

the neutron.conf on computer:

[root@computer001 neutron]# egrep -v '(^$|^#)'  /etc/neutron/neutron.conf 
auth_strategy = keystone
rpc_backend = rabbit
auth_uri =
auth_url =
memcached_servers =
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/neutron/tmp
rabbit_host =
rabbit_userid = openstack
rabbit_password = openstack

the dhcp_agent.ini on computer:

[root@computer001 neutron]# egrep -v '(^$|^#)'  /etc/neutron/dhcp_agent.ini 

the metadata_agent.ini on computer:

[root@computer001 neutron]# egrep -v '(^$|^#)'  /etc/neutron/metadata_agent.ini 

the ml2_conf.ini on computer:

[root@computer001 ml2]# egrep -v '(^$|^#)'  /etc/neutron/plugins/ml2/ml2_conf.ini 
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
flat_networks = provider
vni_ranges = 1:1000
enable_ipset = True

the openvswitch-agent.ini on computer:

[root@computer001 ml2]# egrep -v ...
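The option the agent is acting on lives in the [ovs] section of this file. A minimal sketch of the relevant settings is below; the mapping name "provider" matches the log and the ml2_conf.ini above, but the exact values (local_ip in particular) are illustrative assumptions, not the poster's actual config:

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini (sketch; values are assumptions)
[ovs]
# Maps the physical network "provider" to the OVS bridge "br-ex".
# The named bridge must exist on every node running this agent.
bridge_mappings = provider:br-ex
local_ip = <tunnel-endpoint-IP-of-this-node>

[agent]
tunnel_types = vxlan
l2_population = True
```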

2 answers


answered 2016-07-20 19:04:40 -0500

The openvswitch-agent will not start on the compute node (computer) because it expects the bridge br-ex to exist on that node. Your configuration maps the flat provider network to br-ex (via ml2_conf.ini and the bridge_mappings option in openvswitch_agent.ini), so the bridge must be created on every node running the agent, not just the controller.




The problem has been resolved; I may well have used the method you mention above. Thanks for your reply.

Yufeng ( 2016-08-04 22:10:08 -0500 )

answered 2016-08-02 00:04:13 -0500 by He Qing

You need to add br-ex before starting the ovs-agent, like this:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex <physical_interface>
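After creating the bridge, the agent has to be restarted and its state re-checked from the controller. The service name below assumes a RHEL/CentOS-style installation (which the [root@computer001 ...] prompts suggest), and the verification command is the Mitaka-era neutron CLI; both are assumptions, not from the original answer:

```shell
# On the compute node: restart the agent so it picks up the new bridge
# (service name assumed for RHEL/CentOS packaging)
systemctl restart neutron-openvswitch-agent

# On the controller: confirm the agent now reports alive (":-)" in the output)
neutron agent-list
```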


Thanks, the problem was resolved several days ago. I may have used the same solution you posted, but I have forgotten the exact method. Anyway, I appreciate your help.

Yufeng ( 2016-08-04 22:07:19 -0500 )
