
Linux bridge agent functioning on controller, but not compute node; both nodes visible to each other, services running

asked 2017-07-11 11:38:39 -0500 by zerorobben


I'm struggling with my network configuration (or my Neutron configuration). I am following the RDO installation guide with "Networking Option 2." Both nodes run a recent version of CentOS 7.

[root@controller ~]# openstack network agent list # output has been modified to fit on small screens

| ID | Agent Type         | Host       | Avail Zone | Alive | State | Binary                    |
| ID | Linux bridge agent | controller | None       | True  | UP    | neutron-linuxbridge-agent |
| ID | L3 agent           | controller | nova       | True  | UP    | neutron-l3-agent          |
| ID | DHCP agent         | controller | nova       | True  | UP    | neutron-dhcp-agent        |
| ID | Metadata agent     | controller | None       | True  | UP    | neutron-metadata-agent    |

According to the documentation, I should also see a Linux bridge agent for the compute node (compute1) in this list. Initially the controller's bridge agent did not appear either; that was resolved by removing the InfiniBand card from my server (apparently older versions of OpenStack don't like its non-standard MAC address). I removed the card from the compute node as well, but the issue persists, which leads me to believe the problem lies in my network configuration.

The neutron-linuxbridge-agent service is running on the compute node, but it restarts repeatedly at short intervals.

[root@compute1 ~]# systemctl | grep neutron

neutron-linuxbridge-agent.service                                                                                              loaded active running   OpenStack Neutron Linux Bridge Agent

[root@compute1 ~]# journalctl -xel

Jul 11 10:12:41 compute1 neutron-linuxbridge-agent[30559]: Could not load neutron.openstack.common.notifier.rpc_notifier
Jul 11 10:12:41 compute1 systemd[1]: neutron-linuxbridge-agent.service: main process exited, code=exited, status=1/FAILURE
Jul 11 10:12:41 compute1 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.
Jul 11 10:12:41 compute1 systemd[1]: neutron-linuxbridge-agent.service failed.
Jul 11 10:12:41 compute1 systemd[1]: neutron-linuxbridge-agent.service holdoff time over, scheduling restart.
Jul 11 10:12:41 compute1 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent...

The error message on the compute node is different. The IP in question is the management IP of the controller and is associated with the device "eno1."

[root@compute1 ~]# tail -40 /var/log/neutron/linuxbridge-agent.log

2017-07-11 10:33:53.806 10798 INFO neutron.common.config [-] Logging enabled!
2017-07-11 10:33:53.806 10798 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-agent version 10.0.1
2017-07-11 10:33:53.806 10798 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface mappings: {'provider': 'eno2'}
2017-07-11 10:33:53.807 10798 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge mappings: {}
2017-07-11 10:33:53.827 10798 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Tunneling cannot be enabled without the local_ip bound to an interface on the host. Please configure local_ip on the host interface to be used for tunneling and restart the agent.
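As a sanity check, I can verify whether a given local_ip value is actually bound to an interface on the host. A sketch with example values (127.0.0.1 and the temp file stand in for my real config, which lives at /etc/neutron/plugins/ml2/linuxbridge_agent.ini):

```shell
# Write a sample [vxlan] section to a temp file (example values only).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[vxlan]
enable_vxlan = true
local_ip = 127.0.0.1
l2_population = true
EOF

# Extract local_ip and check whether any interface on this host carries it.
LOCAL_IP=$(awk -F' *= *' '$1 == "local_ip" {print $2}' "$CONF")
if ip -o -4 addr show | grep -qw "$LOCAL_IP"; then
    echo "local_ip $LOCAL_IP is bound to an interface on this host"
else
    echo "local_ip $LOCAL_IP is NOT bound on this host"
fi
```

If the second branch fires, that matches the agent's "local_ip bound to an interface" complaint.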

The compute node is visible to the controller.

[root@controller ~]# openstack host list

| Host Name  | Service     | Zone     |
| compute1   | compute     | nova     |
| controller | consoleauth | internal |
| controller | conductor   | internal |
| controller | scheduler   | internal |

[root@controller ~]# openstack hypervisor list

| ID | Hypervisor Hostname | Hypervisor Type | Host IP ...

1 answer


answered 2017-07-11 17:10:20 -0500 by Bernd Bausch

The key to the problem is the local_ip config setting in the linuxbridge_agent.ini file on the compute node. What is your value?
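For reference, the [vxlan] section on the compute node should look roughly like this. The address is a placeholder (10.0.0.31 is the install guide's example address for compute1); the point is that local_ip must be an address configured on the compute node itself:

```ini
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini on compute1
[vxlan]
enable_vxlan = true
# An address that exists on *this* host (placeholder value):
local_ip = 10.0.0.31
l2_population = true
```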




Hi Bernd,

Thanks for the comment. It is set to the management IP address of the controller, as the error message recommends.


enable_vxlan = true
local_ip =
l2_population = true


zerorobben ( 2017-07-12 09:12:09 -0500 )

It needs to be set to the IP address of the interface used for tunneling, but that may well be identical to the management interface.

This is puzzling. As a next step, I would set the debug flag, so that the agent reports the config values when it starts.
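Flipping the flag could look like this (shown against a temp copy; on the node you would edit /etc/neutron/neutron.conf and restart the agent):

```shell
# Sketch against a temporary copy (the real file is /etc/neutron/neutron.conf).
CONF=$(mktemp)
printf '[DEFAULT]\ndebug = false\n' > "$CONF"

# Turn on debug logging so the agent dumps its parsed options at startup.
sed -i 's/^debug *=.*/debug = true/' "$CONF"
grep '^debug' "$CONF"

# On the real node, follow with:
#   systemctl restart neutron-linuxbridge-agent
```

The agent will then log every effective option, including local_ip, in /var/log/neutron/linuxbridge-agent.log.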

Bernd Bausch ( 2017-07-12 10:17:16 -0500 )


I re-installed everything on different nodes (on an Ubuntu Server base this time) with the same configuration, and both Linux bridge agents now appear without issue. I'll check whether my CentOS configuration files differ, but I consider this resolved.
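To compare the two installs, I'll diff the effective settings with comments and blank lines stripped. The file names and contents below are just an illustration:

```shell
# Strip comments and blank lines so only effective settings are compared.
effective() { grep -Ev '^[[:space:]]*(#|$)' "$1" | sort; }

# Hypothetical copies of linuxbridge_agent.ini from the two installs.
cat > centos.ini <<'EOF'
[vxlan]
enable_vxlan = true
# the controller's management IP was used here
local_ip = 10.0.0.11
EOF
cat > ubuntu.ini <<'EOF'
[vxlan]
enable_vxlan = true
local_ip = 10.0.0.31
EOF

effective centos.ini > centos.effective
effective ubuntu.ini > ubuntu.effective
# diff exits non-zero when the files differ, hence the || true.
diff centos.effective ubuntu.effective || true
```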


zerorobben ( 2017-07-12 12:01:38 -0500 )

