Can anybody help with this issue?
After I deployed my HA OpenStack cluster (non-production), I got the following error messages whenever I shut down a VM. Unless I restart or stop neutron-linuxbridge-agent.service on the compute node, these logs never stop printing.
/var/log/neutron/server.log
on the controller node (the full error logs are in the last section):

2017-12-28 16:01:26.964 16265 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'd2ab84b4-8339-491b-888b-ffaede27d795', u'name': u'network-vif-unplugged', u'server_uuid': u'e6dac399-7743-46ed-a384-1cecca3ac3f4', u'code': 200}
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server [req-edcb230d-6314-4b87-b13e-51691254391d - - - - -] Exception during message handling: AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found
/var/log/neutron/linuxbridge-agent.log
on the compute node (the full error logs are in the last section):

2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch\n result = func(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 234, in update_device_down\n n_const.PORT_STATUS_DOWN, host)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 331, in notify_l2pop_port_wiring\n l2pop_driver.obj.update_port_down(port_context)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 253, in update_port_down\n admin_context, agent_host, [port[\'device_id\']]):\n', u' File "/usr/lib/python2.7/site-packages/neutron/db/l3_agentschedulers_db.py", line 303, in list_router_ids_on_host\n context, constants.AGENT_TYPE_L3, host)\n', u' File "/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 291, in _get_agent_by_type_and_host\n host=host)\n', u'AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found\n'].
I used Pike to deploy my HA OpenStack cluster; the OS is CentOS 7.x. There are four nodes in the cluster: three controller nodes and one compute node. All four nodes are VMs on a single physical host, each with 4 CPU cores and 8 GB of RAM. The controller and cluster services (pacemaker, haproxy, memcached, rabbitmq, mariadb, keystone, and so on) are all deployed on the controller nodes. Host names are resolved through a DNS server, and the time on all nodes is synchronized through an NTP server.
Everything seemed to work well after I deployed the HA cluster, until I shut down a VM and the error messages began to print.
This issue confused me for several days. During that time I checked the neutron conf files over and over, redeployed the neutron services many times, and tried many ways to find out where I had gone wrong and fix it, but none of it had any effect.
Only recently did I begin to suspect that this might be caused by l2population, so I tried disabling that driver.
In my first experiment, I followed "Linux bridge: Provider networks" and deployed only a provider network, with DHCP and metadata agents on the controller nodes. All components worked well; no error messages were printed.
You can find these guides at https://docs.openstack.org/neutron/pike/admin/deploy-lb.html.
Then I began the second experiment. Following the guide "Linux bridge: High availability using VRRP", I deployed the neutron services in my cluster without l2population support, by setting mechanism_drivers = linuxbridge in /etc/neutron/plugins/ml2/ml2_conf.ini on the controller nodes and l2_population = false in /etc/neutron/plugins/ml2/linuxbridge_agent.ini on all controller and compute nodes. Everything was fine; no error messages showed.
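To double-check that every node really picked up the intended flag, a small helper like the following can read it from each node's linuxbridge_agent.ini. This is just a sketch of mine, not part of neutron: it uses plain configparser (the real file is parsed by oslo.config, but for this simple key the result is the same), and the path shown is the standard location in this deployment.

```python
import configparser

def l2pop_enabled(path):
    """Return the [vxlan] l2_population flag from a linuxbridge_agent.ini-style file."""
    cfg = configparser.ConfigParser()
    cfg.read(path)  # silently yields an empty config if the file is missing
    # Fall back to False when the option is absent, matching neutron's default.
    return cfg.getboolean("vxlan", "l2_population", fallback=False)

# Example (adjust the path for your deployment):
# l2pop_enabled("/etc/neutron/plugins/ml2/linuxbridge_agent.ini")
```

Running this against every controller and compute node makes it easy to spot a node that was missed during a config change.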
I was almost certain that the key was l2population.
After that, I began the third test. I set mechanism_drivers = linuxbridge,l2population and l2_population = true on top of experiment two, then restarted the whole cluster. After I started, and then shut down, the server created in experiment two, the error messages began to print.
So the key is l2population.
What puzzles me is the error message "Agent with agent_type=L3 agent and host=compute1.example.com could not be found", since my three L3 agents are distributed across the three controller nodes, and the output of openstack network agent list proves it.
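If I read the traceback right, the l2pop mechanism driver's update_port_down() asks the L3 scheduler for the routers on the host of the port that went down, and that lookup first fetches an L3 agent registered on that host; a compute-only node never registers one, so the lookup raises. Below is a minimal sketch of that code path (not the real Neutron code; the agent set is abridged from my agent list):

```python
class AgentNotFoundByTypeHost(Exception):
    """Mirrors neutron's exception of the same name."""

# (agent_type, host) pairs as reported by `openstack network agent list` (abridged).
AGENTS = {
    ("L3 agent", "controller1.example.com"),
    ("L3 agent", "controller2.example.com"),
    ("L3 agent", "controller3.example.com"),
    ("Linux bridge agent", "compute1.example.com"),
}

def get_agent_by_type_and_host(agent_type, host):
    # Analogous to agents_db._get_agent_by_type_and_host(): raises when no
    # agent of that type is registered on that host.
    if (agent_type, host) not in AGENTS:
        raise AgentNotFoundByTypeHost(
            "Agent with agent_type=%s and host=%s could not be found"
            % (agent_type, host))
    return (agent_type, host)

def list_router_ids_on_host(host):
    # Analogous to l3_agentschedulers_db.list_router_ids_on_host(), which
    # l2pop's update_port_down() calls with the *port's* host -- here the
    # compute node, which has no L3 agent.
    get_agent_by_type_and_host("L3 agent", host)
    return []
```

Calling list_router_ids_on_host("compute1.example.com") raises, while the same call for any controller host succeeds, which would explain why the error names the compute node even though the L3 agents themselves are healthy.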
I don't know why this happens. Is this a bug, or did I make a mistake in the configuration?
Can anyone explain this and tell me how to fix it?
Thank you in advance!
The following are the conf files I used in my last two experiments. (Please ignore the redundant SSL configurations; they are there for test purposes.)
Package Versions
The following are the newest packages. They were updated (via `yum -y update`) after I failed to fix the issue; obviously, that did not help either.
openstack-neutron-11.0.2-2.el7.noarch
openstack-neutron-ml2-11.0.2-2.el7.noarch
openstack-neutron-linuxbridge-11.0.2-2.el7.noarch
High availability using VRRP without l2population (Experiment Two)
Controller Nodes:
/etc/neutron/neutron.conf
[DEFAULT]
use_ssl = true
bind_host = 172.1.1.131
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:rabbit_pwd@controller1.example.com:5671,openstack:rabbit_pwd@controller2.example.com:5671,openstack:rabbit_pwd@controller3.example.com:5671
# High Available DHCP Agents
dhcp_agents_per_network = 3
# High Availability for Configured Routers
l3_ha = true
allow_automatic_l3agent_failover = true
max_l3_agents_per_router = 3

[agent]

[cors]

[database]
connection = mysql+pymysql://neutron:neutrondb_pwd@openstack.example.com/neutron

[keystone_authtoken]
auth_uri = https://openstack.example.com:5000
auth_url = https://openstack.example.com:35357
memcached_servers = controller1.example.com:11211,controller2.example.com:11211,controller3.example.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pwd
cafile = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
certfile = /etc/pki/tls/certs/controller1.example.com.crt
keyfile = /etc/pki/tls/private/controller1.example.com.key

[matchmaker_redis]

[nova]
auth_url = https://openstack.example.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova_pwd
cafile = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
certfile = /etc/pki/tls/certs/controller1.example.com.crt
keyfile = /etc/pki/tls/private/controller1.example.com.key

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]
# TLS/SSL Support for Connecting HA RabbitMQ Services
ssl = true
ssl_ca_file = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
ssl_cert_file = /etc/pki/tls/certs/controller1.example.com.crt
ssl_key_file = /etc/pki/tls/private/controller1.example.com.key
# HA Configurations
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_durable_queues = true
rabbit_ha_queues = true

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]
version = TLSv1
ciphers = HIGH:MEDIUM:!aNULL:!MD5:!SEED:!IDEA
ca_file = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
cert_file = /etc/pki/tls/certs/controller1.example.com.crt
key_file = /etc/pki/tls/private/controller1.example.com.key
/etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]

[l2pop]

[ml2]
extension_drivers = port_security
mechanism_drivers = linuxbridge
tenant_network_types = vxlan
type_drivers = flat,vlan,vxlan

[ml2_type_flat]
flat_networks = provider

[ml2_type_geneve]

[ml2_type_gre]

[ml2_type_vlan]

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]

[agent]

[linux_bridge]
physical_interface_mappings = provider:eth0

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan = true
local_ip = 172.1.1.131
l2_population = false
/etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

[agent]

[ovs]
/etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
external_network_bridge =

[agent]

[ovs]
/etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_protocol = https
nova_metadata_insecure = false
auth_ca_cert = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
nova_client_cert = /etc/pki/tls/certs/controller1.example.com.crt
nova_client_priv_key = /etc/pki/tls/private/controller1.example.com.key
nova_metadata_host = openstack.example.com
metadata_proxy_shared_secret = metadata_pwd

[agent]

[cache]
Compute Node:
/etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
transport_url = rabbit://openstack:rabbit_pwd@controller1.example.com:5671,openstack:rabbit_pwd@controller2.example.com:5671,openstack:rabbit_pwd@controller3.example.com:5671

[agent]

[cors]

[database]

[keystone_authtoken]
auth_uri = https://openstack.example.com:5000
auth_url = https://openstack.example.com:35357
memcached_servers = controller1.example.com:11211,controller2.example.com:11211,controller3.example.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pwd
cafile = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
certfile = /etc/pki/tls/certs/compute1.example.com.crt
keyfile = /etc/pki/tls/private/compute1.example.com.key

[matchmaker_redis]

[nova]

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]
# TLS/SSL Support for Connecting HA RabbitMQ Services
ssl = true
ssl_ca_file = /etc/pki/ca-trust/source/anchors/ca.example.com.crt
ssl_cert_file = /etc/pki/tls/certs/compute1.example.com.crt
ssl_key_file = /etc/pki/tls/private/compute1.example.com.key
# HA Configurations
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_durable_queues = true
rabbit_ha_queues = true

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]

[agent]

[linux_bridge]
physical_interface_mappings = provider:eth0

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan = true
local_ip = 172.1.1.134
l2_population = false
Verify Operation
openstack network agent list
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+
| 0de759d3-cac6-4e14-838e-5b544a7c3689 | DHCP agent         | controller2.example.com | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 35a90635-7540-442d-8142-f321db50ccc1 | Linux bridge agent | compute1.example.com    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 3741a4ac-0c45-4493-94bd-ba243b1ba507 | DHCP agent         | controller1.example.com | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 4cb590ed-0eb8-4ded-893d-77941396ce15 | DHCP agent         | controller3.example.com | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 5b4c6ec6-b38a-41a3-ae1e-869e7381e596 | Linux bridge agent | controller2.example.com | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 85b9a7ed-7314-4962-bf64-12aa0ebf97f5 | Metadata agent     | controller3.example.com | None              | :-)   | UP    | neutron-metadata-agent    |
| 90cbcd33-5a24-4420-b5c6-a5e3565a7cb4 | Linux bridge agent | controller3.example.com | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 96861d03-a1e0-43a2-8247-daa52161d214 | L3 agent           | controller2.example.com | nova              | :-)   | UP    | neutron-l3-agent          |
| afcfc749-35db-4215-bf4f-4ef715506b6a | Linux bridge agent | controller1.example.com | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b493d949-5a33-4f3f-a55c-1523c0d052e0 | L3 agent           | controller3.example.com | nova              | :-)   | UP    | neutron-l3-agent          |
| be4de62e-56fb-4c0b-a49a-787c648ff849 | L3 agent           | controller1.example.com | nova              | :-)   | UP    | neutron-l3-agent          |
| c08524fd-6bab-4bed-a28e-bcac8f2bdf85 | Metadata agent     | controller1.example.com | None              | :-)   | UP    | neutron-metadata-agent    |
| fc06d455-6f4f-4592-a350-f624f888e2a8 | Metadata agent     | controller2.example.com | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+
High availability using VRRP with l2population (Experiment Three)
Controller Nodes
diff neutron_self_controller_without_l2pop_1228/neutron.conf neutron_self_controller_1228/neutron.conf
No difference.
diff neutron_self_controller_without_l2pop_1228//dhcp_agent.ini neutron_self_controller_1228/dhcp_agent.ini
No difference.
diff neutron_self_controller_without_l2pop_1228/l3_agent.ini neutron_self_controller_1228/l3_agent.ini
No difference.
diff neutron_self_controller_without_l2pop_1228/plugins/ml2/ml2_conf.ini neutron_self_controller_1228/plugins/ml2/ml2_conf.ini
7c7
< mechanism_drivers = linuxbridge
---
> mechanism_drivers = linuxbridge,l2population
diff neutron_self_controller_without_l2pop_1228/plugins/ml2/linuxbridge_agent.ini neutron_self_controller_1228/plugins/ml2/linuxbridge_agent.ini
15c15
< l2_population = false
---
> l2_population = true
Compute Node
diff neutron_self_compute_without_l2pop_1228/neutron.conf neutron_self_compute_1228/neutron.conf
No difference.
diff neutron_self_compute_without_l2pop_1228/plugins/ml2/linuxbridge_agent.ini neutron_self_compute_1228/plugins/ml2/linuxbridge_agent.ini
15c15
< l2_population = false
---
> l2_population = true
Error Logs
Controller Node:
/var/log/neutron/server.log
2017-12-28 16:01:26.964 16265 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'd2ab84b4-8339-491b-888b-ffaede27d795', u'name': u'network-vif-unplugged', u'server_uuid': u'e6dac399-7743-46ed-a384-1cecca3ac3f4', u'code': 200}
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server [req-edcb230d-6314-4b87-b13e-51691254391d - - - - -] Exception during message handling: AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 234, in update_device_down
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     n_const.PORT_STATUS_DOWN, host)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 331, in notify_l2pop_port_wiring
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     l2pop_driver.obj.update_port_down(port_context)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 253, in update_port_down
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     admin_context, agent_host, [port['device_id']]):
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/db/l3_agentschedulers_db.py", line 303, in list_router_ids_on_host
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     context, constants.AGENT_TYPE_L3, host)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 291, in _get_agent_by_type_and_host
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server     host=host)
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found
2017-12-28 16:01:27.646 16265 ERROR oslo_messaging.rpc.server
Compute Node:
/var/log/neutron/linuxbridge-agent.log
2017-12-28 16:01:32.499 1510 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-edcb230d-6314-4b87-b13e-51691254391d - - - - -] Linux bridge agent Agent out of sync with plugin!
2017-12-28 16:01:32.500 1510 INFO neutron.agent.securitygroups_rpc [req-edcb230d-6314-4b87-b13e-51691254391d - - - - -] Remove device filter for set(['tapd2ab84b4-83'])
2017-12-28 16:01:32.517 1510 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-edcb230d-6314-4b87-b13e-51691254391d - - - - -] Attachment tapd2ab84b4-83 removed
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent [req-edcb230d-6314-4b87-b13e-51691254391d - - - - -] Error occurred while removing port tapd2ab84b4-83: RemoteError: Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and host=compute1.example.com could not be found [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch\n result = func(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 234, in update_device_down\n n_const.PORT_STATUS_DOWN, host)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 331, in notify_l2pop_port_wiring\n l2pop_driver.obj.update_port_down(port_context)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 253, in update_port_down\n admin_context, agent_host, [port[\'device_id\']]):\n', u' File "/usr/lib/python2.7/site-packages/neutron/db/l3_agentschedulers_db.py", line 303, in list_router_ids_on_host\n context, constants.AGENT_TYPE_L3, host)\n', u' File "/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 291, in _get_agent_by_type_and_host\n host=host)\n', u'AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found\n'].
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call last):
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 336, in treat_devices_removed
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     cfg.CONF.host)
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 139, in update_device_down
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     agent_id=agent_id, host=host)
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 162, in call
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     return self._original_context.call(ctxt, method, **kwargs)
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     retry=self.retry)
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 123, in _send
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     timeout=timeout, retry=retry)
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 578, in send
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     retry=retry)
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 569, in _send
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent     raise result
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent RemoteError: Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and host=compute1.example.com could not be found
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch\n result = func(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 234, in update_device_down\n n_const.PORT_STATUS_DOWN, host)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 331, in notify_l2pop_port_wiring\n l2pop_driver.obj.update_port_down(port_context)\n', u' File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 253, in update_port_down\n admin_context, agent_host, [port[\'device_id\']]):\n', u' File "/usr/lib/python2.7/site-packages/neutron/db/l3_agentschedulers_db.py", line 303, in list_router_ids_on_host\n context, constants.AGENT_TYPE_L3, host)\n', u' File "/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 291, in _get_agent_by_type_and_host\n host=host)\n', u'AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found\n'].
2017-12-28 16:01:32.881 1510 ERROR neutron.plugins.ml2.drivers.agent._common_agent