Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities

asked 2016-06-20 09:52:53 -0500 by Herr-Herner

updated 2016-06-20 09:53:57 -0500

I have OpenStack Mitaka running on Ubuntu 16.04. It is an HA deployment with 2 controllers and 3 compute nodes, and I am using Open vSwitch. Everything seems to be working, but on one compute node I keep getting this error in neutron-openvswitch-agent.log:

2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher [-] Exception during message handling: Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher incoming.message))
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _dispatch
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher raise UnsupportedVersion(version, method=method)
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher UnsupportedVersion: Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher

I have restarted nova-compute and neutron-openvswitch-agent on the compute node, but the problem remains.
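
For completeness, the restarts were done like this (Ubuntu 16.04 with systemd; these are the service names as shipped by the Ubuntu packages):

sudo systemctl restart nova-compute
sudo systemctl restart neutron-openvswitch-agent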

These are my configuration files on the compute node:

neutron.conf:

[DEFAULT]
auth_strategy = keystone
debug = false
host = os-compute01
rpc_backend = rabbit

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_type = password
auth_uri = http://os-controller:5000
auth_url = http://os-controller:35357
memcached_servers = os-memcache:11211
password = ******
project_domain_name = default
project_name = service
user_domain_name = default
username = neutron

[oslo_concurrency]
lock_path = $state_path/lock

[oslo_messaging_notifications]
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = true
rabbit_ha_queues = true
rabbit_hosts = os-rabbit01:5672,os-rabbit02:5672
rabbit_max_retries = 0
rabbit_password = ******
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_userid = neutron
rabbit_virtual_host = openstack

openvswitch_agent.ini:

[agent]
arp_responder = true
l2_population = true
tunnel_types = gre

[ovs]
integration_bridge = br-int
local_ip = 10.30.19.50
tunnel_bridge = br-tun

[securitygroup]
enable_ipset = true
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

To dig deeper, I added some debug output to the oslo.messaging dispatcher sources; for the failing message it shows:

message: {u'args': {u'service_name': u'volume', u'host': u'os-blockstorage@rbd', u'capabilities': {u'filter_function': None, u'goodness_function': None, u'multiattach': False, u'volume_backend_name': u'RBD', u'free_capacity_gb': 1446.45, u'driver_version': u'1.2.0', u'total_capacity_gb': 1446.45, u'reserved_percentage': 0, u'vendor_name': u'Open Source', u'storage_protocol': u'ceph'}}, u'version': u'2.0', u'method': u'update_service_capabilities'} 
method: update_service_capabilities
namespace: None
version: 2.0
endpoint: <neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent object at 0x7f69ed001450>
target: <Target version=1.4>
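
(Roughly, the output above comes from LOG.debug() calls I dropped into RPCDispatcher._dispatch() in /usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py; paraphrased here, not the exact patch:)

# added near the top of RPCDispatcher._dispatch(), which receives the raw message dict:
LOG.debug('message: %s', message)
LOG.debug('method: %s', message.get('method'))
LOG.debug('namespace: %s', message.get('namespace'))
LOG.debug('version: %s', message.get('version', '1.0'))
# and added inside its loop over self.endpoints:
LOG.debug('endpoint: %s', endpoint)
LOG.debug('target: %s', getattr(endpoint, 'target', None))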

What is going on here? Can this be right? The message contains 'os-blockstorage@rbd', i.e. it is a cinder-volume capability report meant for the Cinder scheduler, yet it gets dispatched to the Neutron OVS agent. In my deployment each controller runs a cinder-volume service, but because of load balancer issues both services are configured with the same hostname, which maps to the VIP; otherwise Cinder causes issues when one controller node fails. The endpoint that gets checked is the OVSNeutronAgent, whose RPC target is version 1.4, while the incoming message asks for version 2.0.
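
For what it's worth, my understanding is that oslo.messaging only considers a request compatible with an endpoint if the major versions match and the requested minor version does not exceed the endpoint's, so a 2.0 request can never be served by a 1.4 target. A minimal sketch of that rule (illustrative names, not the library's actual API):

def version_is_compatible(endpoint_version, requested_version):
    # Same major version required; the endpoint's minor version must be
    # at least the requested minor version.
    endpoint_major, endpoint_minor = map(int, endpoint_version.split('.'))
    requested_major, requested_minor = map(int, requested_version.split('.'))
    return (endpoint_major == requested_major and
            endpoint_minor >= requested_minor)

print(version_is_compatible('1.4', '2.0'))  # False -> UnsupportedVersion is raised
print(version_is_compatible('1.4', '1.2'))  # True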

Does anybody have an idea how to solve this issue?
