Cinder Migration: Could not determine a suitable URL

Hi, I am trying to get Cinder migration of attached volumes working in Ocata between two Ceph clusters. Migrating detached volumes works fine, but it currently fails when they are attached.

My cinder.conf looks like this:

enabled_backends = standard,standard2
auth_uri =
auth_url =
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = **********************
memcache_security_strategy = ENCRYPT
memcache_secret_key = **********************
memcached_servers =,,
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
backend_host = rbd:volumes
rbd_pool = volumes
volume_backend_name = standard
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = 827ef4dc-1441-4633-aaf0-16cd1ebffa07

rbd_ceph_conf = /etc/ceph/ceph2.conf
rbd_user = cinder2
backend_host = rbd:volumes
rbd_pool = volumes
volume_backend_name = standard2
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = 4d48e8c0-3557-424d-93e7-5b2f98a6f5ae
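For reference, in a multi-backend setup these options are normally grouped into named sections, with the Keystone credentials under [keystone_authtoken]; a rough sketch of how the file above would usually be laid out (section names taken from enabled_backends, credentials and memcache settings elided):

```ini
[DEFAULT]
enabled_backends = standard,standard2

[keystone_authtoken]
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder

[standard]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = standard
backend_host = rbd:volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 827ef4dc-1441-4633-aaf0-16cd1ebffa07

[standard2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = standard2
backend_host = rbd:volumes
rbd_ceph_conf = /etc/ceph/ceph2.conf
rbd_pool = volumes
rbd_user = cinder2
rbd_secret_uuid = 4d48e8c0-3557-424d-93e7-5b2f98a6f5ae
```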

First, it failed with something like "Endpoint not found" in one of the Cinder logs. I could fix that by setting nova_endpoint_template and nova_endpoint_admin_template, as suggested in one of the questions asked here. I set them to the values from Keystone's service catalog, which works fine:

nova_endpoint_template =
nova_endpoint_admin_template =

Then I got a KeyError for "tenant_id". Checking the default value of these options in the configuration reference, I found out it had to be "project_id", so I changed that to

nova_endpoint_template =
nova_endpoint_admin_template =

It's weird, since Keystone shows "tenant_id" and that works, but okay. I now got past the KeyError but hit the next message, which I cannot figure out:
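That KeyError can be reproduced in plain Python: the endpoint templates are filled in with %-style dict substitution, and (assuming, as the error suggests, that the request context carries project_id) a template written with tenant_id has no matching key. The hostname, port, and ID below are made-up placeholders:

```python
# %-style substitution as used for the endpoint templates; the
# hostname, port and project ID here are hypothetical placeholders.
context = {"project_id": "6f70656e737461636b"}

tenant_tpl = "http://nova-api:8774/v2.1/%(tenant_id)s"
project_tpl = "http://nova-api:8774/v2.1/%(project_id)s"

try:
    tenant_tpl % context
except KeyError as exc:
    print("KeyError:", exc)  # the "key error" seen in the logs

print(project_tpl % context)  # substitutes cleanly
```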

DiscoveryFailure: Could not determine a suitable URL for the plugin

Of course, this line alone doesn't help a lot, so I uploaded everything I got after running

openstack volume migrate --host rbd:volumes@standard2#standard2 vol1

from Cinder's logs on the controller:
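In case it helps anyone reading along: the target argument in that command follows Cinder's host@backend#pool notation, which can be taken apart like this (plain Python; the helper name is my own, Cinder has its own internal parsing):

```python
def split_cinder_host(host):
    """Split a Cinder host string of the form 'host@backend#pool'.

    Missing parts come back as None.
    """
    host_part, _, rest = host.partition("@")
    backend, _, pool = rest.partition("#")
    return host_part, backend or None, pool or None

# The target used in the migrate command above:
print(split_cinder_host("rbd:volumes@standard2#standard2"))
# → ('rbd:volumes', 'standard2', 'standard2')
```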

Any ideas are very welcome.

cheers Mathias