
### Cinder Migration: Could not determine a suitable URL

Hi, I am trying to get Cinder migration of attached volumes working in Ocata between two Ceph clusters. Migrating detached volumes works fine, but it currently fails when they are attached.

My cinder.conf looks like this:

    [DEFAULT]
    ...
    enabled_backends = standard,standard2
    ...
    [keystone_authtoken]
    auth_uri = http://10.0.1.2:5000
    auth_url = http://10.0.1.2:35357
    project_domain_id = default
    user_domain_id = default
    project_name = service
    memcache_security_strategy = ENCRYPT
    memcache_secret_key = **********************
    memcached_servers = 10.0.1.6:11211,10.0.1.5:11211,10.0.1.7:11211
    ...
    [standard]
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    backend_host = rbd:volumes
    rbd_pool = volumes
    volume_backend_name = standard
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid = 827ef4dc-1441-4633-aaf0-16cd1ebffa07

    [standard2]
    rbd_ceph_conf = /etc/ceph/ceph2.conf
    rbd_user = cinder2
    backend_host = rbd:volumes
    rbd_pool = volumes
    volume_backend_name = standard2
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid = 4d48e8c0-3557-424d-93e7-5b2f98a6f5ae


First, it failed with something like "Endpoint not found" in one of the Cinder logs. I could fix that by setting nova_endpoint_template and nova_endpoint_admin_template, as suggested in one of the questions asked here. I set it to the value registered in Keystone's service catalog, which works fine:

    nova_endpoint_template = http://10.0.1.2:8774/v2.1/%(tenant_id)s


Then I got a "key error" for "tenant_id". Checking the default value of these in the configuration reference, I found out it had to be "project_id" so I changed that to

    nova_endpoint_template = http://10.0.1.2:8774/v2.1/%(project_id)s
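The KeyError makes sense given how the template is expanded: the placeholders are filled in with Python %-formatting against the request context, so the placeholder names must match the context keys exactly. A minimal sketch of this (the context dict here is illustrative, not Cinder's actual context object):

```python
# Sketch of endpoint-template expansion via %-formatting.
# The context keys below are illustrative; in Ocata the context
# exposes project_id, hence the KeyError for tenant_id.
context = {"project_id": "a1b2c3"}

old_template = "http://10.0.1.2:8774/v2.1/%(tenant_id)s"
new_template = "http://10.0.1.2:8774/v2.1/%(project_id)s"

try:
    print(old_template % context)
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'tenant_id'

print(new_template % context)  # http://10.0.1.2:8774/v2.1/a1b2c3
```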


It's weird, since Keystone shows "tenant_id" and that works, but okay. I now got past the KeyError but hit the next message, which I cannot figure out:

    DiscoveryFailure: Could not determine a suitable URL for the plugin
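From reading around, this message seems to come from keystoneauth1 when its identity plugin cannot discover a usable Identity endpoint from the auth URL it was handed. Since migrating an attached volume makes Cinder call back into Nova with the privileged-user credentials, my current suspicion is that options along these lines are what the plugin is missing (option names as I understand them from the Ocata configuration reference, values purely illustrative; I have not confirmed this fixes it):

```ini
# Illustrative sketch only -- unverified guess at the missing settings.
[DEFAULT]
os_privileged_user_name = cinder
os_privileged_user_password = ********
os_privileged_user_tenant = service
os_privileged_user_auth_url = http://10.0.1.2:5000/v3
```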


Of course, this message alone doesn't help a lot, so I uploaded everything I got after running

    openstack volume migrate --host rbd:volumes@standard2#standard2 vol1


in Cinder's logs on the controller: https://gist.github.com/mewald1/e65aa6674f6b9369c441c34b4907a9a7.
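For clarity, the --host target above follows Cinder's host@backend#pool naming convention, so the string decomposes as sketched here (the helper name is mine, purely illustrative):

```python
def parse_volume_host(target):
    """Split a Cinder host string 'host@backend#pool' into its parts.

    Cinder identifies a backend as host@backend and optionally appends
    '#pool'; the names here just mirror that convention.
    """
    host_backend, _, pool = target.partition("#")
    host, _, backend = host_backend.partition("@")
    return host, backend, pool or None

print(parse_volume_host("rbd:volumes@standard2#standard2"))
# → ('rbd:volumes', 'standard2', 'standard2')
```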

Any ideas are very welcome.

Cheers, Mathias