
Unable to modify user capabilities in Ceph

asked 2020-03-05 03:35:31 -0600

updated 2020-03-09 03:21:58 -0600

I have trouble connecting cinder to ceph. Error messages from the cinder logs:

Update driver status failed: (config name rbd-1) is uninitialized.

Error connecting to ceph cluster.: rados.ObjectNotFound: [errno 2] error connecting to the cluster

Bad or unexpected response from the storage volume backend API: Error connecting to ceph cluster

I believe this is due to wrong permissions on the ceph side.

Current permissions:

    client.cinder
        caps: [mon] profile rbd
        caps: [osd] profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images
    client.cinder-backup
        caps: [mon] profile rbd
        caps: [osd] profile rbd pool=backups

According to the Ceph documentation, which is referenced by the OpenStack documentation, the permissions should be created like this:

ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=vms'
ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups'

This doesn't match the existing permissions.

To modify the existing permissions, I ran:

ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=vms'
ceph auth caps client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups'

However, this results in the following error message for both commands:

Error EINVAL: mon capability parse failed, stopped at 'pool=images' of 'profile rbd pool=images'

Trying to create an additional client using the original commands from the documentation results in the same error messages.

How can I modify the permissions so cinder can connect properly?

Other clients seem to be working: Gnocchi and Glance both work with the Ceph backend enabled. It's just the Cinder containers that keep restarting and reporting errors.
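To rule out an authentication problem on the Ceph side, the connection can also be tested directly with the cinder key (a sketch; it assumes the keyring and ceph.conf are in the default /etc/ceph location on the host you run this from):

```shell
# Check cluster reachability as the cinder user
ceph -s --id cinder

# List RBD images in the volumes pool as the cinder user
rbd ls volumes --id cinder
```

If these succeed, the caps and keyring are fine and the problem is more likely on the Cinder side.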

Versions involved:

  • OpenStack: Train
  • Ceph: Nautilus
  • deployment: kolla-ansible, ceph-ansible
  • both ceph and openstack are deployed in docker containers

Additional information from comments:


rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
backend_host = rbd:volumes
rbd_pool = volumes
volume_backend_name = rbd-1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = 4cc23cf2-2428-4efa-a0ac-757abaa70e01


I don't think the client requires mgr capabilities; at least I didn't set any in my setups. Your error message also seems to complain about exactly that: it stopped right after the osd caps (pool=images). The first configuration you posted seems fine, though. Is the keyring file correct?

eblock (2020-03-05 06:04:28 -0600)

The keyring files seem correct. I just recreated them from ceph auth get and redeployed them, but the content was already the same. The error messages persist.
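For reference, this is roughly how I re-exported them (output filenames are illustrative):

```shell
# Export the current keys from the cluster to keyring files
ceph auth get client.cinder -o ceph.client.cinder.keyring
ceph auth get client.cinder-backup -o ceph.client.cinder-backup.keyring
```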

geschnei (2020-03-05 07:49:31 -0600)

Can you share the cinder config (rbd part)? Have you checked if the network connection is properly configured?

eblock (2020-03-06 14:44:01 -0600)

I edited the rbd block from cinder.conf into the question. ceph.conf is absolutely identical for cinder and glance (where it works).

geschnei (2020-03-09 03:22:59 -0600)

1 answer


answered 2020-03-10 09:47:20 -0600

I found the actual problem. After checking everything again and again, I noticed that the cinder containers didn't even contain the keyring files. Comparing my setup with one from a blog post, I found that I had the wrong directory structure in /etc/kolla/config/:

$ tree /etc/kolla/
|-- config
|   |-- cinder
|   |   |-- ceph.conf
|   |   |-- cinder-backup.conf
|   |   `-- cinder-volume.conf
|   |-- cinder-backup
|   |   |-- ceph.client.cinder-backup.keyring
|   |   `-- ceph.client.cinder.keyring
|   |-- cinder-volume
|   |   `-- ceph.client.cinder.keyring

After moving the directories cinder-volume and cinder-backup into the cinder directory and running kolla-ansible reconfigure, the cinder volume containers started properly.
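The restructuring can be sketched like this (shown in a scratch directory so it is safe to run anywhere; on the real deployment the base is /etc/kolla/config, and kolla-ansible reconfigure must be run afterwards):

```shell
# Scratch copy of the broken layout: the keyring directories
# cinder-backup/ and cinder-volume/ sit next to cinder/ instead of inside it
base=$(mktemp -d)
mkdir -p "$base/config/cinder" "$base/config/cinder-backup" "$base/config/cinder-volume"
touch "$base/config/cinder-backup/ceph.client.cinder-backup.keyring" \
      "$base/config/cinder-backup/ceph.client.cinder.keyring" \
      "$base/config/cinder-volume/ceph.client.cinder.keyring"

# The fix: move both keyring directories under cinder/
mv "$base/config/cinder-backup" "$base/config/cinder-volume" "$base/config/cinder/"
```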

$ tree /etc/kolla/
|-- config
|   |-- cinder
|   |   |-- ceph.conf
|   |   |-- cinder-backup
|   |   |   |-- ceph.client.cinder-backup.keyring
|   |   |   `-- ceph.client.cinder.keyring
|   |   |-- cinder-backup.conf
|   |   |-- cinder-volume
|   |   |   `-- ceph.client.cinder.keyring
|   |   `-- cinder-volume.conf

I am able to create volumes now.
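As a quick end-to-end check (the volume name is illustrative), a test volume can be created and then looked up in the backing pool:

```shell
# Create a 1 GB test volume via Cinder
openstack volume create --size 1 test-volume
openstack volume list

# The corresponding RBD image should appear in the volumes pool
rbd ls volumes --id cinder
```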



