
ceph cinder attach volume to running instance

asked 2016-08-06 18:22:44 -0600 by jsm
updated 2016-08-07 13:15:22 -0600

I cannot get Cinder with a Ceph backend to attach a volume to a running instance.

UPDATE: In Horizon I can attach the volume to a shut-down instance, but the instance then fails to boot with the volume attached. The log in /var/log/libvirt/instance-xxxx.log shows the following error:

error reading header from volume-VOLUMEID

Invoking the listed qemu-kvm command directly gives the same error. The monitor address and key appear to be correct.

qemu-kvm: -drive file=rbd:rbd/volume-VOLID:id=volumes:key=KEY:auth_supported=cephx\;none:mon_host="10.0.0.50\:6789",if=none,id=drive-virtio-disk1,format=raw,serial=xxxx,cache=none:
error reading header from volume-c5b414ae-60db-41da-9e4b-d5

I'm hitting this error: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized. I'm running kernel 4.4.16-1.el7.elrepo.x86_64 on CentOS Linux release 7.2.1511 (Core), with Mitaka and a dedicated network node. Attaching works fine when the instance is shut off.
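One way to sanity-check whether the cephx key and monitor address actually work from the compute node, independent of libvirt, is to read the image header directly (a sketch; the client name volumes and the keyring path are assumptions based on the command line above):

# read the image header the same way qemu would
qemu-img info "rbd:rbd/volume-VOLID:id=volumes:conf=/etc/ceph/ceph.conf"

# or, with ceph-common installed:
rbd info rbd/volume-VOLID --id volumes --keyring /etc/ceph/ceph.client.volumes.keyring

If either of these fails with the same "error reading header" message, the problem is on the Ceph/cephx side rather than in libvirt or Nova.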

List of installed packages (matching virt, kvm, qemu):

  1. centos-release-virt-common.noarch 1-1.el7.centos @extras
  2. libvirt-client.x86_64 1.2.17-13.el7_2.5 @updates
  3. libvirt-daemon.x86_64 1.2.17-13.el7_2.5 @updates
  4. libvirt-daemon-config-nwfilter.x86_64 1.2.17-13.el7_2.5 @updates
  5. libvirt-daemon-driver-interface.x86_64 1.2.17-13.el7_2.5 @updates
  6. libvirt-daemon-driver-network.x86_64 1.2.17-13.el7_2.5 @updates
  7. libvirt-daemon-driver-nodedev.x86_64 1.2.17-13.el7_2.5 @updates
  8. libvirt-daemon-driver-nwfilter.x86_64 1.2.17-13.el7_2.5 @updates
  9. libvirt-daemon-driver-qemu.x86_64 1.2.17-13.el7_2.5 @updates
  10. libvirt-daemon-driver-secret.x86_64 1.2.17-13.el7_2.5 @updates
  11. libvirt-daemon-driver-storage.x86_64 1.2.17-13.el7_2.5 @updates
  12. libvirt-daemon-kvm.x86_64 1.2.17-13.el7_2.5 @updates
  13. libvirt-python.x86_64 1.2.17-2.el7 @base
  14. virt-what.x86_64 1.13-6.el7 @anaconda
  15. libvirt-daemon-kvm.x86_64 1.2.17-13.el7_2.5 @updates
  16. qemu-kvm-common-ev.x86_64 10:2.3.0-31.el7_2.10.1 @centos-qemu-ev
  17. qemu-kvm-ev.x86_64 10:2.3.0-31.el7_2.10.1 @centos-qemu-ev
  18. centos-release-qemu-ev.noarch 1.0-1.el7 @extras
  19. ipxe-roms-qemu.noarch 20160127-1.git6366fa7a.el7 @centos-openstack-mitaka
  20. libvirt-daemon-driver-qemu.x86_64 1.2.17-13.el7_2.5 @updates
  21. qemu-img-ev.x86_64 10:2.3.0-31.el7_2.10.1 @centos-qemu-ev
  22. qemu-kvm-common-ev.x86_64 10:2.3.0-31.el7_2.10.1 @centos-qemu-ev
  23. qemu-kvm-ev.x86_64 10:2.3.0-31.el7_2.10.1 @centos-qemu-ev

3 answers


answered 2018-02-01 09:34:12 -0600 by guimaluf

This is probably related to a Ceph authentication issue: nova-compute accesses the Ceph cinder-volumes pool in order to attach the disk.

There was a change in the Ocata release related to this: https://bugs.launchpad.net/puppet-nov...
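As a rough sketch of the caps the compute-side client usually needs (the client name client.cinder is an assumption, the pool name cinder-volumes follows the wording above, and vms/images stand in for whatever pools you use for Nova ephemeral disks and Glance images):

# grant the client read access to the monitors and rwx on the RBD pools it touches
ceph auth caps client.cinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volumes, allow rwx pool=vms, allow rwx pool=images'

The same client name must match rbd_user in cinder.conf.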


answered 2016-09-15 09:04:50 -0600 by shorton
updated 2016-09-19 10:46:40 -0600

UPDATE: my issue is resolved; it was caused by insufficient cephx auth caps for the cinder and nova Ceph clients. For others who might encounter this problem, here is my ceph auth list:

arcuser@arccloud01:~$ sudo ceph auth list
installed auth entries:

mds.arccloud01
    key: xxx
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: xxx
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: xxx
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: xxx
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: xxx
    caps: [mds] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: xxx
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
    key: xxx
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
    key: xxx
    caps: [mon] allow profile bootstrap-rgw
client.cephadmin
    key: xxx
    caps: [mds] allow rw path=/volumes/_nogroup/1cd9bc07-e883-4e90-8e88-6ee46b4f04de
    caps: [mon] allow r
    caps: [osd] allow rw pool=cephfs_data namespace=fsvolumens_1cd9bc07-e883-4e90-8e88-6ee46b4f04de
client.cindervolumes
    key: xxx
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cindervolumes, allow rwx pool=glanceimages, allow rwx pool=novapool, allow rwx pool=manila
client.glanceimages
    key: xxx
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=glanceimages
client.manila
    key: xxx
    caps: [mds] allow *
    caps: [mon] allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
    caps: [osd] allow rw
client.novapool
    key: xxx
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cindervolumes, allow rwx pool=glanceimages, allow rwx pool=novapool, allow rwx pool=manila
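Beyond the caps themselves, nova-compute also needs the cephx key registered as a libvirt secret so qemu can authenticate when it attaches the volume. A minimal sketch, assuming the client.cindervolumes name from the listing above and an example UUID (whichever UUID you pick must match rbd_secret_uuid in nova.conf and cinder.conf):

# define a libvirt secret for the Ceph client key (UUID here is only an example)
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cindervolumes secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml

# store the actual key in the secret
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
    --base64 "$(ceph auth get-key client.cindervolumes)"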

Hello jsm, I have a very similar issue and just saw your post... I was about to post this:

Hello, I have a working Mitaka deployment on Ubuntu 16.04. I am using Ceph RBD-backed Nova ephemeral storage, Cinder volumes, and Glance images. Neutron is configured for a provider network using the Linux bridge agent. Everything works correctly, except that I am unable to attach a Cinder volume to a Nova instance. I have googled several similar bugs, but all of them are old and give inconclusive solutions. When I attempt a nova volume-attach, it returns a response, but the logs show an internal error and the attach fails. The error I am getting:

libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'

I should add that this only occurs on a running instance. On a shut-down instance, it works fine. Is this the same as your issue? Do I have something misconfigured? Perhaps we can help each other... From nova-compute.log:

ERROR nova.virt.block_device [req-5c342ce8-fc35-4245-8bff-5731859257c3 996a74a0b9de4473b64cf1d94ff38c9f 47a164e5de59452987ee2fc215169e49 - - -] [instance: fd3620de-6c48-4019-a0c6-d6bcc084f095] Driver failed to attach volume 009579fd-52b7-46e3-8a51-c09bef28852d at /dev/vdb
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 279, in attach
    device_type=self['device_type'], encryption=encryption)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1359, in attach_volume
    self._disconnect_volume(connection_info, disk_dev)
  File "/usr/local/lib/python2 ...
(more)
0

answered 2016-08-07 22:45:52 -0600 by jsm
updated 2016-08-07 22:46:36 -0600

I tried to map the volume in question on the cinder-volume node:

sudo rbd map volume-VOLID --name client.USERID -m MONITORIP -k /etc/ceph/client.USERID.keyring

This resulted in:

RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable

Adding

rbd default format = 2
rbd default features = 3

to /etc/ceph/ceph.conf on the cinder-volume node fixed it. (I also installed ceph-common on the compute node)
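For a volume that was already created with the newer feature bits set, the same end state can presumably be reached per image instead of through ceph.conf (a sketch; the feature list is what older kernel clients usually reject, and client.USERID follows the map command above):

# strip the features the kernel RBD client cannot handle from an existing image
rbd feature disable rbd/volume-VOLID exclusive-lock object-map fast-diff deep-flatten --name client.USERID

For reference, rbd default features = 3 enables only layering and striping, which older kernel RBD clients generally accept.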
