
instance reboot failure when using ceph

asked 2014-10-24 12:43:30 -0600 by bgyako (updated 2014-10-24 12:57:26 -0600)

Hi All,

Pretty stumped and need help. I'm using Ceph for Cinder storage and Swift for Glance, with a cluster set up for the controllers, running Icehouse and Ceph Firefly.

The issue: once I configure a compute node to use Ceph with the secret UUID, everything works for newly created instances, but rebooting an existing VM instance produces the error below.

Steps to create the secret UUID (consolidated into the sketch after these steps):

1. Copy client.cinder.key from Ceph.

2. Create the secret.xml file.

3. sudo virsh secret-define --file secret.xml

4. sudo virsh secret-set-value --secret $UUID --base64 $(cat client.cinder.key)
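
For reference, here is how those steps fit together as one runnable sequence. This is a sketch following the usual Ceph-with-libvirt pattern; the generated UUID is a placeholder, and client.cinder.key is assumed to already be in the working directory:

    # Generate a UUID for the libvirt secret; the same value goes into
    # nova.conf as rbd_secret_uuid (placeholder; any UUID works)
    UUID=$(uuidgen)

    # Define a libvirt secret for the client.cinder cephx key
    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>$UUID</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF

    sudo virsh secret-define --file secret.xml

    # Attach the base64 cephx key to the secret
    sudo virsh secret-set-value --secret $UUID --base64 $(cat client.cinder.key)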

nova.conf on the compute node:

    [DEFAULT]
    libvirt_images_type=rbd
    libvirt_images_rbd_pool=ceph_datastore
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    rbd_secret_uuid=[uuid from the last command]
    libvirt_inject_password=false
    libvirt_inject_key=false
    libvirt_inject_partition=-2
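
(Note: Icehouse deprecates the flat libvirt_* names in favor of the [libvirt] option group. Assuming the standard renames, the equivalent sketch with the same values would be:)

    [libvirt]
    images_type=rbd
    images_rbd_pool=ceph_datastore
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    rbd_secret_uuid=[uuid from the last command]
    inject_password=false
    inject_key=false
    inject_partition=-2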

Error:

    error: Failed to start domain instance-00000129
    error: internal error: process exited while connecting to monitor:
    qemu-system-x86_64: -drive file=rbd:ceph_datastore/c1c83bc5-f941-4f81-a62e-41e241024892_disk:id=cinder:key=AQB8n/tT4N8PAxAAQlzZ1PEiPERmBGGpqr7jLQ==:auth_supported=cephx\;none:mon_host=10.1.0.13\:6789\;10.1.0.14\:6789\;10.1.0.15\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none: error reading header from c1c83bc5-f941-4f81-a62e-41e241024892_disk
    qemu-system-x86_64: -drive file=rbd:ceph_datastore/c1c83bc5-f941-4f81-a62e-41e241024892_disk:id=cinder:key=AQB8n/tT4N8PAxAAQlzZ1PEiPERmBGGpqr7jLQ==:auth_supported=cephx\;none:mon_host=10.1.0.13\:6789\;10.1.0.14\:6789\;10.1.0.15\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none: could not open disk image rbd:ceph_datastore/c1c83bc5-f941-4f81-a62e-41e241024892_disk:id=cinder:key=AQB8n/tT4N8PAxAAQlzZ1PEiPERmBGGpqr7jLQ==:auth_supported=cephx\;none:mon_host=10.1.0.13\:6789\;10.1.0.14\:6789\;10.1.0.15\:6789: Could not open 'rbd:ceph_datastore/c1c83bc5-f941-4f81-a62e-41e241024892_disk:id=cinder:key=AQB8n/tT4N8PAxAA
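
The "error reading header" part suggests qemu reached the monitors but could not read that image. One way to narrow this down from the compute node (a sketch using the standard rbd CLI; it assumes the client.cinder keyring is installed in /etc/ceph) would be:

    # List images visible to the cinder user in the pool from the error
    rbd --id cinder -p ceph_datastore ls

    # Try to read the header of the exact image qemu complained about
    rbd --id cinder info ceph_datastore/c1c83bc5-f941-4f81-a62e-41e241024892_disk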


Comments

We use the same configuration and we have never had this issue (we use Ubuntu 14.04.1). Do you have a properly populated ceph.conf on your machine? It seems to be an auth issue; it's really strange that it only happens when you reboot the machine...
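
For comparison, a minimal client-side /etc/ceph/ceph.conf sketch (the monitor addresses are assumed from the qemu error above; your fsid and other settings will differ):

    [global]
    mon_host = 10.1.0.13:6789,10.1.0.14:6789,10.1.0.15:6789
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx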

xtrill (2014-10-27 12:17:49 -0600)

ceph.conf is populated.

bgyako (2014-10-27 14:51:54 -0600)

2 answers


answered 2014-10-24 16:51:46 -0600 by jtaguinerd

Hi,

Can you try commenting out the following in your nova.conf:

libvirt_images_type

libvirt_images_rbd_pool

libvirt_images_rbd_ceph_conf

If you have defined backup_driver, remove that as well. Your cinder.conf should contain the following:

rbd_user

rbd_secret_uuid

rbd_pool=ceph_datastore

volume_driver=cinder.volume.drivers.rbd.RBDDriver
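
Put together, that gives a minimal cinder.conf sketch like the one below (the pool name is taken from your question, and rbd_secret_uuid must match the libvirt secret created on the compute node):

    [DEFAULT]
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=ceph_datastore
    rbd_user=cinder
    rbd_secret_uuid=<uuid of the libvirt secret>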

It would also be good to check that you have created Ceph client keyrings for the ceph_datastore pool and copied the keyrings and ceph.conf to your cinder-volume nodes; a sketch of that follows.
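
The usual pattern from the Ceph/OpenStack integration docs looks roughly like this (the capabilities shown are an assumption for a single ceph_datastore pool):

    # Create (or fetch) a cinder client key with access to the pool
    ceph auth get-or-create client.cinder \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=ceph_datastore'

    # Install the keyring on the cinder-volume and compute nodes
    ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring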


Comments

Keyrings are created and copied to the compute nodes and to the controller running Cinder. I noticed something else: this only happens to instances that already existed; new instances created after the config changes work properly. Last note: the Cinder config does not have the secret UUID specified.

bgyako (2014-10-27 14:54:18 -0600)

Should I create the secret file on the controller and specify the UUID in Cinder? Cinder runs on the controller only, not on the compute nodes.

bgyako (2014-10-27 14:55:10 -0600)

Not sure, but in multi-backend configurations you need to configure the secret UUID in Cinder, since Cinder sends the secret UUID to Nova. You don't need to create the secret on the controller, as it doesn't run KVM, but the UUID does need to be specified.

xtrill (2014-10-27 15:14:44 -0600)

So I should add the UUID to the Cinder config on the controller that the compute node uses?

bgyako (2014-10-27 15:39:52 -0600)

Yes, if you have installed cinder-volume on the controller node, then add the UUID to cinder.conf using rbd_secret_uuid.

xtrill (2014-10-27 15:42:05 -0600)

answered 2015-02-20 12:58:09 -0600 by bobyakov

I believe I found the answer.

On the compute node, nova.conf needed the change below (a quick verification sketch follows).

Instead of:

    #libvirt_images_type=rbd
    #libvirt_images_rbd_pool=tst_datastore

I used:

    rbd_user=username
    rbd_secret_uuid=<secret UUID>
    rbd_pool=tst_datastore
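
After a change like this, a quick way to verify (a sketch; the service name assumes Ubuntu's packaging of nova-compute) is to restart the compute service, confirm the libvirt secret is still defined, and reboot a previously failing instance:

    # Re-read nova.conf (Ubuntu service name assumed)
    sudo service nova-compute restart

    # Confirm the secret referenced by rbd_secret_uuid exists
    sudo virsh secret-list

    # Retry the reboot that used to fail
    nova reboot <instance-id>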

