Ask Your Question

Ceph Cache Tiering Error error listing images [closed]

asked 2016-01-26 11:16:24 -0500

kelepirci gravatar image

updated 2016-01-26 16:35:12 -0500

Hello All,

I have installed 3 compute nodes and 6 Ceph nodes with 39 OSDs in total. Everything works perfectly, but when I try to add SSD cache tiering to the system, I start getting the following error.

The error comes from a running VM, and I also cannot start a new VM. If I disable cache tiering in Ceph, everything works perfectly again.

Is there any specific configuration that I need to make?

error listing images

File "/usr/lib/python2.7/dist-packages/nova/compute/", line 349, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2766, in terminate_instance
    do_terminate_instance(instance, bdms)
  File "/usr/lib/python2.7/dist-packages/oslo_concurrency/", line 445, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2764, in do_terminate_instance
    self._set_instance_error_state(context, instance)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2754, in do_terminate_instance
    self._delete_instance(context, instance, bdms, quotas)
  File "/usr/lib/python2.7/dist-packages/nova/", line 149, in inner
    rv = f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2723, in _delete_instance
    quotas.rollback()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2693, in _delete_instance
    self._shutdown_instance(context, instance, bdms)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2620, in _shutdown_instance
    requested_networks)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/", line 2609, in _shutdown_instance
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/", line 751, in destroy
    destroy_disks, migrate_data)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/", line 865, in cleanup
    self._cleanup_rbd(instance)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/", line 928, in _cleanup_rbd
    LibvirtDriver._get_rbd_driver().cleanup_volumes(instance)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/", line 275, in cleanup_volumes
    volumes = rbd.RBD().list(client.ioctx)
  File "/usr/lib/python2.7/dist-packages/", line 276, in list
    raise make_ex(ret, 'error listing images')

Closed for the following reason: the question is answered, right answer was accepted by capsali
close date 2016-02-02 04:35:32.047451


As far as I know, there is no specific configuration needed to make this work, so you probably configured something wrong on the Ceph side. Which commands are you using on Ceph for this?

Alen Komljen ( 2016-01-27 02:52:03 -0500 )

I installed OpenStack with Mirantis Fuel 7.0, so I am not sure which commands it uses to install Ceph. I think it uses Puppet to install Ceph.

I also suspect this error is due to a bug, because my Ceph version is 0.8.9 and the current stable release is 0.8.11.

kelepirci ( 2016-01-27 03:51:57 -0500 )

Just to be clear, I followed this article to activate the caching tier:

kelepirci ( 2016-01-27 03:54:42 -0500 )

The Ceph caching tier works perfectly if I put a file into the backing storage with the following command: rados -p rbd put object1 /tmp/file1
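For reference, the basic cache-tier activation from the Ceph documentation looks like the following sketch. The pool names `rbd` and `compute-cache` match the ones mentioned in this thread; the hit-set and sizing values are illustrative, not tuned for any particular cluster:

```shell
# Attach the SSD pool as a cache tier in front of the backing pool
ceph osd tier add rbd compute-cache

# Writeback mode: reads and writes both go through the cache tier
ceph osd tier cache-mode compute-cache writeback

# Redirect client traffic for the backing pool to the cache tier
ceph osd tier set-overlay rbd compute-cache

# Minimal cache parameters (example values only; tune for your workload)
ceph osd pool set compute-cache hit_set_type bloom
ceph osd pool set compute-cache target_max_bytes 100000000000
```

These commands require admin access to a running cluster, so run them from a node with the client.admin keyring.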

kelepirci ( 2016-01-27 03:56:03 -0500 )

Does nova image-list work when using the cache tier?

capsali ( 2016-01-27 04:56:09 -0500 )

1 answer


answered 2016-01-27 07:54:56 -0500

kelepirci gravatar image

Thanks, guys.

I have found the problem.

I had created a Ceph pool named compute-cache as the SSD pool, but I had forgotten to grant rwx permission on the compute-cache pool to client.compute.

Now everything is in working order again.
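For anyone hitting the same error: when clients go through a cache tier, the cephx user that Nova uses needs rwx capabilities on the cache pool as well as on the backing pool. A hedged sketch of the check and fix (the backing pool name and the exact cap strings below are illustrative; inspect your client's existing caps first and merge them rather than overwrite):

```shell
# Inspect the current capabilities of the compute client
ceph auth get client.compute

# Grant rwx on both the backing pool and the cache pool
# (pool names and mon caps here are examples; keep any caps you already have)
ceph auth caps client.compute mon 'allow r' \
    osd 'allow rwx pool=compute, allow rwx pool=compute-cache'
```

Note that `ceph auth caps` replaces the client's capability list wholesale, which is why the existing caps must be included in the new cap string.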



I was facing the same issue; however, I tried to start a VM from a volume and got a permission denied error, so the problem was easy to find.

Alen Komljen ( 2016-01-28 02:31:17 -0500 )

