
Nova can't provision instance after image name change with a ceph backend

This is a weird issue that does not happen in our Juno setup, but does happen in our Kilo setup. The configuration between the two setups is pretty much the same, with only Kilo-specific changes applied (namely, moving configuration options to their new sections).

Here's how to reproduce:

1. Upload an image into glance.
2. Rename that image, either through the command line or Horizon.
3. Try to provision an instance from that image using the boot-from-image option.
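For reference, the steps above as Kilo-era CLI commands (image names, the file, and `<image-id>` are placeholders, not our actual values):

```shell
# 1. Upload an image into glance (raw format, since it goes into ceph)
glance image-create --name test-image --disk-format raw \
    --container-format bare --file ./test-image.raw

# 2. Rename the image (this is what triggers the problem)
glance image-update --name test-image-renamed <image-id>

# 3. Boot an instance from the renamed image
nova boot --image test-image-renamed --flavor m1.small test-instance
```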

Now, here's what the default behavior is supposed to be, from my observations:

- When the image is uploaded into ceph, a snapshot is automatically created inside ceph (this is NOT an instance snapshot per se, but a ceph-internal snapshot).
- When an instance is booted from an image in nova, this snapshot gets a clone in the nova ceph pool, and nova uses that clone as the instance's disk. This is called copy-on-write cloning.
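That copy-on-write clone boils down to an `rbd clone` from the glance pool's snapshot into the nova pool. A rough sketch of what nova effectively does (real nova goes through librbd in `rbd_utils`, not the CLI, and the pool/snapshot names here are just the common defaults, not taken from my config):

```python
def rbd_clone_cmd(images_pool, image_id, vms_pool, instance_uuid,
                  snap_name="snap"):
    """Build the 'rbd clone' invocation nova's rbd backend performs.

    Sketch only: pool names ("images"/"vms") and the snapshot name
    ("snap") are the usual defaults, not guaranteed for every deployment.
    """
    # Parent: the ceph-internal snapshot created when glance stored the image
    parent = "%s/%s@%s" % (images_pool, image_id, snap_name)
    # Child: the instance disk nova later looks for in its own pool
    child = "%s/%s_disk" % (vms_pool, instance_uuid)
    return ["rbd", "clone", parent, child]

print(" ".join(rbd_clone_cmd(
    "images", "3fc7d726-2863-475a-8b3d-2eecfee32b32",
    "vms", "b8aa993c-6526-4565-b6e6-ee0a82d0a73a")))
```

The `<uuid>_disk` child name matches the `b8aa993c-..._disk does not exist` messages in the log below.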

Here's where things get funky: after an image is renamed and an instance is booted from it, the copy-on-write cloning does not happen. Nova looks for the disk and, of course, fails to find it in its pool, thus failing to provision the instance. There's no trace anywhere of the copy-on-write clone failing (in part because ceph doesn't log client commands, from what I can see).

Here's the error in the compute logs:

/var/log/upstart/nova-compute.log:2015-06-18 18:34:06.507 10070 DEBUG nova.compute.manager [req-4e2ab669-abdb-419a-b9ca-1740af96e154 - - - - -] [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rebuilding, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1241
/var/log/upstart/nova-compute.log:2015-06-18 18:34:06.539 10070 INFO nova.compute.manager [req-4e2ab669-abdb-419a-b9ca-1740af96e154 - - - - -] [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] During sync_power_state the instance has a pending task (rebuilding). Skip.
/var/log/upstart/nova-compute.log:2015-06-18 18:34:06.802 10070 DEBUG nova.virt.libvirt.vif [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2015-06-18T22:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='snapme',display_name='bug',ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(58),host='compute4.prod.cloud.gtcomm.net',hostname='snapme',id=331,image_ref='3fc7d726-2863-475a-8b3d-2eecfee32b32',info_cache=InstanceInfoCache,instance_type_id=58,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=2015-06-18T22:10:41Z,launched_on='compute4.prod.cloud.gtcomm.net',locked=False,locked_by=None,memory_mb=2048,metadata={meta_var='example'},new_flavor=None,node='compute4.prod.cloud.gtcomm.net',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f4ecb18ba284692baae27b87abf1f2b',ramdisk_id='',reservation_id='r-f7lxpv8y',root_device_name='/dev/vda',root_gb=20,scheduled_at=None,security_groups=SecurityGroupList,shutdown_terminate=False,system_metadata={image_base_image_ref='14d59af7-8e21-4e26-82dc-290308edd899',image_container_format='bare',image_disk_format='raw',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='b8aa993c-6526-4565-b6e6-ee0a82d0a73a',image_min_disk='20',image_min_ram='0',image_network_allocated='True',image_owner_id='5f4ecb18ba284692baae27b87abf1f2b',image_user_id='8ca1f02ca85e4211ad943acce23488ac',network_allocated='True'},tags=<?>,task_state='rebuilding',terminated_at=None,updated_at=2015-06-18T22:34:04Z,user_data='I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IDAKc3NoX3B3YXV0aDogMQpwY
XNzd29yZDogdGVzdHRlc3QKY2hwYXNzd2Q6IHsgZXhwaXJlOiBGYWxzZSB9CmJvb3RjbWQ6CiAtIGlmY29uZmlnIGV0aDEgMTczLjIwOS40NC4xMjcgbmV0bWFzayAyNTUuMjU1LjI1NS4wCiAtIHJvdXRlIGFkZCBkZWZhdWx0IGd3IDE3My4yMDkuNDQuMQogLSByb3V0ZSBkZWwgZGVmYXVsdCBndyAxMC4wLjAuMQogLSBlY2hvICduYW1lc2VydmVyIDguOC44LjgnID4gL2V0Yy9yZXNvbHYuY29uZg==',user_id='8ca1f02ca85e4211ad943acce23488ac',uuid=b8aa993c-6526-4565-b6e6-ee0a82d0a73a,vcpu_model=<?>,vcpus=2,vm_mode=None,vm_state='active') vif=VIF({'profile': {}, 'ovs_interfaceid': u'6dbad98f-3556-494e-86ff-cebe746d982a', 'preserve_on_delete': True, 'network': Network({'bridge': u'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': u'173.209.44.127'})], 'version': 4, 'meta': {u'dhcp_server': u'173.209.44.110'}, 'dns': [], 'routes': [], 'cidr': u'173.209.44.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': u'173.209.44.1'})})], 'meta': {u'injected': False, u'tenant_id': u'dfcd0863f170402689d89771da0ea3ff'}, 'id': u'09968d6d-d95e-48e6-a13c-2e34ea64ed25', 'label': u'public'}), 'devname': u'tap6dbad98f-35', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': u'fa:16:3e:8d:f6:84', 'active': True, 'type': u'ovs', 'id': u'6dbad98f-3556-494e-86ff-cebe746d982a', 'qbg_params': None}) unplug /usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:774
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.094 10070 DEBUG oslo_concurrency.processutils [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] Running cmd (subprocess): mv /var/lib/nova/instances/b8aa993c-6526-4565-b6e6-ee0a82d0a73a /var/lib/nova/instances/b8aa993c-6526-4565-b6e6-ee0a82d0a73a_del execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:199
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.106 10070 DEBUG oslo_concurrency.processutils [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] CMD "mv /var/lib/nova/instances/b8aa993c-6526-4565-b6e6-ee0a82d0a73a /var/lib/nova/instances/b8aa993c-6526-4565-b6e6-ee0a82d0a73a_del" returned: 0 in 0.012s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.107 10070 INFO nova.virt.libvirt.driver [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Deleting instance files /var/lib/nova/instances/b8aa993c-6526-4565-b6e6-ee0a82d0a73a_del
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.108 10070 INFO nova.virt.libvirt.driver [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Deletion of /var/lib/nova/instances/b8aa993c-6526-4565-b6e6-ee0a82d0a73a_del complete
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.932 10070 INFO nova.virt.libvirt.driver [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Creating image
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.964 10070 DEBUG nova.virt.libvirt.rbd_utils [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] rbd image b8aa993c-6526-4565-b6e6-ee0a82d0a73a_disk does not exist __init__ /usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py:60
/var/log/upstart/nova-compute.log:2015-06-18 18:34:08.994 10070 DEBUG nova.virt.libvirt.rbd_utils [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] rbd image b8aa993c-6526-4565-b6e6-ee0a82d0a73a_disk does not exist __init__ /usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py:60
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 ERROR nova.compute.manager [req-d0ff8267-74b1-4d7e-9d8e-5df65f5782b4 8ca1f02ca85e4211ad943acce23488ac 5f4ecb18ba284692baae27b87abf1f2b - - -] [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Setting instance vm_state to ERROR
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Traceback (most recent call last):
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6362, in _error_out_instance_on_exception
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     yield
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3036, in rebuild_instance
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     self._rebuild_default_impl(**kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2890, in _rebuild_default_impl
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     block_device_info=new_block_device_info)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2347, in spawn
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     admin_pass=admin_password)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2745, in _create_image
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     instance, size, fallback_from_host)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5875, in _try_fetch_image_cache
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     size=size)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 231, in cache
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     *args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 727, in create_image
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     prepare_template(target=base, max_size=size, *args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 445, in inner
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     return f(*args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 221, in fetch_func_sync
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     fetch_func(target=target, *args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2737, in clone_fallback_to_fetch
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     backend.clone(context, disk_images['image_id'])
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 752, in clone
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     include_locations=True)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 93, in get
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     show_deleted=show_deleted)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 301, in show
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     _reraise_translated_image_exception(image_id)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 299, in show
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     image = self._client.call(context, version, 'get', image_id)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 219, in call
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     return getattr(client.images, method)(*args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 106, in get
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     return self.model(**body)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/warlock/core.py", line 32, in __init__
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     base_class.__init__(self, *args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/warlock/model.py", line 35, in __init__
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     raise ValueError(str(exc))
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] ValueError: None is not of type u'string'
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] 
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Failed validating u'type' in schema[u'properties'][u'ramdisk_id']:
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     {u'description': u'ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image.',
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]      u'is_base': False,
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]      u'pattern': u'^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$',
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]      u'type': u'string'}
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] 
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] On instance[u'ramdisk_id']:
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     None
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] 
/var/log/upstart/nova-compute.log:2015-06-18 18:34:10.587 10070 DEBUG nova.compute.manager [req-ea353685-b21a-426e-b167-144c1b7af2ac 333d8b78b02346c19a5276756fac11bf 9979d17619e849a9acbed5b2583adde9 - - -]

There are a few things I notice here that I'd like to point out:

-Nova creates an ephemeral drive file, then proceeds to delete it before using rbd_utils instead. While strange, this may be intended (if somewhat dirty) behavior: nova treats it as an ephemeral instance before realizing that it's actually a ceph-backed instance that doesn't need an ephemeral disk. Or maybe these conjectures are completely wrong and this is part of the issue.

-Nova "creates the image" (I'm guessing this is where the copy-on-write cloning is supposed to happen). What exactly happens here isn't very clear, but nova then complains that it can't find the clone in its pool to use as the block device.
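The actual exception in the traceback is a `ValueError` from warlock, raised while validating the image record glance returns against the image JSON schema: `ramdisk_id` comes back as `None`, which fails the schema's string-type check. A minimal illustrative re-implementation of that check (my own sketch, not the actual warlock/glance code) reproduces the same failure:

```python
import re

# The schema fragment quoted in the traceback: ramdisk_id must be a
# string matching a UUID pattern.
UUID_PATTERN = re.compile(
    r"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}"
    r"-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$")

def check_ramdisk_id(value):
    """Sketch of the schema validation that warlock performs on the field."""
    if not isinstance(value, str):
        # This is the failure in the log: "None is not of type u'string'"
        raise ValueError("%s is not of type 'string'" % (value,))
    if not UUID_PATTERN.match(value):
        raise ValueError("%s does not match the UUID pattern" % (value,))
    return value

check_ramdisk_id("3fc7d726-2863-475a-8b3d-2eecfee32b32")  # a valid UUID: OK
try:
    check_ramdisk_id(None)  # what glance hands back after the rename
except ValueError as exc:
    print(exc)
```

So the clone never even gets attempted: nova's call to `glanceclient` dies while deserializing the image, before `backend.clone()` can do any rbd work.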

So, I would like to know: is anyone else using Kilo with a ceph backend who can test this and tell me whether they can reproduce it? It may be a bug, or it could simply be a configuration error. I want to make sure before filing an OpenStack bug on Launchpad.

/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     image = self._client.call(context, version, 'get', image_id)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 219, in call
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     return getattr(client.images, method)(*args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 106, in get
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     return self.model(**body)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/warlock/core.py", line 32, in __init__
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     base_class.__init__(self, *args, **kwargs)
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]   File "/usr/lib/python2.7/dist-packages/warlock/model.py", line 35, in __init__
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     raise ValueError(str(exc))
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] ValueError: None is not of type u'string'
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] 
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] Failed validating u'type' in schema[u'properties'][u'ramdisk_id']:
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     {u'description': u'ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image.',
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]      u'is_base': False,
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]      u'pattern': u'^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$',
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]      u'type': u'string'}
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] 
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] On instance[u'ramdisk_id']:
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a]     None
/var/log/upstart/nova-compute.log:2015-06-18 18:34:09.040 10070 TRACE nova.compute.manager [instance: b8aa993c-6526-4565-b6e6-ee0a82d0a73a] 
/var/log/upstart/nova-compute.log:2015-06-18 18:34:10.587 10070 DEBUG nova.compute.manager [req-ea353685-b21a-426e-b167-144c1b7af2ac 333d8b78b02346c19a5276756fac11bf 9979d17619e849a9acbed5b2583adde9 - - -]
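The traceback ends with warlock rejecting the image returned by glance: the Glance v2 schema declares `ramdisk_id` as type `string`, but after the rename the image apparently comes back with `ramdisk_id` set to `None`, which fails validation. A minimal sketch of that type check (`check_type` is a hypothetical helper for illustration, not actual Nova or warlock code):

```python
def check_type(name, value, expected="string"):
    """Mimic the jsonschema-style check warlock applies to Glance v2
    image properties: a property declared 'string' may not be None."""
    if expected == "string" and not isinstance(value, str):
        raise ValueError("%s is not of type '%s'" % (value, expected))


# A proper UUID string passes validation
check_type("ramdisk_id", "b8aa993c-6526-4565-b6e6-ee0a82d0a73a")

# A None value raises, matching the error in the traceback
try:
    check_type("ramdisk_id", None)
except ValueError as exc:
    print(exc)  # None is not of type 'string'
```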

There are a few things I noticed here that I'd like to point out:

-Nova creates an ephemeral drive file, then deletes it and falls back to rbd_utils instead. While strange, this may be intended (if somewhat dirty) behavior: nova treats it as an ephemeral instance before realizing it's actually ceph-backed and doesn't need an ephemeral disk. Or maybe that conjecture is completely wrong and this is part of the issue.

-Nova "creates the image" (I'm guessing this is where the copy-on-write cloning happens). What exactly happens there isn't clear, but nova then complains that it can't find the clone in its pool to use as a block device.
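Whether the copy-on-write clone was actually created can be checked directly with the rbd CLI. The pool names `images` and `vms` below are the common defaults from `glance-api.conf` and `nova.conf` and may differ in your setup; `<image_uuid>` is a placeholder:

```shell
# List the glance images and the internal snapshot glance created
rbd -p images ls
rbd snap ls images/<image_uuid>

# List any copy-on-write children of glance's "snap" snapshot
rbd children images/<image_uuid>@snap

# Check the nova pool for the instance disk the clone should have produced
rbd -p vms ls
```

In the working (Juno) case, the last two commands should show a clone named after the instance UUID; in the failing case, nothing is there.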

So, I would like to know: is anyone else running Kilo with a ceph backend who can test this and tell me whether they can reproduce it? It may be a bug, or it could simply be a configuration error. I want to make sure before filing an OpenStack bug on Launchpad.
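For anyone wanting to try, the reproduction from the command line looks roughly like this (image file, names, and flavor are placeholders; Kilo-era glanceclient syntax):

```shell
# 1. Upload an image into glance
glance image-create --name original-name --disk-format qcow2 \
    --container-format bare --file cirros.img

# 2. Rename the image
glance image-update --name renamed-name <image_uuid>

# 3. Boot an instance from the renamed image
nova boot --image renamed-name --flavor m1.small testvm
```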