Booting an instance created from a snapshot fails with "The virtual disk is either corrupted or not a supported format"
Hi, I have this image uploaded to Glance:
root@controller:~# glance image-show 2efd517e-8445-451a-a5e6-6fc1915a87cd
+-------------------------------+--------------------------------------+
| Property                      | Value                                |
+-------------------------------+--------------------------------------+
| Property 'vmware_adaptertype' | ide                                  |
| Property 'vmware_disktype'    | sparse                               |
| Property 'vmware_ostype'      | ubuntu64Guest                        |
| checksum                      | b2e19636acac0b2944e52bd50d09618f     |
| container_format              | bare                                 |
| created_at                    | 2014-01-15T20:49:52                  |
| deleted                       | False                                |
| disk_format                   | vmdk                                 |
| id                            | 2efd517e-8445-451a-a5e6-6fc1915a87cd |
| is_public                     | True                                 |
| min_disk                      | 0                                    |
| min_ram                       | 0                                    |
| name                          | Ubuntu 12.10 x64                     |
| protected                     | False                                |
| size                          | 834011136                            |
| status                        | active                               |
| updated_at                    | 2014-01-16T19:25:38                  |
+-------------------------------+--------------------------------------+
root@controller:~#
I can successfully spawn instances from this image. When I take a snapshot of such an instance using Horizon, I get a new image:
root@controller:~# nova image-show 0ae15122-3c5d-4caf-a380-47b2cb033937
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| metadata vmware_ostype              | ubuntu64Guest                        |
| metadata owner_id                   | 2575481bef7f4fde956202e3070fe688     |
| minDisk                             | 10                                   |
| metadata instance_type_name         | m1.tiny                              |
| metadata vmware_adaptertype         | ide                                  |
| metadata instance_type_id           | 7                                    |
| metadata instance_type_memory_mb    | 512                                  |
| id                                  | 0ae15122-3c5d-4caf-a380-47b2cb033937 |
| metadata instance_type_rxtx_factor  | 1                                    |
| metadata vmware_disktype            | sparse                               |
| progress                            | 100                                  |
| minRam                              | 0                                    |
| status                              | ACTIVE                               |
| updated                             | 2014-01-19T11:21:04Z                 |
| metadata instance_type_swap         | 0                                    |
| metadata instance_type_vcpus        | 1                                    |
| metadata vmware_image_version       | 1                                    |
| metadata base_image_ref             | 2efd517e-8445-451a-a5e6-6fc1915a87cd |
| metadata instance_type_flavorid     | 1                                    |
| OS-EXT-IMG-SIZE:size                | 10737418240                          |
| metadata image_type                 | snapshot                             |
| metadata user_id                    | a8b5af7cb2ab401f90d5b4903c091216     |
| name                                | x2goserver                           |
| created                             | 2014-01-19T10:47:19Z                 |
| metadata instance_uuid              | 23157768-3bd7-45ad-af3c-9af0c343ef54 |
| server                              | 23157768-3bd7-45ad-af3c-9af0c343ef54 |
| metadata instance_type_root_gb      | 10                                   |
| metadata instance_type_ephemeral_gb | 0                                    |
+-------------------------------------+--------------------------------------+
root@controller:~#
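One detail I noticed (my own arithmetic, not something the tools flag): the snapshot's OS-EXT-IMG-SIZE is exactly the flavor's 10 GB root disk, while the base image it was spawned from is only about 800 MB, even though the snapshot still carries vmware_disktype = sparse. A quick sanity check:

```python
# Size reported for the snapshot by `nova image-show` (bytes)
snapshot_size = 10737418240
# Size of the original sparse base image in Glance (bytes)
base_image_size = 834011136

# The snapshot is exactly 10 GiB, matching instance_type_root_gb = 10,
# i.e. it looks like a full-size disk, not a sparse ~800 MB one.
print(snapshot_size == 10 * 1024**3)     # True
print(snapshot_size // base_image_size)  # 12 (roughly 12x larger)
```

So the snapshot may in fact be a flat 10 GiB disk that is merely labelled sparse, which could be what CopyVirtualDisk_Task chokes on; I am not certain, though.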
When I now boot a new instance from this snapshot, I get the following error on the compute node:
2014-01-19 20:20:39.885 1979 WARNING nova.virt.vmwareapi.driver [-] Task [CopyVirtualDisk_Task] (returnval){
value = "haTask--vim.VirtualDiskManager.copyVirtualDisk-415701974"
_type = "Task"
} status: error The virtual disk is either corrupted or not a supported format.
2014-01-19 20:20:39.887 1979 ERROR nova.compute.manager [req-d1c54c3e-c47c-4596-8f2a-c7f732bd5075 a8b5af7cb2ab401f90d5b4903c091216 2575481bef7f4fde956202e3070fe688] [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] Instance failed to spawn
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] Traceback (most recent call last):
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1407, in _spawn
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] block_device_info)
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 186, in spawn
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] admin_password, network_info, block_device_info)
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 449, in spawn
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] uploaded_vmdk_path)
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 387, in _copy_virtual_disk
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] self._session._wait_for_task(instance['uuid'], vmdk_copy_task)
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 900, in _wait_for_task
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] ret_val = done.wait()
2014-01-19 20:20:39.887 1979 TRACE nova.compute.manager [instance: 8cc38d9b-e018-4cfa-aa02-eb1b9daa3cd4] File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2014-01-19 20:20:39.887 1979 ...