Mitaka: snapshot failed with ImageNotAuthorized
Hi,
I'm running Mitaka; the controller node is a VM on VMware and the compute node is an HP G8.
I would like to create an image from an instance and reuse that image later as a reference image.
I am trying to take a snapshot of my VM with:
[root@controller images]# nova list
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------+
| ced63b7e-e36f-4aed-ad73-162da88d82f6 | OSP53 | ACTIVE | - | Running | selfservice=172.16.1.11 |
| cb009e5e-5e6c-4c86-9e5c-4f80f64557a3 | OSP53_250 | SHUTOFF | - | Shutdown | selfservice=172.16.1.19 |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------+
[root@controller images]# nova image-create --poll OSP53_250 OSP53_250Snap
The instance size is:
[root@compute1 _base]# qemu-img info c5d33aa3157b23eadef677e95823eaf773a86865
image: c5d33aa3157b23eadef677e95823eaf773a86865
file format: raw
virtual size: 245G (263066746880 bytes)
disk size: 39G
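As a quick sanity check on the figures qemu-img reports (my own arithmetic, not part of the original output), the virtual size in bytes does correspond to 245 GiB, while only 39 G of it is actually allocated on disk:

```python
# Arithmetic check on the qemu-img output above (not from the original post).
virtual_bytes = 263066746880          # "virtual size" reported by qemu-img
gib = virtual_bytes / 2**30           # convert bytes to GiB
print(gib)                            # 245.0

# The allocated "disk size" (39G) is what the dashboard later shows the
# upload stalling at (~39.7 GB), so the transferred amount looks consistent.
```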
My controller has enough free space:
[root@controller images]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_controller-root 481G 133G 349G 28% /
I can see the snapshot file growing, but after a while it is deleted:
-rw-r-----. 1 glance glance 3.9G Oct 18 15:30 411f16f5-644e-4198-8168-089f2ead54d6
The snapshot image stays in "saving" status:
[root@controller images]# openstack image list
+--------------------------------------+---------------+--------+
| ID | Name | Status |
+--------------------------------------+---------------+--------+
| 411f16f5-644e-4198-8168-089f2ead54d6 | OSP53_250Snap | saving |
| 31139f9f-bf46-4cd8-8414-455c78d0ea3e | OSP53-250 | active |
| 31682baf-b55c-47ef-b773-9c8f462006d1 | cirros | active |
| 17de666d-fd41-4082-8615-472c9d53142e | OSP53 | active |
+--------------------------------------+---------------+--------+
From the Dashboard, I see the status stuck at "Saving", blocked at 39.7 GB.
I can't find any error in the logs.
Nova log:
2016-10-18 15:16:49.885 55303 INFO nova.compute.resource_tracker [req-79a01dd0-e6d6-43ef-8bdd-2cd9c1ef0af4 - - - - -] Compute_service record updated for compute1:compute1
2016-10-18 15:17:15.054 55303 INFO nova.compute.manager [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] instance snapshotting
2016-10-18 15:17:15.056 55303 WARNING nova.compute.manager [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] trying to snapshot a non-running instance: (state: 4 expected: 1)
2016-10-18 15:17:15.146 55303 INFO nova.virt.libvirt.driver [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Beginning cold snapshot process
2016-10-18 15:23:54.866 55303 INFO nova.compute.resource_tracker [req-79a01dd0-e6d6-43ef-8bdd-2cd9c1ef0af4 - - - - -] Compute_service record updated for compute1:compute1
2016-10-18 15:24:20.069 55303 INFO nova.virt.libvirt.driver [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Snapshot extracted, beginning image upload
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Error while trying to clean up image 411f16f5-644e-4198-8168-089f2ead54d6
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Traceback (most recent call last):
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 431, in decorated_function
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] self.image_api.delete(context, image_id)
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 141, in delete
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] return session.delete(context, image_id)
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 499, in delete
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] self._client.call(context, 1, 'delete', image_id)
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager ...
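One detail in the timestamps above seems worth spelling out (this is my own observation, not from the original post): the upload begins at 15:24:20 and the ERROR appears at 16:25:00, i.e. just over one hour later:

```python
from datetime import datetime

# Elapsed time between "beginning image upload" and the ERROR line in the
# nova log above.
start = datetime.strptime("2016-10-18 15:24:20", "%Y-%m-%d %H:%M:%S")
fail = datetime.strptime("2016-10-18 16:25:00", "%Y-%m-%d %H:%M:%S")
elapsed = (fail - start).total_seconds()
print(elapsed)  # 3640.0 seconds, just past the 3600 s default token lifetime
```

If the upload consistently dies just after 3600 seconds with ImageNotAuthorized, one plausible cause is the Keystone token expiring while the long glance upload is still in flight; the token lifetime is controlled by the `expiration` option in the `[token]` section of `keystone.conf` (default 3600 seconds). That is only a guess based on the timing, not something the logs shown here confirm.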