# Glance Image creation checksum logic

I tried something outside the usual OpenStack workflow: modifying a glance image directly.

All I did was launch a VM through virt-manager on the controller node, using an existing .qcow2 image directly from glance's local storage location.

I made all my changes successfully (editing some files, enabling on-boot services, etc.), then shut down the VM and detached the .qcow2 from the virt-manager VM. I had to run chown glance:glance <image_id> because the image's ownership had changed after it was used by the virt-manager VM.

Next I tried launching a VM from the modified image, but a checksum mismatch sent the VM into an error state on the compute nodes.

My understanding now is that when an image is registered with glance, a checksum (md5) is calculated over the image's contents and stored in the database. My modifications through the virt-manager VM changed those contents, so the checksum calculated on the compute node at VM-creation time no longer matched the one recorded when the image was registered.
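For what it's worth, the content-based nature of the checksum is easy to demonstrate outside of glance (a minimal sketch; the temp files here just stand in for image files):

```shell
# Quick demonstration that md5 depends on content, not size:
# two files of identical size yield different checksums.
A=$(mktemp); B=$(mktemp)
printf 'aaaa' > "$A"
printf 'aaab' > "$B"                       # same size, one byte differs
SUM_A=$(md5sum "$A" | awk '{print $1}')
SUM_B=$(md5sum "$B" | awk '{print $1}')
echo "$SUM_A"
echo "$SUM_B"
rm -f "$A" "$B"
```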

The error log from the failed VM creation clearly shows both the calculated checksum and the expected one. I edited the checksum in the database accordingly and was then able to launch the VM on the compute nodes.
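The fix-up I did can be sketched like this (the file-backend path /var/lib/glance/images/<image_id> and the SQL are assumptions to adapt to your deployment; a temp file stands in for the real image so the snippet is self-contained):

```shell
# Sketch: recompute what glance stores for a modified image file.
# In a real deployment IMAGE_FILE would be something like
# /var/lib/glance/images/<image_id> (assumed file-backend path).
IMAGE_FILE=$(mktemp)
printf 'modified image contents' > "${IMAGE_FILE}"

NEW_MD5=$(md5sum "${IMAGE_FILE}" | awk '{print $1}')   # content checksum
NEW_SIZE=$(stat -c %s "${IMAGE_FILE}")                 # size changes too
echo "checksum=${NEW_MD5} size=${NEW_SIZE}"

# The corresponding (risky!) database fix-up would then look like:
#   UPDATE images SET checksum='<NEW_MD5>', size=<NEW_SIZE>
#     WHERE id='<image_id>';
rm -f "${IMAGE_FILE}"
```

Note that the size column should be updated along with the checksum, since editing the image usually changes both.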

So, my question is: can I generate the checksum for a modified glance image without having to dig it out of the logs, and if so, how?


Hi,

We're using Ceph as the glance backend, and I tested this only with RAW images (not qcow2), but I think the logic applies to all image types. Hopefully this information helps someone and saves some time.

Glance stores information about an image's location in its database, in the image_locations table. In the case of Ceph, when an image is added, glance also creates a snapshot of it, and that snapshot is effectively the image itself. So to get the md5 of the image, you calculate the md5sum of the data at the image location (retrieved from the value field of the image_locations table).

In the case of Ceph, that location is an RBD volume. So, for example, if you have an image with UUID c30444c5-eec2-46e2-b5d1-cae1e70a02af, you would run this query:

MariaDB [glance]> select * from image_locations where image_id = 'c30444c5-eec2-46e2-b5d1-cae1e70a02af';
+----+--------------------------------------+---------------------------------------------------------------------------------------------+---------------------+---------------------+------------+---------+-----------+--------+
| id | image_id                             | value                                                                                       | created_at          | updated_at          | deleted_at | deleted | meta_data | status |
+----+--------------------------------------+---------------------------------------------------------------------------------------------+---------------------+---------------------+------------+---------+-----------+--------+
| 14 | c30444c5-eec2-46e2-b5d1-cae1e70a02af | rbd://9be0420c-8d0c-40e3-ac68-3e3cef496c63/images/c30444c5-eec2-46e2-b5d1-cae1e70a02af/snap | 2018-03-23 19:29:14 | 2018-03-23 19:29:14 | NULL       |       0 | {}        | active |
+----+--------------------------------------+---------------------------------------------------------------------------------------------+---------------------+---------------------+------------+---------+-----------+--------+


Next, you need to map the RBD device:

rbd map images/c30444c5-eec2-46e2-b5d1-cae1e70a02af@snap


Now you're ready to calculate the md5sum:

dd if=/dev/rbd0 | md5sum
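On large images the plain dd invocation is slow; a larger block size helps. Here is a sketch of the same pipeline run against an ordinary file so it is self-contained (against the mapped device you would use if=/dev/rbd0 instead; the 1 MiB of zeroes is just stand-in data):

```shell
SRC=$(mktemp)
head -c 1048576 /dev/zero > "${SRC}"        # 1 MiB of stand-in data
# Same pipeline, with a larger block size and dd's progress output silenced:
PIPED=$(dd if="${SRC}" bs=4M status=none | md5sum | awk '{print $1}')
# Sanity check: hashing the file directly must give the same result.
DIRECT=$(md5sum "${SRC}" | awk '{print $1}')
echo "${PIPED}"
rm -f "${SRC}"
```

When you are finished, you can release the device with rbd unmap /dev/rbd0.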
