
Can we use qcow2 as the snapshot image format when the storage backend is Ceph?

asked 2015-11-03 00:56:05 -0500 by senyapsudah

Hi Sifu,

Just a quick check: is it possible for us to save an instance snapshot as qcow2 when we are using Ceph as our storage backend? The reason I'm asking is that I believe qcow2 allows us to compress the snapshot image. By default our snapshots are saved as raw, so if the instance image is 100GB we have to wait quite some time for the snapshot to finish.
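For reference, this is the kind of manual compression I mean, just as a rough sketch (the filenames are placeholders for a raw snapshot downloaded from Glance):

    # convert a raw snapshot to a compressed qcow2 (-c compress, -p show progress)
    qemu-img convert -c -p -O qcow2 snapshot.raw snapshot.qcow2
    # compare the resulting file size with the virtual disk size
    qemu-img info snapshot.qcow2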

Please advise.

Thanks.


1 answer


answered 2015-11-03 05:15:04 -0500 by capsali

As I understand it, Ceph knows about qcow2 images but they are not fully supported (there are performance penalties, among other problems). You can use Ceph's copy-on-write feature, so it will not store the entire image separately but only the differences from the original image.
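To illustrate what copy-on-write means at the RBD level, here is a rough sketch with the rbd CLI (the pool and image names are just placeholders):

    # snapshot the base image and protect the snapshot so it can be cloned
    rbd snap create images/base-image@snap
    rbd snap protect images/base-image@snap
    # the clone only stores the blocks that differ from the protected snapshot
    rbd clone images/base-image@snap vms/instance-disk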

The problem comes if you are using Nova ephemeral storage for the instances. When we used ephemeral storage a while back, Nova didn't know about Ceph's copy-on-write feature when issuing a snapshot command for an instance, so it called qemu for the snapshot, making the process painful and long. It first created a copy of the instance disk on local compute storage and then uploaded it into Glance. It was a long, bandwidth-intensive process, and we needed local space on the compute nodes just for snapshotting.

That is why we dropped Nova's ephemeral storage in favour of boot-from-volume with Cinder. Snapshotting a volume works as expected, making an instant copy-on-write clone of the volume. Of course this approach has its downsides, primarily the extra step of creating a volume from an image prior to launching the instance (or using the boot-from-image, create-a-new-volume option from the CLI, because the dashboard option had a timeout problem with the image download to Cinder). A benefit of using Cinder as the boot drive for instances is that we can take advantage of Cinder's multi-backend feature, so we can have multiple volume types (e.g. SSD, HDD) mapped to different pools in Ceph that we can boot instances from!
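A rough sketch of that workflow with the nova/cinder CLIs we used back then (the type names, sizes and <...> IDs are placeholders, adjust them to your setup):

    # volume types map to different Ceph pools through Cinder multi-backend
    cinder type-create ceph-ssd
    cinder type-key ceph-ssd set volume_backend_name=ceph-ssd
    # create a bootable volume from a Glance image (size in GB)
    cinder create --image-id <IMAGE_ID> --volume-type ceph-ssd --display-name web01-boot 20
    # boot the instance from that volume
    nova boot --flavor m1.medium --boot-volume <VOLUME_ID> --nic net-id=<NET_ID> web01
    # snapshotting the volume is an instant copy-on-write clone inside Ceph
    cinder snapshot-create --display-name web01-snap <VOLUME_ID>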

That being said, I read somewhere that instance snapshotting has been integrated with Ceph in Liberty, but I cannot confirm this as I haven't tested it yet!

Hope this helps!


Comments

Hi capsali,

Thanks for the input. Hopefully this will be integrated in Liberty soon. By the way, I have one quick question: I noticed that Glance and cinder-backup have configuration options related to Swift. What do you think of pushing snapshots to Swift? Would it improve this? Is there any advantage for us?

senyapsudah ( 2015-11-03 06:03:05 -0500 )

I can't shed any light on Swift because we don't use it in our environment. We use the Ceph gateway (radosgw) instead (the fewer the systems the better). You can use Ceph as a Swift backend, but I don't have any info on this!

capsali ( 2015-11-03 06:51:29 -0500 )

As for your question, I don't think there will be any improvement, seeing that Nova still uses qemu for snapshotting!

capsali ( 2015-11-03 06:52:02 -0500 )
