As I understand it, Ceph knows about qcow2 images, but they are not fully supported (performance penalties, among other problems). You can use Ceph's copy-on-write feature so that it does not store the entire image separately, but only the differences from the original image.
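To give an idea of what that looks like at the RBD level, here is a minimal sketch using the python-rbd bindings; the pool name, image name and snapshot name are just placeholders, and the base image needs to be an RBD format 2 image with layering enabled for cloning to work.

    import rados
    import rbd

    # Connect to the cluster and open the pool that holds the base image
    # (the pool and image names below are hypothetical).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('images')

    # Snapshot the base image and protect the snapshot; a snapshot has to be
    # protected before it can be used as the parent of a clone.
    with rbd.Image(ioctx, 'base-image') as img:
        img.create_snap('base-snap')
        img.protect_snap('base-snap')

    # The clone is created instantly and only stores the differences from the
    # protected snapshot, not a full copy of the data.
    rbd.RBD().clone(ioctx, 'base-image', 'base-snap', ioctx, 'vm-disk-clone')

    ioctx.close()
    cluster.shutdown()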
The problem comes if you are using Nova ephemeral storage for the instances. When we used ephemeral storage a while back and issued a snapshot command for an instance, Nova didn't know about Ceph's copy-on-write feature, so it called qemu for the snapshot, which made the process painfully slow. It first created a copy of the instance disk on local compute storage and then uploaded it into Glance. It was a long, bandwidth-intensive process, and we needed local space on the compute nodes just for snapshotting.
That is why we dropped Nova's ephemeral storage in favor of booting from Cinder volumes. Snapshotting a volume works as expected: it makes a copy-on-write clone of the volume, which is instant. Of course this approach has its downsides, the primary one being the extra step of creating a volume from an image prior to launching the instance (or using the boot-from-image, create-a-new-volume option from the CLI, because the dashboard option had a timeout problem with the image download to Cinder). A benefit of using Cinder volumes as the boot drive is that we can take advantage of Cinder's multi-backend feature, so we can have multiple volume types (e.g. SSD, HDD) mapped to different pools in Ceph that we can boot instances from!
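As a rough illustration of that workflow (create a volume from an image, then boot from it), here is a sketch using the openstacksdk cloud layer; the cloud name, image name, flavor and the 'ssd' volume type are assumptions, and that volume type would have to be wired to an SSD-backed Ceph pool through Cinder's multi-backend configuration (enabled_backends in cinder.conf plus a volume_backend_name extra spec on the type).

    import openstack

    # Credentials come from a clouds.yaml entry; 'mycloud' is a placeholder.
    conn = openstack.connect(cloud='mycloud')

    # Step 1: create a bootable volume from a Glance image. With Glance and
    # Cinder both on Ceph RBD this can end up as a copy-on-write clone.
    vol = conn.create_volume(
        size=20,                   # GB
        image='ubuntu-16.04',      # hypothetical Glance image name
        volume_type='ssd',         # hypothetical type mapped to an SSD-backed pool
        name='vm1-boot',
        wait=True,
    )

    # Step 2: boot the instance from the volume instead of ephemeral storage.
    flavor = conn.get_flavor('m1.small')   # hypothetical flavor
    server = conn.create_server(
        name='vm1',
        flavor=flavor,
        boot_volume=vol.id,
        wait=True,
    )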
That being said, I read somewhere that instance snapshotting has been integrated with Ceph in Liberty, but I cannot confirm this as I haven't tested it yet!
Hope this helps!