
Instance metadata backup

asked 2019-07-26 05:14:24 -0600

Suraj Jambhulkar

How do we correlate an instance's metadata when we export its snapshot from Ceph to an external drive, e.g. the IDs of additional attached volumes and the instances they were connected to?

Also, could you shed some light on storing a Ceph snapshot or an image on a backup drive or another server?

In addition, how can we restore an instance from such an image?


2 answers


answered 2019-08-14 03:38:52 -0600

Prasanna Ram


Many thanks for your response; I am able to export to an external drive. But when importing, I tried to import over the same image so that the instance ID doesn't change, and Ceph returned the error below:

rbd: image creation failed
Importing image: 0% complete...failed.
rbd: import failed: (17) File exists

If I import under a different name, it gets imported. Is it possible to overwrite the base image and replace it with the backup image so that the object ID doesn't change?

Regards, Ram.



Sorry for the late response, I didn't get a notification. The rbd backup is for disaster recovery, so if you're sure that your instance is corrupted, and only then, you can delete the original and import the backup rbd under the same name. Try that with a test instance first, but it should work nicely.

eblock ( 2019-08-26 13:09:12 -0600 )
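A minimal sketch of that delete-and-reimport, with hypothetical pool, image, and backup-file names. Note that rbd rm refuses to delete an image that still has snapshots, so a snap purge comes first. The DRY_RUN guard defaults to 1 here, so the script only prints the commands for review; set DRY_RUN=0 to run them against a real cluster.

```shell
# Restore a backup over the original object ID (sketch, hypothetical names).
# DRY_RUN=1 (the default here) only prints each command instead of running it.
set -eu
POOL=images
IMAGE=77d26eb2-aa20-424c-aaac-30d1e412e79f_disk   # the broken instance disk
BACKUP=/mnt/backup/instance_disk.img              # hypothetical export file

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run rbd snap purge "${POOL}/${IMAGE}"   # rbd rm fails while snapshots exist
run rbd rm "${POOL}/${IMAGE}"           # frees the name, avoiding (17) File exists
run rbd import "${BACKUP}" "${POOL}/${IMAGE}"
```

Because the image is recreated under the same name, the object ID the OpenStack database points at stays valid.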

answered 2019-07-26 08:50:48 -0600

eblock

The only thing you have in Ceph is the object ID, e.g. <instance_id>_disk or volume-<volume_id>, and hopefully the parent information if you use ephemeral disks:

control:~ # rbd info images/77d26eb2-aa20-424c-aaac-30d1e412e79f_disk | grep parent
        parent: images/05d875e9-977d-45dc-b21d-1203782200a1@snap

The rest of the information is stored in the OpenStack databases. For example, openstack server show <instance> lists all attached block devices, and openstack volume show <volume> gives you the mapping to the respective device name (/dev/vda, /dev/sdb, etc.) and the instance it's attached to.
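One way to keep that correlation with the exported snapshots is to save the CLI output next to each backup file. A sketch, assuming a hypothetical instance name "myvm" and volume ID; the DRY_RUN guard defaults to 1 and only prints the commands:

```shell
# Record attachment metadata alongside each export so a restored image can be
# correlated later (sketch; "myvm", the volume ID and paths are hypothetical).
set -eu
INSTANCE=myvm
VOLUME=volume-1234                       # hypothetical volume ID

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

# Instance -> attached volumes (IDs appear under volumes_attached):
run openstack server show "${INSTANCE}" -f json -c id -c name -c volumes_attached
# Volume -> device name and instance (both appear under attachments):
run openstack volume show "${VOLUME}" -f json -c id -c attachments
```

Redirect the JSON output into a file in the backup directory and you can reconstruct which disk belonged to which instance long after the database entries are gone.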

share some light on storing a ceph snapshot or an image in to a backup drive or another server

What exactly do you need to know? Our weekly (full) backup strategy simply creates a snapshot of the instance and then exports it to an external drive. Since Glance images already have protected snapshots, you could export those directly if required. So basically it's just:

rbd snap create pool/image@snapshot
rbd export pool/image@snapshot /path/to/your/backup/image.file
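The two commands above can be wrapped into a small script; pool, image, and destination names below are hypothetical, and the DRY_RUN guard (default 1) only prints the commands. Removing the snapshot after the export keeps snapshots from piling up week after week:

```shell
# Weekly full backup: snapshot, export, drop the snapshot (sketch).
set -eu
POOL=images
IMAGE=77d26eb2-aa20-424c-aaac-30d1e412e79f_disk    # hypothetical
SNAP=backup-$(date +%Y%m%d)
DEST=/mnt/backup/${IMAGE}-${SNAP}.img              # hypothetical path

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run rbd snap create "${POOL}/${IMAGE}@${SNAP}"
run rbd export "${POOL}/${IMAGE}@${SNAP}" "${DEST}"
run rbd snap rm "${POOL}/${IMAGE}@${SNAP}"   # keep only the exported file
```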

Please note that exporting an image results in a flat image; you lose the parent information. I'm not sure whether this has been improved in the latest Ceph release, I'll have to check that. But this leads to your last question, how to restore from that backup. Basically, it's just the reverse operation:

rbd import /path/to/backup/image.file pool/image

The import operation creates a flat image, so it won't have parent data but will be a standalone image/instance. Be aware of this: if you delete the base image from Glance, you won't get a warning anymore that your base image has rbd children. This is not a big deal, but I wanted to mention it. So if your instance broke somehow, just import the backup file into Ceph, use the same instance_id, and you'll be fine. We had to do this multiple times in our production environment and it worked like a charm.

