
Instance metadata backup

asked 2019-07-26 05:14:24 -0500 by Suraj Jambhulkar

How do we correlate an instance's metadata when we export its snapshot from Ceph to external drives, e.g. the IDs of any additional drives and which instance they were attached to?

Please also shed some light on storing a Ceph snapshot or an image on a backup drive or another server.

In addition to this, how can we restore an instance from such an image?


2 answers


answered 2019-08-14 03:38:52 -0500

Hi,

Many thanks for your response, I am able to export to an external drive. But when importing, I tried to import onto the same image so that the instance ID doesn't change, and Ceph returned the error below:

rbd: image creation failed
Importing image: 0% complete...failed.
rbd: import failed: (17) File exists

If I import with a different name, it gets imported. Is it possible to overwrite the base image and replace it with the backup image so that the object ID doesn't change?

Regards, Ram.
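
A possible workaround for the (17) File exists error: as far as I know, rbd import won't overwrite an existing image, so the original name has to be freed first. A sketch with placeholder names (make sure the instance is stopped and the backup file is good before removing anything):

rbd snap purge images/<instance_id>_disk    # delete remaining snapshots (unprotect any protected ones first)
rbd rm images/<instance_id>_disk            # remove the broken image, freeing the name
rbd import /path/to/backup/image.file images/<instance_id>_disk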


answered 2019-07-26 08:50:48 -0500 by eblock

The only thing you have in Ceph is the object ID, e.g. <instance_id>_disk or volume-<volume_id>, and hopefully the parent information if you use ephemeral disks:

control:~ # rbd info images/77d26eb2-aa20-424c-aaac-30d1e412e79f_disk | grep parent
        parent: images/05d875e9-977d-45dc-b21d-1203782200a1@snap

The rest of the information is stored in the OpenStack databases. For example, openstack server show <instance> lists all attached block devices, and openstack volume show <volume> gives you the mapping to the respective device name (/dev/vda, /dev/sdb etc.) and the instance it's attached to.
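
For example, using the instance ID from the rbd info output above (exact field names vary between OpenStack releases, so treat these as a sketch):

openstack server show 77d26eb2-aa20-424c-aaac-30d1e412e79f    # attached volumes are listed, e.g. under "volumes_attached"
openstack volume show <volume_id>                             # the "attachments" field shows device name and server ID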

"shed some light on storing a Ceph snapshot or an image on a backup drive or another server"

What exactly do you need to know? Our weekly (full) backup strategy simply creates a snapshot of the instance and then exports it to an external drive. Since Glance images already have protected snapshots, you could export those directly if required. So basically it's just:

rbd snap create pool/image@snapshot
rbd export pool/image@snapshot /path/to/your/backup/image.file
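
To back up a whole pool, a minimal loop could look like this (pool name, destination path, and snapshot naming are assumptions, not the exact script we use):

#!/bin/bash
# Snapshot every image in the pool, export it, then drop the snapshot.
POOL=images                  # assumed pool name
DEST=/mnt/backup             # assumed mounted backup drive
DATE=$(date +%F)
for IMG in $(rbd ls "$POOL"); do
    rbd snap create "$POOL/$IMG@backup-$DATE"
    rbd export "$POOL/$IMG@backup-$DATE" "$DEST/$IMG-$DATE.img"
    rbd snap rm "$POOL/$IMG@backup-$DATE"
done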

Please note that exporting an image results in a flat image; you lose the parent data. I'm not sure if this has been improved in the latest Ceph release, I'll have to check that. But this leads to your last question: how to restore from that backup. Basically, it's just the reverse operation:

rbd import /path/to/backup/image.file pool/image

The import operation imports the image as a flat image, so it won't have parent data but will be a standalone image/instance. You have to be aware of this because if you delete the base image from Glance, you won't get a warning anymore that your base image has RBD children. This is not a big deal, but I wanted to mention it. So if your instance broke somehow, just import the backup file into Ceph, use the same instance_id, and you'll be fine. We had to do this multiple times in our production environment and it worked like a charm.
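
As a quick sanity check after such a restore (same placeholder names as above), rbd info on the imported image should print no parent line, confirming it is flat:

rbd info images/<instance_id>_disk | grep parent    # no output means the image is flat (no parent)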

