Cinder Backup to Ceph Failed. [closed]

asked 2015-02-09 03:47:34 -0500

idzzy

Hello,

I'd like to ask about backing up a Cinder volume to Ceph. When I run a backup via cinder-backup, the error message "Exception during message handling: error calling connect" is output. Is there some configuration I am missing, or something I did wrong? Any advice would be helpful.

First, I created a pool in Ceph for Cinder backups:

# ceph osd pool create backups 128
pool 'backups' created

I also set up the Ceph client on the controller node where the Cinder services run.

I created a user for cinder-backup:

# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[client.cinder-backup]
  key = AQAdXNhUwF+wLxAABvU/pD8SNA3hgjK5MRbfbA==

Then I added the keyring for client.cinder-backup:

# ceph auth get-or-create client.cinder-backup > /etc/ceph/ceph.client.cinder-backup.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

I restarted the Ceph monitor just in case. Adding a test object to the backups pool worked:

# rados -p backups put test.obj ./test.obj
# rados -p backups ls
test.obj
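Note that this check ran with the default admin keyring. To exercise the credentials that cinder-backup will actually use, the same listing can be repeated as the cinder-backup user (a sketch; `--id` and `--keyring` are standard rados CLI options):

```shell
# List the backups pool as client.cinder-backup rather than client.admin,
# using the keyring file created above.
rados -p backups ls --id cinder-backup \
      --keyring /etc/ceph/ceph.client.cinder-backup.keyring
```

If this fails while the admin run succeeds, the problem is in the user's caps or keyring rather than in the pool itself.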

On the OpenStack side, I modified cinder.conf as follows:

# egrep "^backup_" /etc/cinder/cinder.conf
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
backup_driver=cinder.backup.drivers.ceph
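Before restarting the services, it is worth cross-checking that the user named in backup_ceph_user actually exists in the cluster with the expected caps (a sketch; `ceph auth get` is a standard command):

```shell
# Print the key and capabilities for the user cinder.conf refers to.
# backup_ceph_user=cinder means Cinder will authenticate as client.cinder.
ceph auth get client.cinder
```

The value of backup_ceph_user must match a client.<name> entry that has rwx access to the backups pool.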

Then I restarted all relevant Cinder services (backup, api, volume, scheduler, etc.).

Then I tried a backup:

# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ba211e1c-b7c3-47af-8aab-3e385b1af927 | available | admin_vol01  |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

# cinder backup-create --display-name admin_vol01_backup admin_vol01
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 8f2e39c6-081e-421d-867b-0dd96e80fb89 |
|    name   |          admin_vol01_backup          |
| volume_id | ba211e1c-b7c3-47af-8aab-3e385b1af927 |
+-----------+--------------------------------------+

# cinder backup-list
+--------------------------------------+--------------------------------------+--------+--------------------+------+--------------+-----------+
|                  ID                  |              Volume ID               | Status |        Name        | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+--------+--------------------+------+--------------+-----------+
| 8f2e39c6-081e-421d-867b-0dd96e80fb89 | ba211e1c-b7c3-47af-8aab-3e385b1af927 | error  | admin_vol01_backup |  1   |     None     |    None   |
+--------------------------------------+--------------------------------------+--------+--------------------+------+--------------+-----------+

The following error messages were output in cinder.log:

<158>Feb  9 08:28:37 node-17 cinder-api 2015-02-09 17:28:37.479 21545 INFO cinder.api.openstack.wsgi [req-de9e82fe-49ce-481b-9553-fb5295152b51 a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -] POST http://192.168.100.2:8776/v1/a8085d1871f847bdb906ef4a28d1bd48/backups
<156>Feb  9 08:28:37 node-17 cinder-api 2015-02-09 17:28:37.481 21545 AUDIT cinder.api.contrib.backups [req-de9e82fe-49ce-481b-9553-fb5295152b51 a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -] Creating backup of volume ba211e1c-b7c3-47af-8aab-3e385b1af927 in container None
<158>Feb  9 08:28:37 node-17 cinder-api 2015-02-09 17:28:37.569 21545 INFO cinder.api.openstack.wsgi [req-de9e82fe-49ce-481b-9553-fb5295152b51 a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -] http://192.168.100.2:8776/v1/a8085d1871f847bdb906ef4a28d1bd48/backups returned with HTTP 202
<156>Feb  9 08:28:37 node-17 cinder-backup 2015-02-09 17:28:37.571 24421 WARNING cinder.context [-] Arguments dropped when creating context: {u'user': u'a1c2ec19759040e39f219fcf82028bf6', u'tenant': u'a8085d1871f847bdb906ef4a28d1bd48', u'user_identity': u'a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -'}
<158>Feb  9 08:28:37 node-17 cinder-api 2015-02-09 17:28:37.571 21545 INFO eventlet.wsgi.server [req-de9e82fe-49ce-481b-9553-fb5295152b51 a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -] 10.30.0.2 - - [09/Feb/2015 17:28:37] "POST /v1/a8085d1871f847bdb906ef4a28d1bd48/backups HTTP/1.1" 202 615 0.101813
<158>Feb  9 08:28:37 node-17 cinder-backup 2015-02-09 17:28:37.619 24421 INFO cinder.backup.manager [req-de9e82fe-49ce-481b-9553-fb5295152b51 a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -] Create backup started, backup: 8f2e39c6-081e-421d-867b-0dd96e80fb89 volume: ba211e1c-b7c3-47af-8aab-3e385b1af927.
<158>Feb  9 08:28:37 node-17 cinder-backup 2015-02-09 17:28:37.620 24421 INFO cinder.backup.manager [req-de9e82fe-49ce-481b-9553-fb5295152b51 a1c2ec19759040e39f219fcf82028bf6 a8085d1871f847bdb906ef4a28d1bd48 - - -] Backend not found in ...
(more)

Closed for the following reason: the question is answered; the right answer was accepted by idzzy.
close date 2015-03-02 19:30:02.563688

2 answers


answered 2015-02-15 04:08:03 -0500

hyperbaba

In cinder.conf you need to use the cinder-backup user instead of the cinder user.


answered 2015-02-16 23:34:56 -0500

idzzy

Hyperbaba,

Oops, my small mistake. Thanks for pointing that out. It works!!

Cinder.conf:

# egrep "^backup_" /etc/cinder/cinder.conf
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
backup_driver=cinder.backup.drivers.ceph

Cinder backup:

# cinder backup-create --display-name admin_vol01_backup admin_vol01
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 71bd2ee4-a657-4f79-a842-bb7ed80b7774 |
|    name   |          admin_vol01_backup          |
| volume_id | 5175d6be-735c-411e-be21-85015c332d99 |
+-----------+--------------------------------------+

Check the backup:

# cinder backup-list
+--------------------------------------+--------------------------------------+-----------+--------------------+------+--------------+-----------+
|                  ID                  |              Volume ID               |   Status  |        Name        | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+-----------+--------------------+------+--------------+-----------+
| 71bd2ee4-a657-4f79-a842-bb7ed80b7774 | 5175d6be-735c-411e-be21-85015c332d99 | available | admin_vol01_backup |  1   |     None     |  backups  |
+--------------------------------------+--------------------------------------+-----------+--------------------+------+--------------+-----------+

# cinder backup-show 71bd2ee4-a657-4f79-a842-bb7ed80b7774
+-------------------+--------------------------------------+
|      Property     |                Value                 |
+-------------------+--------------------------------------+
| availability_zone |                 nova                 |
|     container     |               backups                |
|     created_at    |      2015-02-17T05:13:05.000000      |
|    description    |                 None                 |
|    fail_reason    |                 None                 |
|         id        | 71bd2ee4-a657-4f79-a842-bb7ed80b7774 |
|        name       |          admin_vol01_backup          |
|    object_count   |                 None                 |
|        size       |                  1                   |
|       status      |              available               |
|     volume_id     | 5175d6be-735c-411e-be21-85015c332d99 |
+-------------------+--------------------------------------+

Check the objects in the Ceph pool:

# rados -p backups ls
backup.71bd2ee4-a657-4f79-a842-bb7ed80b7774.meta
rbd_directory
rbd_id.volume-5175d6be-735c-411e-be21-85015c332d99.backup.base
rbd_header.12d493d221400
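For completeness, a restore can be driven from the same CLI (a sketch; on this era of the cinder client, omitting a target volume makes Cinder create a new volume for the restored data):

```shell
# Restore the backup created above; without --volume-id the restore
# goes into a freshly created volume.
cinder backup-restore 71bd2ee4-a657-4f79-a842-bb7ed80b7774
```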


Stats

Asked: 2015-02-09 03:47:34 -0500

Seen: 677 times

Last updated: Feb 16 '15