
Cinder Volume (Icehouse 2014.1.3-2) hangs deleting volume with Ceph (0.94)

asked 2015-04-10 09:03:40 -0500

updated 2015-04-10 09:21:54 -0500

After upgrading Ceph from Firefly to Hammer, cinder-volume no longer deletes volumes.
Volume creation still works fine.

When deleting a volume, the debug output below is seen, and the cinder-volume service then becomes unavailable.

The Ceph cluster itself appears healthy and behaves the same as before the upgrade.

Has anyone seen this before? It seems to be cinder-volume's interaction with rbd.

cinder-manage service list

Binary           Host                    Zone             Status     State Updated At
cinder-scheduler ct                  nova             enabled    :-)   2015-04-10 13:35:22
cinder-volume    ct                   nova             enabled    XXX   2015-04-10 13:24:28

Debug output from cinder-volume

2015-04-10 08:56:44.482 5527 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'69bbba71888641e18ca75bb824e1ba3a', 'tenant': u'8192510bf23e44e9a5dcb7b0558092d1', 'user_identity': u'69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -'}
2015-04-10 08:56:44.482 5527 DEBUG cinder.openstack.common.lockutils [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] Got semaphore "8ad3c07b-5947-4376-b2b8-e7c608e35048-delete_volume" for method "lvo_inner2"... inner /usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py:191
2015-04-10 08:56:44.483 5527 DEBUG cinder.openstack.common.lockutils [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] Attempting to grab file lock "8ad3c07b-5947-4376-b2b8-e7c608e35048-delete_volume" for method "lvo_inner2"... inner /usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py:202
2015-04-10 08:56:44.483 5527 DEBUG cinder.openstack.common.lockutils [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] Got file lock "8ad3c07b-5947-4376-b2b8-e7c608e35048-delete_volume" at /var/lib/cinder/tmp/cinder-8ad3c07b-5947-4376-b2b8-e7c608e35048-delete_volume for method "lvo_inner2"... inner /usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py:232
2015-04-10 08:56:44.506 5527 INFO cinder.volume.manager [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] volume 8ad3c07b-5947-4376-b2b8-e7c608e35048: deleting
2015-04-10 08:56:44.507 5527 DEBUG cinder.volume.manager [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] volume 8ad3c07b-5947-4376-b2b8-e7c608e35048: removing export delete_volume /usr/lib/python2.6/site-packages/cinder/volume/manager.py:399
2015-04-10 08:56:44.507 5527 DEBUG cinder.volume.manager [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] volume 8ad3c07b-5947-4376-b2b8-e7c608e35048: deleting delete_volume /usr/lib/python2.6/site-packages/cinder/volume/manager.py:401


2015-04-10 08:57:05.397 5527 DEBUG cinder.volume.drivers.rbd [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] volume has no backup snaps _delete_backup_snaps /usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py:528
2015-04-10 08:57:05.413 5527 DEBUG cinder.volume.drivers.rbd [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] volume volume-8ad3c07b-5947-4376-b2b8-e7c608e35048 is not a clone _get_clone_info /usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py:551
2015-04-10 08:57:05.446 5527 DEBUG cinder.volume.drivers.rbd [req-88f65138-74d5-48e0-a06a-8fd8bbc1cdeb 69bbba71888641e18ca75bb824e1ba3a 8192510bf23e44e9a5dcb7b0558092d1 - - -] deleting rbd volume volume-8ad3c07b-5947-4376-b2b8-e7c608e35048 delete_volume /usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py:628
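For what it's worth, the log itself shows where the stall begins: roughly 21 seconds pass between the manager's "deleting" line (08:56:44.507) and the first line the RBD driver emits (08:57:05.397), which suggests the process is blocking inside the librados/librbd connect or open path rather than in Cinder code. A quick check of the gap from the pasted timestamps:

```python
from datetime import datetime

# Timestamps copied from the log above: the manager's "deleting" line and
# the next line the RBD driver logs.
fmt = "%Y-%m-%d %H:%M:%S.%f"
manager_deleting = datetime.strptime("2015-04-10 08:56:44.507", fmt)
driver_next_line = datetime.strptime("2015-04-10 08:57:05.397", fmt)

gap = (driver_next_line - manager_deleting).total_seconds()
print(gap)  # ~20.89 seconds before the driver even reaches _delete_backup_snaps
```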

After a restart of cinder-volume

2015-04-10 09:08:17.503 11651 DEBUG cinder.volume.manager [req-15123499-df0d-4fce-b66b-01dd6fea665b - - - - -] Re-exporting 3 volumes init_host /usr/lib/python2.6/site-packages/cinder/volume/manager.py:254
2015-04-10 09:08:17.503 11651 INFO cinder.volume.manager [req-15123499-df0d-4fce-b66b-01dd6fea665b - - - - -] volume 5862adc9-3ed3-4afc-8091-2d981dfa55eb: skipping export
2015-04-10 09:08:17.504 11651 INFO cinder.volume.manager [req-15123499-df0d-4fce-b66b-01dd6fea665b - - - - -] volume 8ad3c07b-5947-4376-b2b8-e7c608e35048: skipping export
2015-04-10 09:08:17.504 11651 DEBUG cinder.volume.manager [req-15123499-df0d-4fce-b66b-01dd6fea665b - - - - -] Resuming any in progress delete operations init_host /usr/lib/python2.6/site-packages/cinder/volume/manager.py:293
2015-04-10 09:08:17.504 11651 INFO cinder.volume.manager [req-15123499-df0d-4fce-b66b-01dd6fea665b - - - - -] Resuming delete on volume: 8ad3c07b-5947-4376-b2b8-e7c608e35048
2015-04-10 09:08:17.505 11651 DEBUG cinder.openstack.common.lockutils [req-15123499-df0d-4fce-b66b-01dd6fea665b ...
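To rule Cinder out, it may help to attempt the same delete directly with the `rbd` CLI under a timeout: if `rbd rm` also hangs, the problem is in the Ceph client libraries, not cinder-volume. A minimal sketch of such a check (the pool name `volumes` is an assumption based on the image name in the log):

```python
import subprocess

def run_with_timeout(cmd, timeout):
    """Run a command; return (returncode, stdout), or None if it hangs past timeout."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None  # the command hung, e.g. on unresponsive OSDs

# Hypothetical invocation, using the volume name from the log above:
# result = run_with_timeout(
#     ["rbd", "-p", "volumes", "rm", "volume-8ad3c07b-5947-4376-b2b8-e7c608e35048"],
#     timeout=60)
# if result is None:
#     print("rbd rm hung -- the problem is below Cinder")
```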

2 answers

answered 2015-04-10 13:05:04 -0500

It could be bad OSDs causing long timeouts that rbd.py isn't handling. I need to investigate further, but so far, after removing two OSDs, cinder-volume is no longer hanging.
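I don't know what the proper fix in the driver would be, but the general pattern for not letting one hung librbd call wedge the whole service is a timeout guard around the blocking call. A hypothetical sketch (this is not the actual Cinder code, just an illustration of the idea):

```python
import concurrent.futures

# Single worker pool for potentially blocking Ceph client calls.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def delete_with_timeout(delete_fn, timeout):
    """Run a potentially blocking delete in a worker thread.

    Raises concurrent.futures.TimeoutError if the call stalls past `timeout`.
    The hung worker thread is abandoned rather than joined, so the service's
    main loop stays responsive (at the cost of leaking the stuck thread).
    """
    future = _pool.submit(delete_fn)
    return future.result(timeout=timeout)
```

With something like this, a delete that hangs on unresponsive OSDs would fail with a timeout error (and the volume could go to error_deleting) instead of taking the whole cinder-volume service down.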

answered 2015-04-17 04:08:59 -0500

SGPJ

Thanks for sharing.
