cannot remove volume

asked 2013-05-20 05:26:50 -0600

polslinux

If I do "cinder delete $volume_id" I get an "error_deleting" status and I cannot remove the volume! What can I do to force its removal?


9 answers


answered 2013-05-20 06:06:12 -0600

gfidente

You should really try to figure out why it is failing in the first place. Set debug=True and verbose=True in cinder.conf, restart the services and make a further delete attempt to see if anything useful pops up in the volume log file.
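For reference, the relevant cinder.conf fragment would look something like this (a sketch; on the releases current at the time, both options live in the [DEFAULT] section):

```ini
[DEFAULT]
# Enable verbose and debug-level logging for the cinder services.
debug = True
verbose = True
```

Restart the cinder services after editing, then retry the delete while watching the volume log.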

You can also try using the cinder force-delete command, which bypasses some checks.

Last but not least, if you're sure the volume isn't there anymore and there are no references to it (e.g. in /etc/cinder/volumes), you can hack into the database and delete it from the volumes_metadata and volumes tables.



Thanks for your answer! When I do cinder force-delete or cinder delete I get this message: [19332.031250] iscsi_trgt: iscsi_volume_del(303) 1 0.

polslinux ( 2013-05-20 11:53:41 -0600 )

I also tried to delete the info from the cinder DB: delete from volumes where id='aaaaaaaaaaaa'; ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`cinder`.`iscsi_targets`, CONSTRAINT `iscsi_targets_ibfk_1` FOREIGN KEY (`volume_id`) REFERENCES `volumes` (`id`))

polslinux ( 2013-05-20 11:53:52 -0600 )

@polslinux I'm not sure how you got into such a situation, but still: ensure the volume target definition files have been deleted and are not imported into your tgtd config, then remove the volume's rows from the foreign tables (e.g. iscsi_targets) and finally delete the volume from the database.

gfidente ( 2013-06-12 09:52:58 -0600 )
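Given the foreign-key error above, the order matters: rows referencing the volume have to go before the volume row itself. A minimal sketch against the cinder database, where 'aaaaaaaaaaaa' stands in for the real volume UUID:

```sql
-- Child rows referencing the volume must go first, otherwise the
-- iscsi_targets_ibfk_1 constraint blocks the delete.
DELETE FROM iscsi_targets WHERE volume_id = 'aaaaaaaaaaaa';
-- Now the parent row can be removed.
DELETE FROM volumes WHERE id = 'aaaaaaaaaaaa';
```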

You may also (but shouldn't) force deletion from the DB: SET foreign_key_checks = 0; DELETE FROM ...; SET foreign_key_checks = 1;

laboshinl ( 2013-10-25 06:49:37 -0600 )

Thanks a lot for the guide. Cinder delete or force-delete didn't work for me, but I managed to resolve the problem by setting the corresponding volume's status = "available" and attachment_status = "detached" in cinder.volumes and cinder.volume_attachment. Then cinder delete worked :)

聪明健康 ( 2016-02-14 21:44:12 -0600 )
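The status reset described in the comment above can be sketched as SQL; the table and column names are as reported by the commenter, so verify them against your own schema (the volumes table typically uses attach_status) before running anything:

```sql
-- Reset the stuck volume so a normal "cinder delete" is accepted.
UPDATE volumes SET status = 'available', attach_status = 'detached'
    WHERE id = 'xxxxxxx-xxx-xxx-xxx-xxxxxxxxxx';
-- Mark any lingering attachment record as detached as well.
UPDATE volume_attachment SET attach_status = 'detached'
    WHERE volume_id = 'xxxxxxx-xxx-xxx-xxx-xxxxxxxxxx';
```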

answered 2013-11-12 13:48:46 -0600

dachary

I don't have an answer. Just another variation of the same problem.

  • Created a Ceph volume using "boot from image and create volume" in the Horizon dashboard
  • It fails and leaves a volume that does not match any Ceph RBD image
  • When I force-delete the volume, nothing happens
  • When I delete the volume, it says: "ERROR: Invalid volume: Volume status must be available or error, but current status is: error_deleting"

How can I get out of this without tampering with the database?




You need to use the reset-state API method. Change the state from error_deleting to error so that it's valid for delete. You still need to figure out what's up with the RBD driver that got you here in the first place, though.

jgriffith ( 2013-12-27 11:11:54 -0600 )
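The reset-state suggestion above looks like this on the command line (a sketch; <volume-id> is a placeholder for the stuck volume's UUID):

```shell
# Move the volume out of error_deleting so the API accepts a delete...
cinder reset-state --state error <volume-id>
# ...then delete it normally.
cinder delete <volume-id>
```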

answered 2014-10-20 12:08:21 -0600

LZ

updated 2014-10-20 12:19:23 -0600

I tried the following directly from MySQL and it worked for me.

Connect to your compute node with an SSH client (I use PuTTY):

> cinder list

> cinder snapshot-list

More cinder commands can be found in the cinder CLI documentation.

Note down the snapshot ID and volume ID that you want to remove.

Now connect to the MySQL server's cinder database and delete the entries:

> mysql -u root cinder

mysql> delete from snapshots where id='xxxxxxx-xxx-xxx-xxx-xxxxxxxxxx';

mysql> delete from volumes where id='xxxxxxx-xxx-xxx-xxx-xxxxxxxxxx';


Now the snapshot and volume are both removed from the Volumes & Snapshots listings.

Make sure you delete the snapshot first, before removing the volume.

A volume should NOT be deleted before its connected snapshots are deleted.



This fixed my same issue with DevStack. I had to provide the MySQL password from local.conf: "--password=[password]".

sorbert ( 2015-11-15 20:02:39 -0600 )

Can you elaborate on "Volume should NOT be deleted before connected snapshots are deleted"? Why does Cinder not allow deleting a volume which has a snapshot?

Nodir ( 2016-06-30 20:56:42 -0600 )

Responding to myself after doing some reading: apparently there is a blueprint to allow combined volume and snapshot deletion. We just need to pass the --cascade flag to the volume delete CLI, which will delete all snapshots and then the volume.

Nodir ( 2016-06-30 22:42:22 -0600 )
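With a client release that supports it, the cascade deletion mentioned above is a single command (a sketch; <volume-id> is a placeholder):

```shell
# Deletes the volume's snapshots first, then the volume itself.
cinder delete --cascade <volume-id>
```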

answered 2014-01-28 07:37:23 -0600

gabriel_staicu


I encountered a similar problem. I have cinder configured to use a Ceph cluster as backend. I tried to create a volume from an image, but because I didn't have the glance_host IP configured in cinder.conf, it tried to connect to glance at the default value (localhost). So the volume was not created and has state 'error'.

I have reset the state, but when I try to delete the volume, the message in /var/log/upstart/cinder-volume.log is: error opening image volume-xxxxxxxxxxxxxxxxxxxxx at snapshot None.

With 'rbd -p volumes ls' I cannot see a volume corresponding to that ID. Did anyone encounter a problem like that? Is there a more elegant solution than hacking the database?


answered 2013-10-25 00:06:21 -0600

cronhan

I also got this problem. It turns out the volume was in use by the tgtd, so I did the following:

tgt-admin -s | grep <lv_id>

followed by

tgt-admin --delete iqn...openstack:<lv_id> (basically, what you get from grep).

Then I just force-deleted the volume to remove it from the cinder DB.


answered 2015-11-08 12:04:09 -0600

niels

I am using CentOS 7 with Ceph and ran into the same problem. I found that there were no /var/lock/cinder and /var/lock/nova directories on the compute/cinder nodes. I created them, gave them user and group permissions, and restarted services. After that I was able to delete stuck volumes and instances.

For stuck nova instances:

# mkdir /var/lock/nova
# chown -R nova:nova /var/lock/nova
# systemctl restart openstack-nova-compute.service

For stuck ceph volumes:

# mkdir /var/lock/cinder
# chown -R cinder:cinder /var/lock/cinder
# systemctl restart openstack-cinder-volume.service

Hope that helps.


answered 2016-08-09 03:30:46 -0600

stavros

Before going into the database and tweaking entries there, you can check the /var/log/cinder/volume log to see if the volume in question is reported as 'in use'. If so, you can follow the procedure here, which worked really well for me.



answered 2015-10-26 10:43:20 -0600

mriedem

FYI, there is some work going on to add a force volume detach command to nova-manage. Input on that review would be helpful for people having this issue.


answered 2014-10-09 05:04:07 -0600

Ramonskie

Yes, same problem here; it also happened with Ceph. My problem was that I upgraded to Havana and forgot to set the appropriate settings.

So what happened was: cinder-volume was down, cinder-api was up, cinder-scheduler was up.

I created a snapshot, but the volume was never created in Ceph, so it was stuck in 'creating' status. Then I tried to delete it, with no luck, because I needed to change the state. After that I deleted it, and then it was stuck in error_deleting because it could not find the image in Ceph.




Seen: 43,736 times

Last updated: Aug 09 '16