cannot remove volume
If I do cinder delete $volume_id, I get an "error_deleting" status and I cannot remove the volume. What can I do to force its removal?
You should really try to figure out why it is failing in the first place. Set debug=True and verbose=True in cinder.conf, restart the services and make a further delete attempt to see if anything useful pops up in the volume log file.
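For illustration, a minimal sketch of those settings (they live in the [DEFAULT] section of cinder.conf) and a restart; the systemd unit names below are the RDO-style ones and may differ on your distribution:
[DEFAULT]
debug = True
verbose = True
# systemctl restart openstack-cinder-api.service openstack-cinder-volume.service
Then repeat the cinder delete and watch /var/log/cinder/volume.log (or /var/log/upstart/cinder-volume.log on Ubuntu) for the actual failure.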
You can also try the cinder force-delete command, which bypasses some of the checks.
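For example, with a placeholder volume ID:
> cinder force-delete <volume_id>
If the volume is stuck in a state the client refuses to act on, cinder reset-state --state available <volume_id> (normally an admin-only command) can be used first to put it back into a deletable state.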
Last but not least, if you're sure the volume isn't there anymore and there are no references to it (e.g. in /etc/cinder/volumes), you can hack into the database and delete it from the volume_metadata and volumes tables.
I also tried to delete the info from the cinder DB: delete from volumes where id="aaaaaaaaaaaa";
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`cinder`.`iscsi_targets`, CONSTRAINT `iscsi_targets_ibfk_1` FOREIGN KEY (`volume_id`) REFERENCES `volumes` (`id`))
@polslinux I'm not sure how you got into such a situation, but still: ensure the volume target definition files have been deleted and are not imported into your tgtd config, then delete the volume from the database ... also remove the volume from the foreign tables first, as in the sketch below.
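A rough sketch of that order of operations (the ID is a placeholder, and other releases may have additional tables referencing volumes, so check for further foreign keys before deleting):
mysql> update iscsi_targets set volume_id = NULL where volume_id = '<volume_id>';
mysql> delete from volume_metadata where volume_id = '<volume_id>';
mysql> delete from volumes where id = '<volume_id>';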
Thanks a lot for the guide. Neither cinder delete nor force-delete worked for me, but I managed to resolve the problem by setting the corresponding volume's status = "available" and attachment_status = "detached" in cinder.volumes and cinder.volume_attachment. Then cinder delete worked :)
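For anyone doing the same, roughly what those updates look like (the ID is a placeholder; the exact column names vary a little between releases, e.g. attach_status vs attachment_status, so check with describe volumes; first):
mysql> update volumes set status = 'available', attach_status = 'detached' where id = '<volume_id>';
mysql> update volume_attachment set attach_status = 'detached' where volume_id = '<volume_id>';
After that, cinder delete <volume_id> should go through.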
I don't have an answer, just another variation of the same problem.
How can I get out of this without tampering with the database?
I ran the following directly from MySQL and it worked for me.
Connect to your compute node with an SSH client (I use PuTTY):
> cinder list
> cinder snapshot-list
More cinder commands can be found here: http://docs.openstack.org/cli-reference/content/cinderclient_commands.html
Note down the snapshot ID and volume ID that you want to remove.
Now connect to the MySQL server's cinder database and delete the entries:
> mysql -u root cinder
mysql> delete from snapshots where id='xxxxxxx-xxx-xxx-xxx-xxxxxxxxxx';
mysql> delete from volumes where id='xxxxxxx-xxx-xxx-xxx-xxxxxxxxxx';
Done. The snapshot and the volume are now both removed from the Volumes & Snapshots listings.
Make sure you delete the snapshot before removing the volume; a volume should NOT be deleted while snapshots of it still exist.
Responding to myself after doing some reading: apparently there is a blueprint to allow cascading volume and snapshot deletion. We just need to pass the --cascade flag to the volume deletion CLI, which will delete all snapshots along with the volume: https://blueprints.launchpad.net/cind...
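On releases new enough to include that work, the usage is simply (ID is a placeholder):
> cinder delete --cascade <volume_id>
which deletes the volume's snapshots and then the volume itself.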
Hi,
I encountered a similar problem too. I have cinder configured to use a Ceph cluster as the backend. I tried to create a volume from an image, but because I didn't have the glance_host IP configured in cinder.conf, it tried to connect to glance at the default value (localhost). So the volume was not created and ended up in the 'error' state.
I reset the state, but when I try to delete the volume, the message in /var/log/upstart/cinder-volume.log is: error opening image volume-xxxxxxxxxxxxxxxxxxxxx at snapshot None.
With 'rbd -p volumes ls' I cannot see a volume corresponding to that ID. Did anyone encounter a problem like that? Is there a more elegant solution than hacking the database?
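Not an elegant answer, but a couple of checks that stay out of the database (the 'volumes' pool name and the ID are taken from the post above):
# rbd -p volumes ls | grep <volume_id>
to confirm the backing image really is gone, and then
> cinder reset-state --state available <volume_id>
> cinder force-delete <volume_id>
to clear the stuck status and retry the delete.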
I also got this problem. It turned out the volume was still in use by tgtd, so I did the following:
tgt-admin -s | grep <lv_id>
followed by
tgt-admin -delete iqn...openstack:<lv_id>
(basically, the IQN you get from the grep). Then I just force-delete-ed the volume to remove it from the cinder DB.
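Spelled out with placeholders (iqn.2010-10.org.openstack: is only the default iscsi_target_prefix; take the exact IQN from the tgt-admin -s output):
# tgt-admin -s | grep <volume_id>
# tgt-admin --delete iqn.2010-10.org.openstack:volume-<volume_id>
> cinder force-delete <volume_id>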
I am using CentOS 7 with Ceph and ran into the same problem. I found that there were no /var/lock/cinder and /var/lock/nova directories on the compute/cinder nodes. I created them, gave them the appropriate user and group ownership, and restarted the services. After that I was able to delete stuck volumes and instances.
For stuck nova instances:
# mkdir /var/lock/nova
# chown -R nova:nova /var/lock/nova
# systemctl restart openstack-nova-compute.service
For stuck ceph volumes:
# mkdir /var/lock/cinder
# chown -R cinder:cinder /var/lock/cinder
# systemctl restart openstack-cinder-volume.service
Hope that helps.
Before going into the database and tweaking entries there, you can check the /var/log/cinder/volume log to see if the volume in question is reported as 'in use'. If so, you can follow the procedure here, which worked really well for me: https://ask.openstack.org/en/question/87725/failed-to-delete-a-volume-as-the-backend-lv-is-shown-open/
HTH.
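In rough terms, the shape of that procedure on an LVM backend (the cinder-volumes volume group name and the ID are placeholders, and the linked answer has the exact steps):
# lvdisplay cinder-volumes/volume-<volume_id>
to check the '# open' count, then
# dmsetup ls | grep <volume_id>
# dmsetup remove <device_mapper_name>
to drop the stale device-mapper entry holding the LV open, and finally retry cinder delete.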
FYI, there is some work going on here https://review.openstack.org/#/c/184537/ to add a force volume detach command to nova-manage. Input on that review would be helpful for people having this issue.
Yes, same problem here, also with Ceph. My problem was that I upgraded to Havana and forgot to set the appropriate settings,
so what happened was that cinder-volume was down while cinder-api and cinder-scheduler were up.
I created a snapshot, but the volume was never created in Ceph, so it stayed in the 'creating' status. Then I tried to delete it, with no luck, because I needed to change the state first. After doing that I deleted it, and it got stuck in 'error_deleting' because it could not find the image in Ceph.
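Roughly, the commands involved were along these lines (IDs are placeholders): check the services, reset the state, then delete:
> cinder service-list
> cinder reset-state --state available <volume_id>
> cinder delete <volume_id>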