
Unable to detach volume from instance during its termination process [closed]

asked 2016-01-19 09:15:19 -0500

DanielJ

Hello all,

If I want to terminate or resize an instance that has already been running for several months, the connection to the volume cannot be terminated and the termination of the instance fails. The result is that the instance is deleted from the compute node, its tap device and bridges are removed, and its volume remains in the storage backend (as it should), but the instance stays in the database in error state with the volume still attached to it (see https://ask.openstack.org/en/question/87364/best-way-to-delete-vm-from-database-if-it-is-already-deleted-from-computing-node/ (Best way to delete VM from database if it is already deleted from computing node)). Do you have any ideas why the termination of the connection fails? Do you have any suggestions for what I can do so that the other "old" instances can be terminated successfully?
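In case it helps, the manual cleanup I currently resort to looks roughly like this (the UUIDs are placeholders; this only repairs the database bookkeeping, not the underlying cause):

    # Illustrative cleanup of an instance stuck in ERROR state
    # after a failed termination.
    nova reset-state --active <instance-uuid>    # clear the ERROR state
    nova delete <instance-uuid>                  # retry the termination

    # If the volume is still reported as "in-use" afterwards:
    cinder reset-state --state available <volume-uuid>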

However, if I create a new instance, its termination succeeds, even when it runs on the same compute node as the failed instances.

The cloud is set up as follows: one controller node runs cinder-api, cinder-scheduler and the corresponding nova services. Three storage nodes form a Ceph cluster, and each of them also runs cinder-volume. The compute nodes run nova-compute. The instances run on the compute nodes, while their volumes are stored exclusively in Ceph.
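For completeness, the Ceph backend of cinder-volume is configured roughly like this on each storage node (the values are illustrative; the pool name, user name and secret UUID may differ in your deployment):

    # /etc/cinder/cinder.conf (excerpt, illustrative values)
    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>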

If it would help, I can provide the entries from the nova-api.log of the compute node on which the failed instance was located.

Regards,

Daniel


Closed for the following reason: the question is answered, right answer was accepted by DanielJ
close date 2016-01-20 04:02:56.684767

1 answer


answered 2016-01-20 04:02:38 -0500

DanielJ

The problem was that the Ceph configuration file on two of the three storage nodes was missing the cephx authentication information. After adding the missing entries and restarting the cinder-volume service, everything worked fine again.
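For anyone hitting the same symptoms, a minimal sketch of the missing piece, assuming the Cinder Ceph user is called client.cinder (adjust the user name and keyring path to your deployment):

    # /etc/ceph/ceph.conf on each storage node (excerpt)
    [client.cinder]
    keyring = /etc/ceph/ceph.client.cinder.keyring

After adding this section, restart the volume service (e.g. service cinder-volume restart) so that cinder-volume picks up the credentials.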

