Unable to detach volume from instance during its termination process

Hello all,

If I want to terminate or resize an instance that has already been running for several months, the connection to the volume cannot be terminated and the termination of the instance fails. As a result, the instance is deleted from the compute node and its tap device and bridges are removed; its volume remains in the storage backend (as it should), but the instance stays in the database in error state with the volume still attached to it (see Best way to delete VM from database if it is already deleted from computing node). Do you have any idea why terminating the connection fails? Do you have any suggestions for how the other "old" instances can be terminated successfully?
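For context, this is roughly the kind of manual cleanup I am trying to avoid, sketched with the standard nova and cinder clients (the UUIDs below are placeholders, and the exact flags may differ between OpenStack releases):

```shell
# Placeholder IDs; substitute the real instance and volume UUIDs.
INSTANCE=11111111-2222-3333-4444-555555555555
VOLUME=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

# Reset the instance out of its error state so it can be deleted again.
nova reset-state --active $INSTANCE

# Retry the normal delete; if that fails again, force it.
nova delete $INSTANCE || nova force-delete $INSTANCE

# If cinder still shows the volume as attached afterwards, reset its
# status so it becomes usable again.
cinder reset-state --state available $VOLUME
```

This only papers over the database state, of course; what I am really after is why the detach fails in the first place.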

However, if I create a new instance, its termination succeeds, even when it runs on the same compute node as the failed instances.

The cloud is set up as follows: one controller node runs cinder-api, cinder-scheduler and the corresponding nova services. Three storage nodes form a Ceph cluster, and each of them also runs cinder-volume. The compute nodes run nova-compute. Instances run on the compute nodes, while their volumes are stored only in Ceph.
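To make that layout concrete, this is the sort of check I can run to confirm all services and the Ceph backend are up (assuming the standard clients and admin credentials):

```shell
# Show which hosts run each cinder and nova service and whether they are up.
cinder service-list
nova service-list

# Confirm the Ceph cluster backing the volumes is healthy.
ceph -s
```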

If it would help, I can provide the entries from the nova-api.log of the compute node that hosted the instance whose termination failed.
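In the meantime, I am collecting the relevant lines by searching the logs for the instance UUID (a placeholder below; the log paths assume a default packaged install):

```shell
# Placeholder UUID of the instance whose termination failed.
INSTANCE=11111111-2222-3333-4444-555555555555

# Pull every log line that mentions the instance, with line numbers,
# from the nova logs on the compute node.
grep -n "$INSTANCE" /var/log/nova/nova-compute.log /var/log/nova/nova-api.log
```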