
delete instance in error state

asked 2015-09-29 21:32:38 -0600

updated 2015-09-29 21:42:15 -0600

If there were a FAQ list on ask.openstack.org, this question would be at the top. What do I do with an instance that is stuck in the error state?

Background: What I actually wanted was to remove a Cinder volume that was stuck in the error state. cinder delete complained that the volume was attached to my instance. When I deleted the instance, the instance itself ended up in error. I could deal with the volume with cinder reset-state --state active, but the error instance won't go away.
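
For reference, the usual sequence for a stuck volume like this is roughly the following (the volume ID is a placeholder; 'available' is the normal status for a detached, deletable volume):

cinder delete <volume-id>                          # refused while Cinder thinks the volume is attached
cinder reset-state --state available <volume-id>   # force the volume status
cinder delete <volume-id>                          # now succeeds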

  • nova reset-state --active works, but when I try to delete the instance, it goes back to its error state (the commands are sketched after this list)
  • nova service-list shows everything is up. For what it's worth, neutron and cinder services are also up.
  • The instance doesn't run on its compute node anymore (virsh list --all produces an empty list)
  • On the compute node, nova-compute.log contains messages like nova.virt.libvirt.driver (...) "During wait destroy, instance disappeared" and "Setting instance vm_state to ERROR", and a stack trace ending with oslo_messaging.rpc.dispatcher ClientException: "The server has either erred or is incapable of performing the requested operation"
  • In the same nova-compute.log, I also occasionally see "While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor", which matches what virsh shows.
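
The checks above, roughly as commands (the UUID is the instance from the log below; the log path assumes a typical RDO-style layout):

nova reset-state --active 61228cab-4a53-4dd4-82da-e1f7ddff4270   # instance shows ACTIVE again
nova delete 61228cab-4a53-4dd4-82da-e1f7ddff4270                 # puts it straight back into ERROR
nova service-list                                                # every nova service reports up
virsh list --all                                                 # on the compute node: empty
tail /var/log/nova/nova-compute.log                              # shows the traceback quoted below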

An excerpt from the compute node's nova-compute.log is below.

2015-09-30 11:31:01.885 1339 INFO nova.compute.manager [req-381c9890-1b1c-4f83-9ed1-fde86b4346cc 4af9177747924adeb48265fff8191f43 bc9a09ea0e594b7ba3980cd1721cf2a7 - - -] [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270] Terminating instance
2015-09-30 11:31:01.890 1339 INFO nova.virt.libvirt.driver [-] [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270] During wait destroy, instance disappeared.
2015-09-30 11:31:02.377 1339 INFO nova.virt.libvirt.driver [req-381c9890-1b1c-4f83-9ed1-fde86b4346cc 4af9177747924adeb48265fff8191f43 bc9a09ea0e594b7ba3980cd1721cf2a7 - - -] [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270] Deletion of /var/lib/nova/instances/61228cab-4a53-4dd4-82da-e1f7ddff4270_del complete
2015-09-30 11:31:04.200 1339 ERROR nova.compute.manager [req-381c9890-1b1c-4f83-9ed1-fde86b4346cc 4af9177747924adeb48265fff8191f43 bc9a09ea0e594b7ba3980cd1721cf2a7 - - -] [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270] Setting instance vm_state to ERROR
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270] Traceback (most recent call last):
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2725, in do_terminate_instance
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]     self._delete_instance(context, instance, bdms, quotas)
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]   File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]     rv = f(*args, **kwargs)
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2694, in _delete_instance
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]     quotas.rollback()
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager [instance: 61228cab-4a53-4dd4-82da-e1f7ddff4270]     six.reraise(self.type_, self.value, self.tb)
2015-09-30 11:31:04.200 1339 TRACE nova.compute.manager ...

Comments

I was able to remove the instance and all dependent records from the nova database (and hope that I didn't break anything in the process), but I am still interested in the "right way" to handle such a situation.
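
Roughly, and assuming a MySQL backend with the usual table and column names (back up the database first; the rows are soft-deleted, i.e. marked as deleted, rather than dropped; the UUID is the instance from the question):

mysqldump nova > nova-backup.sql
mysql nova -e "UPDATE instances SET deleted = id, deleted_at = NOW(), vm_state = 'deleted', task_state = NULL WHERE uuid = '61228cab-4a53-4dd4-82da-e1f7ddff4270';"
# dependent records, e.g. block device mappings, are marked the same way
mysql nova -e "UPDATE block_device_mapping SET deleted = id, deleted_at = NOW() WHERE instance_uuid = '61228cab-4a53-4dd4-82da-e1f7ddff4270';"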

Bernd Bausch ( 2015-09-29 22:09:48 -0600 )

+1 - just encountered this - I have admin access to nova but not mysql.

Nathan Stewart ( 2015-09-30 13:19:01 -0600 )

2 answers


answered 2018-11-02 11:59:32 -0600

novainfinite

First mark the instance as shut off in the database; after that you can delete it.
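
A minimal sketch of that approach, assuming a MySQL backend and the standard nova schema (power_state 4 corresponds to SHUTDOWN; the UUID is the instance from the question):

mysql nova -e "UPDATE instances SET vm_state = 'stopped', power_state = 4, task_state = NULL WHERE uuid = '61228cab-4a53-4dd4-82da-e1f7ddff4270';"
nova delete 61228cab-4a53-4dd4-82da-e1f7ddff4270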


answered 2016-12-14 09:57:05 -0600

shasha.tavil

Try restarting the nova-api services on all affected compute nodes. This should delete all phantom instances.
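
A sketch of that restart, assuming systemd with RDO-style unit names (Debian and Ubuntu packages name the units nova-api and nova-compute instead):

systemctl restart openstack-nova-api
systemctl restart openstack-nova-compute   # sometimes also suggested on the affected compute node if the phantom persists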

