nova-compute keeps terminating a deleted instance and goes into down state

I installed OpenStack Kilo on Ubuntu 14.04. However, the compute nodes are in the down state, and restarting nova-compute does not help.

I then looked into the nova-compute log and found the following messages appearing every time before the compute node went down:

2016-09-11 20:04:45.626 6275 INFO nova.compute.manager [req-BLABLA] [instance: INSTANCE_ID] Terminating instance

2016-09-11 20:04:45.630 6275 INFO nova.virt.libvirt.driver [-] [instance: INSTANCE_ID] Instance destroyed successfully.

The same messages repeated for the SAME INSTANCE_ID every time I restarted nova-compute (about 30 minutes after restarting nova-compute, the node went down, with no other messages showing up), which means the instance didn't get destroyed as reported. However, virsh list returns nothing, and there are no instances in /var/lib/nova/instances/. It seems that the instance has been deleted from both the nova database and the hypervisor, so why does nova keep terminating an instance that doesn't exist? Where does nova get the instance ID?
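For reference, the checks described above can be reproduced on the compute node roughly as follows. This is only a sketch: the database name, user, and password prompt reflect a common deployment layout and may differ on yours, and INSTANCE_ID is a placeholder for the UUID from the log.

```shell
# List all libvirt domains, including shut-off ones, to confirm
# nothing is left defined on the hypervisor.
virsh list --all

# Confirm there are no leftover instance directories on disk.
ls -l /var/lib/nova/instances/

# Check what the nova database still records for this UUID.
# The database name (nova) and user are assumptions for this sketch.
mysql -u nova -p nova -e \
  "SELECT uuid, vm_state, task_state, deleted FROM instances WHERE uuid='INSTANCE_ID';"
```

If the row still exists with a stuck task_state (e.g. deleting), that could explain why the periodic cleanup keeps picking the same instance up on every restart.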

BTW, I have already set running_deleted_instance_action=reap in nova.conf, but it doesn't seem to work.
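For context, the relevant nova.conf fragment might look like the following; the poll interval shown is the documented default, and is only an illustration:

```ini
[DEFAULT]
# What to do with instances still present on the hypervisor but already
# deleted in the database: noop, log, shutdown, or reap.
running_deleted_instance_action = reap
# How often (in seconds) the periodic task checks for such instances.
running_deleted_instance_poll_interval = 1800
```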

This problem seems to be related to an unresolved bug report: https://bugs.launchpad.net/nova/+bug/1520396