nova-compute does not update the VM status when the compute node is restarted or shut down
I have a two-node setup.
One node acts as the appliance, on which I deployed Grizzly 2013.1.2. I set up the second node as a compute node (KVM host); the nova-compute service runs on that second node.
I booted a VM, and it was created on the compute node. I performed operations such as nova stop, nova suspend, nova pause, and nova start. Everything works fine, and nova list shows all the changes in the VM state. I also checked the messages arriving on the message bus (RabbitMQ in my case). Whenever a VM operation is performed, a message is published on the bus with an event type such as compute.instance.update, compute.instance.delete.start, compute.instance.delete.end, compute.instance.pause.start, compute.instance.pause.end, compute.instance.stop.start, or compute.instance.stop.end.
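For reference, here is a minimal sketch of how one of these notification bodies can be inspected after pulling it off the bus. The sample below is abridged from the messages shown later in this post; the field names follow the standard Nova notification layout, but this is illustration, not Nova code:

```python
import json

# Abridged example of a Nova notification body as seen on RabbitMQ.
raw = """
{
    "event_type": "compute.instance.update",
    "priority": "INFO",
    "publisher_id": "conductor.appliance",
    "payload": {
        "hostname": "testvm",
        "state": "active",
        "state_description": ""
    }
}
"""

def summarize(body):
    """Return (event_type, vm_state) from a Nova notification JSON body."""
    msg = json.loads(body)
    return msg["event_type"], msg["payload"]["state"]

event, state = summarize(raw)
print(event, state)  # compute.instance.update active
```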
When I ran reboot now on the compute node, I got two messages:
{
    ...
    "event_type": "compute.instance.update",
    "message_id": "13b7c3b8-10af-4df5-98a5-08224d3c1534",
    "payload": {
        "host": "local.compute.node",
        "hostname": "testvm",
        ...
        "new_task_state": null,
        "old_state": null,
        "old_task_state": null,
        "os_type": null,
        "state": "active",
        "state_description": "",
        ...
    },
    "priority": "INFO",
    "publisher_id": "conductor.appliance",
    "timestamp": "2013-09-06 08:29:35.019364"
}
{
    ...
    "event_type": "compute.instance.update",
    "message_id": "1daefc80-b6be-485d-a086-f1b47cc39654",
    "payload": {
        "display_name": "testvm",
        "host": "local.compute.node",
        "hostname": "testvm",
        ...
        "new_task_state": "powering-off",
        "old_state": null,
        "old_task_state": null,
        "state": "active",
        "state_description": "powering-off",
        ...
    },
    "priority": "INFO",
    "publisher_id": "api.appliance",
    "timestamp": "2013-09-06 08:29:37.706266"
}
After this, I checked the nova database:
nova=> select id,display_name, vm_state, task_state from instances;
id | display_name | vm_state | task_state
----+--------------+----------+--------------
1 | testvm | active | powering-off
(1 row)
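For scripted monitoring, the same check can be done programmatically. Here is a minimal sketch using an in-memory SQLite table as a stand-in for the nova instances table (the real deployment uses MySQL/PostgreSQL, so the connection code would differ; the schema here is reduced to the columns queried above):

```python
import sqlite3

# In-memory SQLite stand-in for the nova "instances" table, reduced
# to the four columns queried above. Illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE instances (
    id INTEGER PRIMARY KEY,
    display_name TEXT,
    vm_state TEXT,
    task_state TEXT)""")
# The row as it looked after the compute node rebooted mid-operation.
conn.execute(
    "INSERT INTO instances VALUES (1, 'testvm', 'active', 'powering-off')")

row = conn.execute(
    "SELECT id, display_name, vm_state, task_state FROM instances"
).fetchone()
print(row)  # (1, 'testvm', 'active', 'powering-off')
```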
When the compute node booted up, I got some new messages:
{
    "event_type": "compute.instance.power_off.start",
    "payload": {
        ...
        "display_name": "testvm",
        "host": "local.compute.node",
        "hostname": "testvm",
        "state": "active",
        "state_description": "powering-off",
        ...
    },
    "priority": "INFO",
    "publisher_id": "compute.appliance",
    "timestamp": "2013-09-06 08:18:31.128909"
}
{
    "event_type": "compute.instance.update",
    "payload": {
        ...
        "display_name": "testvm",
        "host": "local.compute.node",
        "hostname": "testvm",
        "new_task_state": null,
        "old_state": "active",
        "old_task_state": "powering-off",
        "state": "stopped",
        "state_description": ""
    },
    "priority": "INFO",
    "publisher_id": "conductor.appliance",
    "timestamp": "2013-09-06 08:33:29.613301"
}
{
    "event_type": "compute.instance.power_off.end",
    "payload": {
        "display_name": "testvm",
        "host": "local.compute.node",
        "hostname": "testvm",
        "state": "stopped",
        "state_description": ""
    },
    "priority": "INFO",
    "publisher_id": "local.compute.node",
    "timestamp": "2013-09-06 08:18:31.456541"
}
This set the VM to SHUTOFF, but only once the compute node had booted back up (it should have happened when the compute node went down):
[root@appliance ~]# nova list
+--------------------------------------+--------+---------+---------------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------+---------+---------------------+
| 77deebfa-f659-4359-9117-43723fd35a90 | testvm | SHUTOFF | vlan-2001=17.17.0.3 |
+--------------------------------------+--------+---------+---------------------+
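From what I understand, the database only catches up because nova-compute, once it is running again, compares the state recorded in the database with what the hypervisor actually reports. Below is a simplified sketch of that kind of reconciliation; this is illustrative only, not Nova's actual code (the real logic is a periodic power-state sync task in the compute manager, which only runs while nova-compute itself is up):

```python
# Simplified sketch of power-state reconciliation: compare the vm_state
# recorded in the nova DB with what the hypervisor reports. Function
# and state names are illustrative, not Nova's real identifiers.

def reconcile(db_vm_state, hypervisor_power_state):
    """Return the vm_state the DB should be corrected to, or None."""
    if db_vm_state == "active" and hypervisor_power_state == "shutdown":
        # Instance died while nova-compute was down -> mark it SHUTOFF.
        return "stopped"
    if db_vm_state == "stopped" and hypervisor_power_state == "running":
        # Instance came back without Nova's knowledge -> mark it ACTIVE.
        return "active"
    return None  # DB already agrees with the hypervisor

# While the compute node is down, nothing runs this check, so the DB
# keeps the stale row until the node (and nova-compute) boots again.
print(reconcile("active", "shutdown"))  # stopped
```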
I ran nova start, and after that I tried poweroff now on the compute node.
This time I got only one message:
{
    "event_type": "compute.instance.update",
    "payload": {
        "display_name": "testvm",
        "host": "local.compute.node",
        "hostname": "testvm",
        "new_task_state": null,
        "old_state": null,
        "old_task_state": null,
        "state": "active",
        "state_description": ""
    },
    "priority": "INFO",
    "publisher_id": "conductor.appliance",
    "timestamp": "2013-09-06 08:37:59.007831"
}
And the VM stays ACTIVE on the appliance.
I want to know: is this the expected behavior? Why does the VM not go down when the compute node goes down?
Bumping, as this is a legitimate question; I have encountered the same issue in my cluster.