Ask Your Question

Error instance state after host reboot

asked 2015-04-22 04:45:15 -0500

Bartok

updated 2015-04-27 05:07:15 -0500

I'm having trouble managing instances. I created an instance attached to a volume; it started and ran without problems. After rebooting the host machine, I cannot start the previously created instance, regardless of whether the guest was running or stopped at the time of the reboot. When I press the "Start Instance" button, the instance does not start, and the only thing left to do is terminate and recreate it.


The Horizon console reports the following error (truncated):

    Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T -p --rescan
    Exit code: 21
    Stdout: u''
    Stderr: u'iscsiad].


Thanks to the comments I've found that running virsh before the reboot shows the guest machine, but after the reboot the list command returns no guests.



When your host comes back online are you confirming that the nova-compute service is running before attempting to start the instance? Also, when the compute host is back online you usually will have to run nova reboot <instance> or nova reboot --hard <instance> to start the instances.

raul.flores ( 2015-04-23 10:12:36 -0500 )
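The sequence described in this comment might look like the following (a sketch; the instance name and service commands are placeholders and vary by distro):

```shell
# Confirm nova-compute is running on the host before touching instances
sudo service nova-compute status

# Check that the compute service reports state "up" to the control plane
nova service-list

# Then force a hard reboot of the instance through nova
nova reboot --hard my-instance
```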

Had to reboot my host machine. After reboot I attempted to start my instance but received the error

 Failed to start libvirt guest: libvirtError: Cannot access storage file '/dev/sdb' (as uid:107, gid:107): No such file or directory

Running nova reboot --hard <instance> solved my problem TY!

AaronK ( 2018-02-17 18:52:31 -0500 )

4 answers


answered 2015-04-23 16:36:41 -0500

Have you tried nova reset-state --active $instanceid? Also, do the console logs of the instance or the libvirt logs show anything interesting? If you aren't familiar with those commands or logs, the links below should help. Good luck ;)
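A sketch of the suggested checks (the instance ID and the libvirt log file name are placeholders):

```shell
# Reset the instance state back to ACTIVE so nova will act on it again
nova reset-state --active $instanceid

# Inspect the instance's console log for boot errors
nova console-log $instanceid

# Check the libvirt guest log on the compute host (path and file name vary per install)
sudo tail -n 50 /var/log/libvirt/qemu/instance-00000001.log
```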



I tested this command already, but after a state reset the instance won't boot and goes into ERROR state again. I've searched the logs for errors with no success. I get a message from the Horizon console when starting the machine, reported in the question.

Bartok ( 2015-04-24 06:36:39 -0500 )

After the compute host restarts, do you see the iSCSI connection (I am assuming you are using iSCSI for the volume connection) being re-established by libvirt? If the instance cannot find the backing iSCSI drive, it will error when it tries to start.

jdexter ( 2015-04-24 11:14:48 -0500 )

From the error message you have posted, this looks to be the cause. Does this happen only when the compute host is restarted, or also when the cinder-volumes host is restarted?

jdexter ( 2015-04-24 11:16:34 -0500 )

I have a single-host installation, so this is the same machine. How can I test whether the iSCSI connection is re-established by libvirt? I don't understand why it is using an iSCSI connection at all, given that I use LVM on the same host. It is the same situation as in this question: link text

Bartok ( 2015-04-25 16:46:31 -0500 )
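One way to check whether the iSCSI session to the volume came back after the reboot (a sketch; the guest name is a placeholder). Note that the default Cinder LVM driver exports volumes over iSCSI even when Cinder and Nova run on the same host, which is likely why iSCSI is involved here:

```shell
# List the active iSCSI sessions on the compute host
sudo iscsiadm -m session

# Compare with the block devices libvirt expects for the guest
virsh domblklist instance-00000001
```

If the session list is empty but the guest definition references a /dev/sdX device, the instance will fail to start exactly as shown in the question's error.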

answered 2015-09-07 08:05:15 -0500

mriedem

Hopefully this instance wasn't a 'pet' for you. I agree with others that mention it sounds like making the iscsi connection to the volume is the problem. What version of nova are you running? That's generally helpful when posting questions for help in case there are known issues or bug fixes.

Also, it's generally not a good idea to be doing maintenance on your compute nodes outside of nova, i.e. rebooting the host that nova-compute is running on and expecting everything to be happy once it comes back up. I'd recommend reading through the docs on planned maintenance:

Also, I'd only use virsh for read-only commands like listing guests on that host to see if nova is accurate with respect to the hypervisor. Otherwise if you do things outside of nova-api, like reboot instances, nova might think those instances need to be stopped later when it tries to sync up the nova database and hypervisor power states for the instance.
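The read-only comparison described above might look like this (a sketch):

```shell
# List all guests libvirt knows about on this host, including shut-off ones
virsh list --all

# Compare against the instances and power states nova believes exist
nova list
```

If the two disagree, fix things through nova-api rather than virsh, for the reasons given above.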


answered 2015-04-25 22:23:01 -0500

kamrul

Try this. These commands worked in my case.

For nova-compute, use the following commands:

    cd /usr/bin
    # ./nova-compute

For nova-network:

    cd /usr/bin
    # ./nova-network

For nova-conductor:

    cd /usr/bin
    # ./nova-conductor

For nova-cert:

    cd /usr/bin
    # ./nova-cert
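Launching the binaries by hand like this bypasses the init system, so the processes won't be supervised or restarted on failure. On most distro installs you would instead restart the services, for example (service names vary by distro and OpenStack release):

```shell
sudo service nova-compute restart
sudo service nova-conductor restart
```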


answered 2018-09-04 04:10:04 -0500

Hope this will still help someone: I solved it by opting to use virsh.

First, list your virtual machines:

    virsh list --all

Then resume the paused VM:

    virsh resume INSTANCE-ID



Seen: 31,393 times

Last updated: Sep 07 '15