Unable to start instances after host reboot

asked 2014-12-24 07:52:09 -0500 by Anonymouslemming

updated 2014-12-24 13:47:54 -0500

I have a 2-node environment: one host acts as the controller, network node, and a compute node, and a second host acts as a compute node only. Both hosts run CentOS 7.

At the moment, I have excluded the second node from the setup to simplify troubleshooting.

I am able to create and start instances. I use "Boot from image (creates new volume)" when creating these.

After a reboot, none of the instances that were created before the reboot will start. When I try to start them from the dashboard, I get an exception in the nova-api log and the status remains 'Shutoff'. The full output is at , but what I believe to be the key parts are:

2014-12-24 13:38:37.778 3058 ERROR oslo.messaging.rpc.dispatcher [req-695fe6a8-245b-443e-9d4c-28563c4e31ba ] Exception during message handling: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T -p --rescan
Exit code: 21
Stdout: u''
Stderr: u'iscsiadm: No session found.\n'
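If it helps interpret the traceback: as far as I know, exit code 21 from iscsiadm is ISCSI_ERR_NO_OBJS_FOUND in open-iscsi, i.e. no matching records/sessions were found, which matches the "No session found" stderr. A throwaway helper (the function name is my own) mapping the codes seen in this thread:

```shell
#!/bin/sh
# Sketch: translate the iscsiadm exit codes seen in the traceback above.
# (21 is ISCSI_ERR_NO_OBJS_FOUND in open-iscsi; the helper name is made up.)
explain_iscsiadm_rc() {
  case "$1" in
    0)  echo "success" ;;
    21) echo "no records/sessions found - the iSCSI session did not survive the reboot" ;;
    *)  echo "iscsiadm error, exit code $1" ;;
  esac
}

explain_iscsiadm_rc 21
```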

If I create a new instance with a new volume, that starts.

After doing that, the error I get when trying to start any of the instances that were created before the reboot changes to:

2014-12-24 13:36:00.113 2871 ERROR oslo.messaging.rpc.dispatcher [req-506b75df-4c7a-4fbd-8c94-b0178d0a109b ] Exception during message handling: iSCSI device not found at /dev/disk/by-path/

Both volumes are visible under Admin -> Volumes [screenshot].

I don't see anything in the cinder api log or cinder server log while attempting to restart the instances.

target status is as follows:

service target status -l

Redirecting to /bin/systemctl status  -l target.service
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Wed 2014-12-24 13:37:36 GMT; 6h ago
  Process: 1028 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1028 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Dec 24 13:37:35 systemd[1]: Starting Restore LIO kernel target configuration...
Dec 24 13:37:36 target[1028]: No saved config file at /etc/target/saveconfig.json, ok, exiting
Dec 24 13:37:36 systemd[1]: Started Restore LIO kernel target configuration.
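The last journal line may be the real clue: target.service restores the LIO configuration from /etc/target/saveconfig.json at boot, and it found no saved config, so any targets that existed before the reboot would not come back. A quick check (the helper is just a sketch; the path comes from the log above):

```shell
#!/bin/sh
# Sketch: verify whether LIO has a config file to restore at boot.
# If the file is missing, targets created before the reboot will not be
# restored; "targetcli saveconfig" on the storage node persists the current setup.
check_lio_saveconfig() {
  if [ -f "$1" ]; then
    echo "saved config present: $1"
  else
    echo "no saved config at $1 - try 'targetcli saveconfig' on the storage node"
  fi
}

check_lio_saveconfig /etc/target/saveconfig.json
```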


Please post

service target status

on Storage node

dbaxps ( 2014-12-24 08:03:36 -0500 )

Sorry - that exceeds comment limit again ... Pastebin is at

I'll also amend the question to include that.

Anonymouslemming ( 2014-12-24 13:42:52 -0500 )

I need:

targetcli> ls on the iSCSI target server
service iscsid status on the iSCSI initiator (client)
dbaxps ( 2014-12-24 13:52:23 -0500 )

targetcli ls is at - that's from the storage node

iscsid status is at . As I'm only running the controller / storage / compute node right now, that was run from the controller node, but it's also a compute node.

Anonymouslemming ( 2014-12-24 15:37:59 -0500 )

I asked

service iscsid status -l

not iscsi

dbaxps ( 2014-12-24 22:13:29 -0500 )

1 answer


answered 2014-12-24 08:12:36 -0500 by dbaxps

updated 2014-12-26 12:03:16 -0500

With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software now runs as part of the standard systemd structure. Consequently there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 utilizing LVM-based iSCSI targets.
Regarding tuning the iSCSI target server and the iSCSI initiator (the iscsid daemon) on CentOS 7, view :- What I expect to see on the iSCSI initiator side (roughly) is:

[root@juno2 ~]# service iscsid status
Redirecting to /bin/systemctl status  iscsid.service
iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled)
   Active: active (running) since Thu 2014-12-25 08:18:08 MSK; 33s ago
     Docs: man:iscsid(8)
  Process: 16495 ExecStop=/sbin/iscsiadm -k 0 2 (code=exited, status=0/SUCCESS)
  Process: 16500 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 16502 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─16501 /usr/sbin/iscsid
           └─16502 /usr/sbin/iscsid

Dec 25 08:18:08 juno2.localdomain iscsid[16500]: iSCSI logger with pid=16501 started!
Dec 25 08:18:08 juno2.localdomain systemd[1]: Started Open-iSCSI.
Dec 25 08:18:09 juno2.localdomain iscsid[16501]: iSCSI daemon with pid=16502 started!
Dec 25 08:18:12 juno2.localdomain iscsid[16501]: connection2:0 is operational after recovery (1 attempts)
Hint: Some lines were ellipsized, use -l to show in full.

Cinder part :-

In cinder.conf

Then via CLI

[root@juno1 ~(keystone_admin)]# cinder type-create lvms
[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI

When you create a cinder volume (via the dashboard, for instance), select the type "lvms" from the dropdown menu.
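The cinder.conf excerpt was not captured above; for reference, an LVM iSCSI backend stanza on RDO Juno typically looks something like the following. Only volume_backend_name=LVM_iSCSI is taken from the type-key command above; the [lvm] section name and volume group are illustrative assumptions, not the poster's actual file:

```ini
# /etc/cinder/cinder.conf (illustrative fragment)
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
iscsi_helper = lioadm
volume_backend_name = LVM_iSCSI
```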

Next issue :- Per

On iSCSI Target Server
# enter the admin console
[root@dlp ~]# targetcli
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
. . . .  .
# set ACL (it's the IQN of an initiator you permit to connect)

Created Node ACL for
Created mapped LUN 0.
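The exact command was dropped from the quote above; in targetcli, the ACL step typically looks roughly like this (every IQN below is a placeholder of my own, not one from the thread):

```
/> cd /iscsi/iqn.2014-12.com.example:target00/tpg1/acls
/iscsi/iqn.20...00/tpg1/acls> create iqn.1994-05.com.example:client
Created Node ACL for iqn.1994-05.com.example:client
Created mapped LUN 0.
```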

On iSCSI initiator Client

[root@www ~]# vi /etc/iscsi/initiatorname.iscsi
# change to the same IQN you set on the iSCSI target server

In my case, I kept the default:

   [root@juno2 ~]#  cat /etc/iscsi/initiatorname.iscsi
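For reference, /etc/iscsi/initiatorname.iscsi is a one-line file; the IQN below only illustrates the format and is not the actual value from this setup:

```ini
InitiatorName=iqn.1994-05.com.redhat:5f73ca52316a
```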

It appears (versus your report) in the targetcli> ls output:

[screenshot: targetcli ls output]



I've now spent the better part of the past week working on this, and I can now reproduce it 100% on both a virtual machine and on physical hardware. What seems to be happening is that the iscsi config is not persisted after the reboot. Creating a new machine creates a new config.

Anonymouslemming ( 2015-01-03 16:58:08 -0500 )

The existing config works fine for new machines and keeps working - right up until I reboot. The fact that it works and then stops seems to indicate that something is not right to me.

Anonymouslemming ( 2015-01-03 16:58:40 -0500 )
