Can anyone assist with an issue attaching a volume to an instance when a storage host is used? [closed]

asked 2015-01-22 04:56:06 -0600 by mickt
updated 2015-01-22 08:02:36 -0600

Asking this question in a different way in the hope that someone can assist.

I have Juno installed on CentOS 7 via packstack. The system comprises one controller/neutron node and three compute nodes, one of which provides the storage for cinder.

I can create instances and volumes (the volume shows up in the iSCSI target) etc., but volumes fail to attach to instances.

cinder list

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+   
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e5c4c144-1315-476e-97c5-8f647be0868a | available |     test     |  1   |     lvm     |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

nova list

+--------------------------------------+------+--------+------------+-------------+------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks               |
+--------------------------------------+------+--------+------------+-------------+------------------------+
| d81f77ee-2b7f-4032-bd19-e113e4833220 | test | ACTIVE | -          | Running     | admin-net=172.16.100.4 |
+--------------------------------------+------+--------+------------+-------------+------------------------+

nova volume-attach d81f77ee-2b7f-4032-bd19-e113e4833220 e5c4c144-1315-476e-97c5-8f647be0868a auto

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | e5c4c144-1315-476e-97c5-8f647be0868a |
| serverId | d81f77ee-2b7f-4032-bd19-e113e4833220 |
| volumeId | e5c4c144-1315-476e-97c5-8f647be0868a |
+----------+--------------------------------------+

cinder list

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e5c4c144-1315-476e-97c5-8f647be0868a | available |     test     |  1   |     lvm     |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
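
So the attach call returns a device path, but the volume never moves to in-use. The failure actually shows up in the cinder volume log on the storage host and the nova-compute log on the compute node running the instance; assuming the default packstack log locations, something like:

# on the storage host (default packstack log path)
grep -i error /var/log/cinder/volume.log | tail -n 20

# on the compute node hosting the instance
grep -iE 'iscsi|volume' /var/log/nova/nova-compute.log | tail -n 40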

Storage host volume.log

2015-01-22 13:05:43.164 5125 INFO cinder.brick.iscsi.iscsi [req-b32c0a32-cb77-4ae2-8241-8c7d9bc8ad5e bd01e7fc46884992afd7660e60cbb3d6 26bdb8bbc54f4137998a3931d68fa45b - - -] Creating iscsi_target for volume: volume-02a490ef-12ca-43d4-8a7f-f2fa15ff0edb
2015-01-22 13:05:43.393 5125 ERROR cinder.brick.iscsi.iscsi [req-b32c0a32-cb77-4ae2-8241-8c7d9bc8ad5e bd01e7fc46884992afd7660e60cbb3d6 26bdb8bbc54f4137998a3931d68fa45b - - -] Failed to create iscsi target for volume id:volume-02a490ef-12ca-43d4-8a7f-f2fa15ff0edb.
2015-01-22 13:05:43.394 5125 ERROR oslo.messaging.rpc.dispatcher [req-b32c0a32-cb77-4ae2-8241-8c7d9bc8ad5e bd01e7fc46884992afd7660e60cbb3d6 26bdb8bbc54f4137998a3931d68fa45b - - -] Exception during message handling: Resource could not be found.
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 875, in initialize_connection
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     volume)
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 548, in create_export
2015-01-22 13:05:43.394 5125 TRACE oslo.messaging.rpc.dispatcher     return self._create_export(context, volume)
2015-01-22 13:05:43.394 5125 TRACE oslo ...
(traceback truncated)
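
The failure is coming from create_export in the LVM iSCSI driver while it builds the target. A quick way to check the target state on the storage host, assuming the LIO/targetcli stack that the lioadm helper drives, is something like:

# which target helper is cinder configured to use?
grep -i iscsi_helper /etc/cinder/cinder.conf

# is the LIO target service running, and does it list the volume's backstore/target?
systemctl status target
targetcli ls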

Closed for the following reason: the question is answered, right answer was accepted by mickt (close date 2015-01-28 06:07:11)

Comments

What's wrong with this schema? http://textuploader.com/t3dp

dbaxps (2015-01-22 05:49:51 -0600)

Nothing, and I have tried it; please see the main body above. After doing so, the volume still shows available (and obviously no /dev/vdb in the instance).

mickt (2015-01-22 06:17:54 -0600)

So inside the instance, pvcreate /dev/vdb doesn't work?

dbaxps (2015-01-22 06:22:23 -0600)

Device /dev/vdb not found

mickt (2015-01-22 06:42:10 -0600)

Are there any ERRORs in the cinder logs?

dbaxps (2015-01-22 07:24:54 -0600)

1 answer


answered 2015-01-27 09:49:47 -0600 by mickt
updated 2015-02-02 04:13:38 -0600

I finally figured out what was causing the issue. There were three problems, as follows:

1) I had not assigned a unique name to each iSCSI initiator; each compute node needs its own, e.g.

compute02: cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2014-12.com.test:iscsi.access02

compute03: cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2014-12.com.test:iscsi.access03
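
For reference, a minimal way to give a compute node its own initiator name and have iscsid pick it up (the IQN is just the example above; adjust per node):

# run as root on the compute node; iscsid re-reads the file when restarted
echo "InitiatorName=iqn.2014-12.com.test:iscsi.access03" > /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid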


2) I configured the lvm backend in cinder.conf on the storage host but did not set some parameters in the [DEFAULT] section, which I assumed were covered as part of the backend configuration (all except lio_initiator_iqns, which I simply forgot to configure).

backend conf:

[lvm04]
iscsi_helper=lioadm
iscsi_ip_address=10.10.12.16
iscsi_target_prefix=iqn.2014-12.com.test:
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=cindervg
volume_backend_name=LVM_iSCSI

Missing conf in the [DEFAULT] section:

iscsi_target_prefix=iqn.2014-12.com.test:
iscsi_ip_address=10.10.12.16
volume_backend_name=LVM_iSCSI
iscsi_helper=lioadm
lio_initiator_iqns=iqn.2014-12.com.test:iscsi.access02,iqn.2014-12.com.test:iscsi.access03,iqn.2014-12.com.test:iscsi.access04
iscsi_iotype=blockio
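
After adding these, the cinder services need a restart to pick up the changes; on an RDO/packstack install that would be roughly:

# on the storage host
systemctl restart openstack-cinder-volume

# on the controller, if its cinder.conf changed as well
systemctl restart openstack-cinder-api openstack-cinder-scheduler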


3) Some volumes were still showing as active sessions after deletion, e.g.

iscsiadm -m session

tcp: [15] 10.10.12.16:3260,0 iqn.2014-12.com.test:volume-04aa68d1-8486-45c8-a804-0b8972ea8442 (non-flash)

I still have to figure out why this happens and resolve it, but in the interim I'm deleting /var/lib/iscsi/nodes/<volume_dir> and rebooting.
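
A less drastic interim cleanup than deleting the node directory and rebooting might be to log the stale session out and delete its record with iscsiadm (target name and portal taken from the session listing above):

# log out of the stale session, then remove its node record
iscsiadm -m node -T iqn.2014-12.com.test:volume-04aa68d1-8486-45c8-a804-0b8972ea8442 -p 10.10.12.16:3260 -u
iscsiadm -m node -T iqn.2014-12.com.test:volume-04aa68d1-8486-45c8-a804-0b8972ea8442 -p 10.10.12.16:3260 -o delete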

There also seem to be some iSCSI login negotiation issues after rebooting the storage host.

If anyone has info on this part, please add a note.


Comments

A further observation: it seems fine if I launch an instance with "create volume" (boot from volume; see the sketch after this comment), as opposed to creating a volume afterwards and attaching it to the instance.

mickt (2015-02-03 03:44:50 -0600)
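
For reference, that one-step "launch with create volume" path corresponds roughly to a novaclient call like the following; the image and network IDs are placeholders:

# boot from volume in one step: nova creates a volume from the image and uses it as the root disk
nova boot --flavor m1.small \
  --block-device source=image,id=<image-id>,dest=volume,size=1,bootindex=0,shutdown=remove \
  --nic net-id=<admin-net-id> test-bfv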

I tried with both different and identical iSCSI initiator names and the result is the same. When I try to attach a volume, this appears in nova-compute.log: "iSCSI device not found at /dev/disk/by-path/ip-192.168.70.2:.." and indeed there is nothing in that folder on the storage node, even though..

sacha-m (2015-10-07 23:06:53 -0600)

..if I look with targetcli, I can see the volume there. So the question for me is: what makes the volumes (symlinks) appear in /dev/disk/by-path/? What I actually see in that folder is entries like pci-0000:03:00.0-scsi-0:2:2:0-part1, which is a partition that is part of the volume group where the volume is.

sacha-m (2015-10-07 23:11:16 -0600)
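
The /dev/disk/by-path symlinks for iSCSI LUNs are created by udev on the node that logs in to the target, i.e. the compute host running the instance, not on the storage node itself (which only sees its local PCI/SCSI disks, hence the pci-0000:... entries). A quick check on the compute host, assuming open-iscsi is installed:

# can the compute host discover the target, and is there a live session?
iscsiadm -m discovery -t sendtargets -p 192.168.70.2
iscsiadm -m session

# after a successful login a matching symlink should appear here
ls -l /dev/disk/by-path/ | grep -i iqn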

