
Fail to attach volume and stay in "attaching" status

asked 2012-06-03 16:35:49 -0600

kj-tanaka

Hi, I'm working on an OpenStack Essex installation, and things are mostly working except nova-volume. When I try attaching a volume as /dev/vdc to a running instance, it never finishes "attaching". The nova-compute.log says "ISCSI volume not yet found at: vdc. Will rescan & retry. Try number: 0", but it doesn't seem to retry.

Here are the nova-volume-related entries of nova.conf on the nova-compute nodes.

iscsi_ip_prefix=172.29.200
num_targets=100

iscsi_helper=tgtadm

Here are the entries on the volume node.

iscsi_ip_prefix=172.29.200.
iscsi_ip_address=172.29.200.110
num_targets=100

iscsi_helper=tgtadm
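(As far as I understand, iscsi_ip_prefix is compared as a plain string prefix against the target's IP, so the trailing dot on the volume node shouldn't matter for an address like 172.29.200.110. A minimal illustration of that assumption; portal_matches_prefix is a made-up name, not nova code:)

```python
def portal_matches_prefix(portal_ip, prefix):
    """Hypothetical sketch: iscsi_ip_prefix as a simple string prefix.

    Both '172.29.200' and '172.29.200.' would match 172.29.200.110.
    """
    return portal_ip.startswith(prefix)
```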

When I check "iscsiadm -m session" and "fdisk -l" on the nova-compute node that is running the instance, the volume exists as /dev/sdb. That /dev/sdb should be attached to the instance as /dev/vdc, but somehow it isn't. The outputs are below.

root@novacompute01:~# iscsiadm -m session
tcp: [3] 172.29.200.110:3260,1 iqn.2010-10.org.openstack:volume-00000007

root@novacompute01:~# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00053a18

Device Boot Start End Blocks Id System
/dev/sda1 63 4209029 2104483+ 82 Linux swap / Solaris
/dev/sda2 * 4209030 109081349 52436160 83 Linux
/dev/sda3 109081350 318809924 104864287+ 83 Linux

Disk /dev/sdb: 4294 MB, 4294967296 bytes
133 heads, 62 sectors/track, 1017 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
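(For context on the "Will rescan & retry" message: the compute node is supposed to poll for the device node and ask the initiator to rescan between attempts. This is a simplified, hypothetical sketch of that retry loop, not the actual nova code; wait_for_iscsi_device and its parameters are illustrative names, and the probe is injectable so the loop can be exercised without real hardware:)

```python
import os
import time


def wait_for_iscsi_device(dev_path, rescan, tries=5, delay=1.0,
                          exists=os.path.exists):
    """Poll for a block device node, rescanning between attempts.

    `rescan` is a callable that asks the initiator to re-probe the target
    (e.g. by invoking `iscsiadm -m session --rescan`); `exists` checks
    whether the device node has appeared yet.
    """
    for attempt in range(tries):
        if exists(dev_path):
            return dev_path
        # Mirrors the message seen in nova-compute.log.
        print("ISCSI volume not yet found at: %s. Will rescan & retry. "
              "Try number: %d" % (dev_path, attempt))
        rescan()
        time.sleep(delay)
    raise RuntimeError("iSCSI device %s never appeared" % dev_path)
```

In my case the log shows "Try number: 0" once and then nothing, which is why it looks like the retry never happens.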

Does anyone know how to fix it?

Thanks for your help!


6 answers


answered 2012-06-06 20:42:14 -0600

kj-tanaka

Here's a report of the same issue and the way to resolve it. Thanks, Hendrik.

https://bugs.launchpad.net/nova/+bug/996840/comments/4

Koji


answered 2012-06-05 19:59:55 -0600

kj-tanaka

These are the last several lines of my log in verbose mode. It seems to be hanging on an iscsiadm command in volume.py.

2012-06-05 15:49:00 DEBUG nova.virt.libvirt.connection [req-effecfa6-9862-4bf5-ab29-3e7b8f100be3 None None] Updating host stats from (pid=31923) update_status /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467

2012-06-05 15:49:04 DEBUG nova.manager [req-effecfa6-9862-4bf5-ab29-3e7b8f100be3 None None] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=31923) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152

2012-06-05 15:49:04 DEBUG nova.virt.libvirt.volume [req-81e9b836-48dd-4465-887d-455d521d5a5a 22783796c597492dbb05f149aff16238 f07ca073f69648708d0480da9d761a42] iscsiadm ('--login',): stdout=Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-00000002, portal: 172.29.200.121,3260] Login to [iface: default, target: iqn.2010-10.org.openstack:volume-00000002, portal: 172.29.200.121,3260]: successful stderr= from (pid=31923) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:112

2012-06-05 15:49:04 DEBUG nova.utils [req-81e9b836-48dd-4465-887d-455d521d5a5a 22783796c597492dbb05f149aff16238 f07ca073f69648708d0480da9d761a42] Running cmd (subprocess): sudo nova-rootwrap iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 172.29.200.121:3260 --op update -n node.startup -v automatic from (pid=31923) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219

2012-06-05 15:49:04 DEBUG nova.virt.libvirt.volume [req-81e9b836-48dd-4465-887d-455d521d5a5a 22783796c597492dbb05f149aff16238 f07ca073f69648708d0480da9d761a42] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'automatic'): stdout= stderr= from (pid=31923) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:112

So it seems the problem is related to the iscsiadm command in this code, but I can't resolve it yet. Help!

class LibvirtISCSIVolumeDriver(LibvirtVolumeDriver):
    """Driver to attach Network volumes to libvirt."""

    def _run_iscsiadm(self, iscsi_properties, iscsi_command, **kwargs):
        check_exit_code = kwargs.pop('check_exit_code', 0)
        (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T',
                                   iscsi_properties['target_iqn'],
                                   '-p', iscsi_properties['target_portal'],
                                   *iscsi_command, run_as_root=True,
                                   check_exit_code=check_exit_code)
        LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
                  (iscsi_command, out, err))
        return (out, err)
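(To make the command shape concrete: the method above always prepends the same node/target/portal arguments before the per-call iscsiadm options. A small standalone sketch of how that argv is assembled; build_iscsiadm_argv is an illustrative helper, not part of nova, and the real driver additionally runs the command through nova-rootwrap because of run_as_root=True:)

```python
def build_iscsiadm_argv(iscsi_properties, iscsi_command):
    """Assemble the argv that _run_iscsiadm hands to utils.execute."""
    return (['iscsiadm', '-m', 'node',
             '-T', iscsi_properties['target_iqn'],
             '-p', iscsi_properties['target_portal']]
            + list(iscsi_command))
```

For example, with target_iqn iqn.2010-10.org.openstack:volume-00000002, target_portal 172.29.200.121:3260 and iscsi_command ('--login',), this produces exactly the iscsiadm command visible in the debug log above.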

answered 2012-06-05 13:15:13 -0600

kj-tanaka

Oh, this is syslog, not libvirt's log.


answered 2012-06-05 13:12:00 -0600

kj-tanaka

Today, I noticed some people are having the same problem. Here's my libvirt's log.

Jun 5 08:56:31 i121 kernel: [ 278.502378] scsi4 : iSCSI Initiator over TCP/IP
Jun 5 08:56:31 i121 kernel: [ 278.758605] scsi 4:0:0:0: RAID IET Controller 0001 PQ: 0 ANSI: 5
Jun 5 08:56:31 i121 kernel: [ 278.758915] scsi 4:0:0:0: Attached scsi generic sg1 type 12
Jun 5 08:56:31 i121 kernel: [ 278.760219] scsi 4:0:0:1: Direct-Access IET VIRTUAL-DISK 0001 PQ: 0 ANSI: 5
Jun 5 08:56:31 i121 kernel: [ 278.760529] sd 4:0:0:1: Attached scsi generic sg2 type 0
Jun 5 08:56:31 i121 kernel: [ 278.761506] sd 4:0:0:1: [sdb] 2097152 512-byte logical blocks: (1.07 GB/1.00 GiB)
Jun 5 08:56:31 i121 kernel: [ 278.762224] sd 4:0:0:1: [sdb] Write Protect is off
Jun 5 08:56:31 i121 kernel: [ 278.762229] sd 4:0:0:1: [sdb] Mode Sense: 49 00 00 08
Jun 5 08:56:31 i121 kernel: [ 278.762749] sd 4:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 5 08:56:31 i121 kernel: [ 278.774284] sdb: unknown partition table
Jun 5 08:56:31 i121 kernel: [ 278.775839] sd 4:0:0:1: [sdb] Attached SCSI disk
Jun 5 08:56:31 i121 iscsid: connection1:0 is operational now


answered 2012-06-04 14:33:34 -0600

kj-tanaka

I tried nova.scheduler.simple.SimpleScheduler, but no luck. The status doesn't change from "attaching". If I attach a volume when first launching an instance via the Dashboard, the volume is attached without any issue. The problem happens only when I try to attach a volume to an already running instance. And there isn't any useful error message in nova-compute.log, nova-scheduler.log, or nova-volume.log.


answered 2012-06-04 01:56:19 -0600

kj-tanaka

In nova-scheduler.log, there is this warning message.

2012-06-03 21:38:40 WARNING nova.scheduler.manager [req-xxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx] Driver Method schedule_create_volume missing: 'ChanceScheduler' object has no attribute 'schedule_create_volume'. Reverting to schedule()
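(My reading of that warning: the scheduler manager looks up a volume-specific method on the configured driver and, when the driver doesn't implement it, logs the warning and falls back to the generic schedule(). A hypothetical standalone sketch of that fallback, using made-up names rather than the actual nova.scheduler.manager code:)

```python
class ChanceSchedulerStub:
    """Minimal stand-in for a driver that has a generic schedule()
    but no schedule_create_volume(), like the warning describes."""

    def schedule(self, *args, **kwargs):
        return 'scheduled-by-generic-schedule'


def dispatch(driver, method, *args, **kwargs):
    # Look up the driver-specific method; on AttributeError, warn and
    # revert to the generic schedule(), mirroring the log message.
    try:
        fn = getattr(driver, method)
    except AttributeError as e:
        print("Driver Method %s missing: %s. Reverting to schedule()"
              % (method, e))
        fn = driver.schedule
    return fn(*args, **kwargs)
```

So the warning itself may be harmless (the call still gets scheduled), which would match there being no error in the other logs.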

Thanks.


Stats

Asked: 2012-06-03 16:35:49 -0600

Seen: 217 times

Last updated: Jun 06 '12