Volume says it is attached, but doesn't show up in guest

asked 2011-07-22 17:58:44 -0600 by jslenk

Cactus nova. I can create volumes, and euca-attach-volume appears to work: it reports that the volume is in-use, etc. But when I log into the guest, there's no /dev/vdz, and sudo fdisk -l doesn't show it either. Suggestions for what to look for in the logs would be appreciated :-) I've been trawling them but don't see anything obviously wrong yet. Rebooting the guest didn't help. Thanks...

// the first 3 below are the bad ones from my last question, dunno how to get rid of them.
root@cc-vm-3:~# euca-describe-volumes
VOLUME  vol-00000001  1  nova  creating   (default-project, None, None, None)                        2011-07-21T21:03:00Z
VOLUME  vol-00000002  1  nova  creating   (default-project, None, None, None)                        2011-07-21T21:54:04Z
VOLUME  vol-00000003  1  nova  creating   (default-project, None, None, None)                        2011-07-21T23:56:58Z
VOLUME  vol-00000004  1  nova  available  (default-project, cc-vm-3, None, None)                     2011-07-22T16:53:51Z
VOLUME  vol-00000005  1  nova  in-use     (default-project, cc-vm-3, i-00000016[cnode-1-3-4], /dev/vdz)  2011-07-22T17:32:30Z
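For reference, these are roughly the checks I ran inside the guest to look for the new disk (only fdisk output was mentioned above, so treat this as a sketch of the approach rather than captured output):

    # inside the guest, after euca-attach-volume reports the volume as in-use
    ls -l /dev/vd*          # any new virtio block device?
    cat /proc/partitions    # the kernel's view of all block devices
    dmesg | tail -n 20      # look for a newly announced vdX device
    sudo fdisk -l           # partition tables on every disk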


7 answers

answered 2011-07-22 21:24:53 -0600 by jslenk

(p.p.s. this is with kvm if that matters.)

answered 2011-07-22 22:12:51 -0600 by jslenk

Huh. I'm pretty surprised and confused! Sorry if I've totally misunderstood things.

The man page for euca-attach-volume says "-d local device name (inside the guest VM) to use." Guest, not host. The docs at http://docs.openstack.org/bexar/openstack-compute/admin/content/ch05s07.html make it look to me like it should be the same device name in the guest. If the guest really does see a device name that is, from the admin's perspective, effectively random, it would be great to call that out in the docs. :-} (To reiterate, we're on Cactus, in case that behaviour has changed since then.)

So I just retried: the controller could send out the attach and things showed up in the host logs, but the host does not show any new /dev/vdq (the name I gave it this time). There are no /dev/vd* devices at all. Looking at syslog, I think it showed up as /dev/sdd or maybe /dev/sdf on the host, if anywhere. BTW, the volume group was never formatted or given a filesystem; is that something that can confuse iscsi/nova-volume?

At any rate, I'm now even more befuddled about the expected names of things.

(Looking at the Cactus code, it appears to me (though obviously I don't know what I'm doing) that attach_volume() in compute/api.py sends the device name supplied to euca-attach-volume in the RPC cast, and attach_volume() in virt/libvirt_conn.py receives that same value, e.g. "/dev/vdq", which matches what I see in the host's nova-compute.log. But I've never seen it show up under that name on either the host or the guest.)
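For what it's worth, this is roughly how I've been tracing the mountpoint value on the compute host (log path per this install; adjust as needed):

    # on the compute host: does the RPC cast carry the device name I asked for?
    grep "mountpoint" /var/log/nova/nova-compute.log | tail -n 5
    grep "attaching volume" /var/log/nova/nova-compute.log | tail -n 5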

answered 2011-07-22 22:36:43 -0600 by vishvananda

Sorry for the confusion.

Unfortunately this is a KVM limitation. You can specify a device name to the KVM hypervisor, but the actual attach to the guest happens over a virtual PCI bus. When the guest sees a new device on the PCI bus, it picks the next available name (which in most cases is /dev/vdc) and the disk shows up there. If you then detach that disk and attach another one, it will show up as /dev/vdd, and so on. The device name is required on the hypervisor side so the disk can be detached later, but the name the hypervisor reports is completely different from the device name in the guest.
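You can see the PCI-bus side of this from inside the guest; something like the following (the exact path names are illustrative and will vary):

    # in the guest: virtio disks are enumerated by their PCI slot, not by the -d name
    ls -l /sys/block/           # vda, vdc, ... point under /sys/devices/pci0000:00/...
    ls -l /dev/disk/by-path/    # e.g. a pci-0000:00:05.0 style entry -> ../../vdc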

Remember that euca is designed to work with AWS, which uses Xen. Xen will more reliably attach a device to a specific name, so in that case the names are consistent. I've considered a possible workaround in the guest: a udev rule that queries the metadata service to find the requested attach point and makes a symlink to the actual guest device, but this has not been created yet.
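To be clear that this is only an idea, here is a rough, untested sketch. The rule and helper below are hypothetical; they assume the guest can reach the EC2-style metadata service, that it exposes block-device-mapping entries, and they only handle the simple one-volume-at-a-time case.

    # /etc/udev/rules.d/99-volume-symlink.rules  (hypothetical)
    KERNEL=="vd[b-z]", ACTION=="add", RUN+="/usr/local/sbin/volume-symlink.sh %k"

    #!/bin/sh
    # /usr/local/sbin/volume-symlink.sh  (hypothetical helper)
    # $1 is the kernel name the guest actually picked, e.g. "vdc".
    ACTUAL="$1"
    MD="http://169.254.169.254/latest/meta-data/block-device-mapping"
    # Take the last non-root mapping as the most recent attach. Matching up
    # multiple simultaneously attached volumes is the hard part and is glossed
    # over here.
    KEY=$(curl -s "$MD/" | grep -v -e ami -e root -e ephemeral -e swap | tail -n 1)
    REQUESTED=$(curl -s "$MD/$KEY")    # e.g. "vdz", the name that was passed to -d
    REQUESTED="${REQUESTED#/dev/}"
    [ -n "$REQUESTED" ] && ln -sf "/dev/$ACTUAL" "/dev/$REQUESTED"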

Perhaps this issue needs to be clarified in the docs.

Vish

answered 2011-07-22 18:26:11 -0600 by jslenk

Possibly relevant log extract from the compute host:

2011-07-22 17:34:35,396 DEBUG nova.rpc [-] received {u'_context_request_id': u'MFXGNIG4RLVB99USLQ2-', u'_context_read_deleted': False, u'args': {u'instance_id': 22, u'mountpoint': u'/dev/vdz', u'volume_id': 5}, u'_context_is_admin': True, u'_context_timestamp': u'2011-07-22T17:34:39Z', u'_context_user': u'admin', u'method': u'attach_volume', u'_context_project': u'default-project', u'_context_remote_address': u'10.4.0.24'} from (pid=27378) _receive /usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-07-22 17:34:35,397 DEBUG nova.rpc [-] unpacked context: {'timestamp': u'2011-07-22T17:34:39Z', 'remote_address': u'10.4.0.24', 'project': u'default-project', 'is_admin': True, 'user': u'admin', 'request_id': u'MFXGNIG4RLVB99USLQ2-', 'read_deleted': False} from (pid=27378) _unpack_context /usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-07-22 17:34:35,397 INFO nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] check_instance_lock: decorating: |<function attach_volume at 0x2fbcde8>|
2011-07-22 17:34:35,398 INFO nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x2eba510>| |<nova.context.RequestContext object at 0x43ca950>| |22|
2011-07-22 17:34:35,398 DEBUG nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] instance 22: getting locked state from (pid=27378) get_lock /usr/lib/pymodules/python2.6/nova/compute/manager.py:701
2011-07-22 17:34:35,455 INFO nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] check_instance_lock: locked: |False|
2011-07-22 17:34:35,455 INFO nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] check_instance_lock: admin: |True|
2011-07-22 17:34:35,455 INFO nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] check_instance_lock: executing: |<function attach_volume at 0x2fbcde8>|
2011-07-22 17:34:35,498 AUDIT nova.compute.manager [MFXGNIG4RLVB99USLQ2- admin default-project] instance 22: attaching volume 5 to /dev/vdz
2011-07-22 17:34:35,509 WARNING nova.volume.driver [-] ISCSI provider_location not stored, using discovery
2011-07-22 17:34:35,510 DEBUG nova.utils [-] Running cmd (subprocess): sudo iscsiadm -m discovery -t sendtargets -p cc-vm-3 from (pid=27378) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-22 17:34:35,544 DEBUG nova.volume.driver [-] ISCSI Discovery: Found 10.4.0.24:3260,1 iqn.2010-10.org.openstack:volume-00000005 from (pid=27378) _get_iscsi_properties /usr/lib/pymodules/python2.6/nova/volume/driver.py:409
2011-07-22 17:34:35,544 DEBUG nova.utils [-] Running cmd (subprocess): sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000005 -p 10.4.0.24:3260 --login from (pid=27378) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-22 17:34:36,068 DEBUG nova.volume.driver [-] iscsiadm --login: stdout=Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-00000005, portal: 10.4.0.24,3260] Login to [iface: default, target: iqn.2010-10.org.openstack:volume-00000005, portal: 10.4.0.24,3260]: successful stderr= from (pid=27378) _run_iscsiadm /usr/lib/pymodules/python2.6/nova/volume/driver.py:436
2011-07-22 17:34:36,069 DEBUG nova.utils [-] Running cmd (subprocess): sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000005 -p 10.4.0.24:3260 ('--op', 'update', '-n', 'node.startup', '-v', 'automatic') from (pid=27378) execute /usr/lib/pymodules/python2 ...
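To see which host-side /dev/sdX that iSCSI login actually produced, something like the following on the compute host should show the mapping (the by-path entry in the comment is illustrative, not captured output):

    # on the compute host, after the iscsiadm --login above
    ls -l /dev/disk/by-path/ | grep iscsi
    # expect an entry mentioning iqn.2010-10.org.openstack:volume-00000005
    # pointing at something like ../../sdd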

answered 2011-07-22 18:28:00 -0600 by jslenk

root@cnode-1-3-4:/var/log/nova# iscsiadm -m node
10.4.0.24:3260,1 iqn.2010-10.org.openstack:volume-00000005
10.4.0.24:3260,1 iqn.2010-10.org.openstack:volume-00000004

So the targets are visible to the compute host, it seems; I'm guessing there's something about getting them mapped up and into the guest that isn't working and that I don't yet understand.
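For what it's worth, open-iscsi can also report which SCSI disk each of those sessions produced on the host; I haven't pasted that output here, but the command would be something like:

    sudo iscsiadm -m session -P 3 | grep -E "Target:|Attached scsi disk"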

answered 2011-07-22 19:05:37 -0600 by jslenk

(p.s. I experimented with two volumes. Neither showed up. The first one I could detach. The second one throws errors when detaching and never shows up as detached. Furthermore, it appears to prevent the instance from being correctly terminated, so the instance won't fully go away. The error is raised at /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py, line 454, in detach_volume: 'NotFound: No disk at vdz'. I don't know why I didn't get that with the first volume, because I tried attaching it at /dev/vdc and /dev/vdx and it never showed up either, yet it did let me detach it.)

answered 2011-07-22 21:42:18 -0600 by vishvananda

FYI, the volume will not be at /dev/vdz. That is the attach point from the hypervisor's view, but the guest will pick its own name; it will show up in the next available slot (usually /dev/vdc).
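If you want to see both sides of the mismatch, compare the libvirt definition on the compute host with what the guest reports. The domain name below is guessed from the i-00000016 in your describe-volumes output and may differ on your setup:

    # on the compute host: the target dev that nova asked libvirt for (e.g. vdz)
    virsh dumpxml instance-00000016 | grep -A 4 "<disk"
    # in the guest: the name the kernel actually assigned (e.g. vdc)
    cat /proc/partitions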

