
nova volume-detach fails silently, compute log shows libvirtError: invalid argument: no target device vdb

asked 2015-03-04 10:49:21 -0500

rmartin

updated 2015-03-05 12:35:51 -0500

I'm trying to detach a cinder volume from a nova instance.

Nova volume-list shows it as in-use:

| ID                                   | Status | Display Name | Size | Volume Type | Attached to                          |
| 34b0ea26-f85c-4b62-8ebd-884b0e63e2d5 | in-use | filestore    | 256  | None        | 4d05ffe4-d30a-4c93-b710-c9ec80dad1c2 |

The volume is connected via iscsi:

# iscsiadm -m session
tcp: [5] 10.3.40.10:3260,1 iqn.2010-10.org.openstack:volume-34b0ea26-f85c-4b62-8ebd-884b0e63e2d5 (non-flash)

It's visible as /dev/vdb within the instance, and is mountable and read/writable.

However, after unmounting it in the instance and issuing

# nova volume-detach 4d05ffe4-d30a-4c93-b710-c9ec80dad1c2 34b0ea26-f85c-4b62-8ebd-884b0e63e2d5

it does not detach. The volume stays labeled "in-use". I get this error in the compute.log:

libvirtError: invalid argument: no target device vdb

When I run "virsh edit 4" on the compute node, the disk device for vdb is in fact missing. However, when I run "virsh dumpxml 4" it is still there!
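For reference, one way to capture that discrepancy is to diff the persistent definition (what "virsh edit" shows, dumped with --inactive) against the live one, assuming domain id 4 as above:

# virsh dumpxml --inactive 4 > persistent.xml
# virsh dumpxml 4 > live.xml
# diff -u persistent.xml live.xml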

How can I untangle this?

Virsh dumpxml vs. edit diff: http://pastebin.com/3CrGbcwu

Compute.log of the error: http://pastebin.com/1UCahB3d


Comments

Here's the error from the compute node's compute.log:

2015-03-03 19:19:33.125 12332 AUDIT nova.compute.manager [req-c9695029-1529-49d4-be65-ec2a8fe0db49 918670168a67461bb0bc5fbccba26fd5 012c82ce05cd4739994cf7e103300c2f] [instance: 4d05ffe4-d30a-4c93-b710-c9ec80dad1c2] Detach volume 34b0ea26-f85c-4

rmartin (2015-03-04 10:49:30 -0500)

1 answer


answered 2015-03-05 17:37:00 -0500

rmartin

I was able to get the libvirt configuration resynchronized by copying the <disk> XML for vdb from the dumpxml output into its own file, then adding it back to the persistent configuration with

virsh attach-device 4 filestore.xml --config
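For reference, filestore.xml contained just the <disk> element copied from the dumpxml output. It looked roughly like this; the source path below is illustrative rather than exact, so copy yours verbatim from your own dump:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/ip-10.3.40.10:3260-iscsi-iqn.2010-10.org.openstack:volume-34b0ea26-f85c-4b62-8ebd-884b0e63e2d5-lun-1'/>
  <target dev='vdb' bus='virtio'/>
  <serial>34b0ea26-f85c-4b62-8ebd-884b0e63e2d5</serial>
</disk>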

After that, I reran nova volume-detach, and it succeeded in cleaning up the compute node; the iSCSI session is now idle. However, the volume was then stuck in "detaching" status, with no errors anywhere and no indication of why it didn't go back to "available". tgt-admin allowed me to take the target offline and back to ready, and then I got the volume back to "available" status with cinder reset-state.
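For anyone hitting the same thing, the cleanup looked roughly like this. The tgt-admin argument is the target from the iscsiadm output above; depending on your tgt version it may want the tid rather than the IQN, so check with tgt-admin --show first:

# tgt-admin --offline iqn.2010-10.org.openstack:volume-34b0ea26-f85c-4b62-8ebd-884b0e63e2d5
# tgt-admin --ready iqn.2010-10.org.openstack:volume-34b0ea26-f85c-4b62-8ebd-884b0e63e2d5
# cinder reset-state --state available 34b0ea26-f85c-4b62-8ebd-884b0e63e2d5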

