
ERROR attaching volume to VM on compute node

asked 2013-06-26 13:39:03 -0500

diul

Hi guys, I need your help. I have an OpenStack installation on three nodes. The Controller Node runs all services, with Quantum for networking and Cinder for storage. The other two nodes run the nova-compute service only.

The problem is that I can't attach a volume (located on the Controller Node) to an instance running on a Compute Node. I get this error in the compute node's nova-compute log:

    2013-06-26 15:00:22 6280 ERROR nova.openstack.common.rpc.amqp [-] Exception during message handling
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     return f(*args, **kw)
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 175, in decorated_function
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     pass
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 161, in decorated_function
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 196, in decorated_function
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     kwargs['instance']['uuid'], e, sys.exc_info())
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
    2013-06-26 15:00:22 6280 TRACE nova.openstack ... (more)
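(The attach itself would have been triggered by something like the command below; the exact invocation and instance ID aren't shown in the question, so this is an assumption.)

    # hypothetical reproduction of the failing attach
    nova volume-attach <instance-uuid> 27e29461-0b46-4b64-941d-a8a27f4f91d2 /dev/vdb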


11 answers


answered 2013-06-27 13:56:33 -0500

Oh... sorry, I misread the log.

This is just because you forgot to configure nova-compute to use Cinder instead of nova-volume.

You should add volume_api_class=nova.volume.cinder.API to your nova.conf.

I'd bet you are using the Folsom release, right? :)
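A minimal sketch of the change (the service name is an assumption for a RHEL-style install, which the /usr/lib/python2.6 and qpid paths in the trace suggest):

    # in the [DEFAULT] section of /etc/nova/nova.conf on each compute node
    volume_api_class=nova.volume.cinder.API

    # then restart the compute service
    service openstack-nova-compute restart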


answered 2013-06-27 12:22:07 -0500

From the first log:

    2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp VolumeNotFound: Volume 27e29461-0b46-4b64-941d-a8a27f4f91d2 could not be found.

The volume you are trying to attach is 27e29461-0b46-4b64-941d-a8a27f4f91d2, but I cannot find it in your cinder list output.

Have you deleted it manually?
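To double-check, Cinder can be queried for that exact ID:

    cinder list
    cinder show 27e29461-0b46-4b64-941d-a8a27f4f91d2   # errors out if the volume is unknown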


answered 2013-06-27 16:13:38 -0500

diul

Yes, it's Folsom.

I've added volume_api_class=nova.volume.cinder.API (and nothing else) to nova.conf on the Compute Node, but I got this error:

    2013-06-27 16:21:36 6280 INFO nova.service [-] Caught SIGTERM, exiting
    2013-06-27 16:21:36 6280 CRITICAL nova [-] need more than 0 values to unpack
    2013-06-27 16:21:36 6280 TRACE nova Traceback (most recent call last):
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/bin/nova-compute", line 48, in <module>
    2013-06-27 16:21:36 6280 TRACE nova     service.wait()
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 659, in wait
    2013-06-27 16:21:36 6280 TRACE nova     _launcher.wait()
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 202, in wait
    2013-06-27 16:21:36 6280 TRACE nova     rpc.cleanup()
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 203, in cleanup
    2013-06-27 16:21:36 6280 TRACE nova     return _get_impl().cleanup()
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 581, in cleanup
    2013-06-27 16:21:36 6280 TRACE nova     return rpc_amqp.cleanup(Connection.pool)
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 419, in cleanup
    2013-06-27 16:21:36 6280 TRACE nova     connection_pool.empty()
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 63, in empty
    2013-06-27 16:21:36 6280 TRACE nova     self.get().close()
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 368, in close
    2013-06-27 16:21:36 6280 TRACE nova     self.connection.close()
    2013-06-27 16:21:36 6280 TRACE nova   File "<string>", line 6, in close
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 316, in close
    2013-06-27 16:21:36 6280 TRACE nova     ssn.close(timeout=timeout)
    2013-06-27 16:21:36 6280 TRACE nova   File "<string>", line 6, in close
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 749, in close
    2013-06-27 16:21:36 6280 TRACE nova     if not self._ewait(lambda: self.closed, timeout=timeout):
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait
    2013-06-27 16:21:36 6280 TRACE nova     result = self.connection._ewait(lambda: self.error or predicate(), timeout)
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 208, in _ewait
    2013-06-27 16:21:36 6280 TRACE nova     result = self._wait(lambda: self.error or predicate(), timeout)
    2013-06-27 16:21:36 6280 TRACE nova   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line ... (more)
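For anyone hitting the same trace, two quick sanity checks (the log path is an assumption based on a logdir of /var/log/nova):

    grep -n volume_api_class /etc/nova/nova.conf   # the line must sit in the [DEFAULT] section
    tail -n 100 /var/log/nova/compute.log          # find the first error after the restart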


answered 2013-06-27 16:59:39 -0500

diul

It seems solved, by editing nova.conf on the Compute Node as you suggested and by installing python-cinderclient on the Compute Node as well ;) Thanks a lot, xiaoxi_chen.
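For later readers, the whole fix condenses to the following (package and service names are assumptions for a RHEL-style Folsom install):

    yum install python-cinderclient
    # add to the [DEFAULT] section of /etc/nova/nova.conf:
    #   volume_api_class=nova.volume.cinder.API
    service openstack-nova-compute restart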


answered 2013-06-28 02:18:01 -0500

I also hit this issue. Thanks.

But diul, it's basically not good manners to set the "resolved by" to yourself unless you resolved it without any external help.


answered 2013-06-27 11:42:59 -0500

diul

The volume exists, and I am able to attach it to a VM running on the Controller Node.

1. cinder list:

       +--------------------------------------+----------------+--------------+------+-------------+-------------+
       |                  ID                  |     Status     | Display Name | Size | Volume Type | Attached to |
       +--------------------------------------+----------------+--------------+------+-------------+-------------+
       | 4c820b34-ddc7-4719-bc8a-dc0277324451 |   available    |     HDD3     |  3   |     None    |             |
       | b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8 |   available    |     HDD4     |  1   |     None    |             |
       | ddd50017-f2c8-4e59-8369-281b9177280d | error_deleting |     Pro      |  1   |     None    |             |
       +--------------------------------------+----------------+--------------+------+-------------+-------------+

2. lvdisplay:

       --- Logical volume ---
       LV Path                /dev/cinder-volumes/volume-ddd50017-f2c8-4e59-8369-281b9177280d
       LV Name                volume-ddd50017-f2c8-4e59-8369-281b9177280d
       VG Name                cinder-volumes
       LV UUID                8Mdy5e-isXi-xT75-T6bh-XZOf-pjJc-ltL26l
       LV Write Access        read/write
       LV Creation host, time se08.scope.unina.it, 2013-06-24 17:44:12 +0200
       LV Status              available
       # open                 1
       LV Size                1,00 GiB
       Current LE             256
       Segments               1
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     256
       Block device           253:3

       --- Logical volume ---
       LV Path                /dev/cinder-volumes/volume-4c820b34-ddc7-4719-bc8a-dc0277324451
       LV Name                volume-4c820b34-ddc7-4719-bc8a-dc0277324451
       VG Name                cinder-volumes
       LV UUID                hwgLlu-m5h9-mYlw-QDXK-2wDF-JbhI-sKOXh7
       LV Write Access        read/write
       LV Creation host, time se08.scope.unina.it, 2013-06-26 10:50:05 +0200
       LV Status              available
       # open                 1
       LV Size                3,00 GiB
       Current LE             768
       Segments               1
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     256
       Block device           253:5

       --- Logical volume ---
       LV Path                /dev/cinder-volumes/volume-b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8
       LV Name                volume-b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8
       VG Name                cinder-volumes
       LV UUID                Q6pirW-xWwb-znt6-g4eZ-qYHv-4EtF-BkrGFi
       LV Write Access        read/write
       LV Creation host, time se08.scope.unina.it, 2013-06-26 13:53:19 +0200
       LV Status              available
       # open                 1
       LV Size                1,00 GiB
       Current LE             256
       Segments               1
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     256
       Block device           253:7

       --- Logical volume ---
       LV Path                /dev/vg_opscontroller/lv_root
       LV Name                lv_root
       VG Name                vg_opscontroller
       LV UUID                qtM9O5-KNi0-7MUQ-8HMs-nmEd-ZZdE-JFq3Oi
       LV Write Access        read/write
       LV Creation host, time ops-controller, 2013-04-18 15:02:54 +0200
       LV Status              available
       # open                 1
       LV Size                50,00 GiB
       Current LE             12800
       Segments               1
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     256
       Block device           253:0

       --- Logical volume ---
       LV Path                /dev/vg_opscontroller/lv_home
       LV Name                lv_home
       VG Name                vg_opscontroller
       LV UUID                PCBaOn-jVXf-1OC6-27Vz-lLNz-LWil-zN4b9v
       LV Write Access        read/write
       LV Creation host, time ops-controller, 2013-04-18 15:03:04 +0200
       LV Status              available
       # open                 0
       LV Size                220,57 GiB
       Current LE             56466
       Segments               1
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     256
       Block device           253:6

       --- Logical volume ---
       LV Path                /dev/vg_opscontroller/lv_swap
       LV Name                lv_swap
       VG Name                vg_opscontroller
       LV UUID                jrLxU3-gKq3-I8Zn-r0sG-8jtR-bv5f-vcSCOO
       LV Write Access        read/write
       LV Creation host, time ops-controller, 2013-04-18 15:03:44 +0200
       LV Status              available
       # open                 1
       LV Size                7,81 GiB
       Current LE             2000
       Segments               1
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     256
       Block device           253:1


I take the opportunity to also ask you how to delete that "error_deleting" volume. The problem, I think, is that its "provider_location" column in the DB is NULL. The volume, and its error, predate a correctly working Cinder setup.
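For the record, a common workaround at the time for a volume stuck in "error_deleting" was to reset its row in the Cinder database and retry the delete. This is only a sketch and assumes direct access to the MySQL database; back it up first:

    mysql cinder -e "UPDATE volumes SET status='error' WHERE id='ddd50017-f2c8-4e59-8369-281b9177280d';"
    cinder delete ddd50017-f2c8-4e59-8369-281b9177280d
    # if the backing logical volume is left behind, remove it by hand:
    lvremove /dev/cinder-volumes/volume-ddd50017-f2c8-4e59-8369-281b9177280d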


answered 2013-06-27 11:19:54 -0500

No, you don't need to configure Cinder on the compute node. It's likely that the volume you want to attach doesn't exist due to some error. Could you please run through the following checklist:

1. Run cinder list and paste the result; check whether the volume you want to attach is "available".
2. Run pvdisplay, lvdisplay, and vgdisplay on the control node, and paste the result.
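In shell form, on the control node:

    cinder list    # the target volume should show status "available"
    pvdisplay
    vgdisplay
    lvdisplay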


answered 2013-06-27 03:38:00 -0500

xianghui

Hi diul,

The error "VolumeNotFound" means the volume you specified to attach cannot be found. Did you create an available volume yet? Please put more detailed info here, like the volume, pv, vg, and lv output.


answered 2013-06-27 06:34:39 -0500

diul

Yes, the volume exists, but on the Controller Node, and it belongs to the cinder-volumes group. On the Compute Node there is no Cinder and no cinder-volumes group. I've already checked that port 3260 is reachable from the two Compute Nodes.

In the DB, the "provider_location" column for that volume contains

"CONTROLLER_IP:3260,23 iqn.2010-10.org.openstack:volume-b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8 1"

(this is not the volume from the error in the first post; it's a different one).

Do I need to configure Cinder on the Compute Node too? Am I wrong in assuming that I can attach a remote volume (existing on the Controller Node) to a VM running on a Compute Node?
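One way to verify the iSCSI path from a Compute Node (assuming the iscsi-initiator-utils package provides iscsiadm there):

    # list targets exported by the controller's tgtd on port 3260
    iscsiadm -m discovery -t sendtargets -p CONTROLLER_IP:3260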

This is "nova.conf" on Compute Node

    [DEFAULT]
    #DEFAULT CONFIG
    verbose=True
    logdir = /var/log/nova
    state_path = /var/lib/nova
    lock_path = /var/lib/nova/tmp
    #volumes_dir = /etc/nova/volumes
    injected_network_template = /usr/share/nova/interfaces.template
    rootwrap_config = /etc/nova/rootwrap.conf
    auth_strategy = keystone

    #GLANCE
    image_service=nova.image.glance.GlanceImageService
    glance_api_servers=CONTROLLER_IP:9292

    #APIS
    osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
    ec2_dmz_host=CONTROLLER_IP
    s3_host=CONTROLLER_IP

    #NETWORK
    network_api_class=nova.network.quantumv2.api.API
    quantum_admin_username=quantum
    quantum_admin_password=####
    quantum_admin_auth_url=http://CONTROLLER_IP:35357/v2.0/
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_url=http://CONTROLLER_IP:9696/
    #firewall_driver=nova.virt.firewall.NoopFirewallDriver
    #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver

    #LIBVIRT
    libvirt_nonblocking = True
    libvirt_inject_partition = -1
    compute_driver = libvirt.LibvirtDriver
    libvirt_type=kvm
    firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
    instance_name_template=instance-%08x
    api_paste_config=/etc/nova/api-paste.ini

    #COMPUTE/APIS: if you have separate configs for separate services
    #this flag is required for both nova-api and nova-compute
    allow_resize_to_same_host=True

    #ISCSI
    iscsi_helper = tgtadm

    #VOLUMES
    #volume_driver=nova.volume.driver.ISCSIDriver
    #volume_group=nova-volumes
    #volume_name_template=volume-%08x

    #MYSQL
    sql_connection = mysql://nova:nova@CONTROLLER_IP/nova

    #MESSAGE QUEUE
    rpc_backend = nova.openstack.common.rpc.impl_qpid
    qpid_hostname=CONTROLLER_IP
    qpid_port=5672

    #NOVNC CONSOLE
    novncproxy_base_url=http://CONTROLLER_IP:6080/vnc_auto.html
    #Change vncserver_proxyclient_address and vncserver_listen to match each compute host
    vncserver_proxyclient_address=NODE_IP
    vncserver_listen=NODE_IP

    [keystone_authtoken]
    admin_tenant_name = service
    admin_user = nova
    admin_password = ####
    auth_host = CONTROLLER_IP
    auth_port = 35357
    auth_protocol = http
    signing_dir = /tmp/keystone-signing-nova

I hope that's clear and complete enough. Thanks for the help.

