
Error with more than one cinder-volume

asked 2013-05-23 08:00:21 -0500 by rmm0811

I downloaded the Grizzly release of OpenStack and deployed it on two nodes (node1 and dev202):

node1: cinder-api, cinder-scheduler, cinder-volume, keystone, mysql, qpid
dev202: cinder-volume only

Both nodes use the qpid RPC backend:

    rpc_backend = cinder.openstack.common.rpc.impl_qpid
    qpid_durable_queues = True

Watching the queue status with qpid-tool (list queue active):

    252353  07:50:56  -  248315.cinder-scheduler
    252354  07:50:56  -  248315.cinder-scheduler.node1
    252355  07:50:56  -  248315.cinder-scheduler_fanout_e0ef7071e1b744769df5f06dae595550
    252479  07:52:27  -  248315.cinder-volume
    252480  07:52:27  -  248315.cinder-volume.node1
    252481  07:52:27  -  248315.cinder-volume_fanout_df149604027d49fabd8853f3acb7e997
    252549  07:52:49  -  248315.cinder-volume.dev202
    252550  07:52:49  -  248315.cinder-volume_fanout_4bab111c0de74d8285b12ba4990d9ec9

Then I stopped the cinder-volume service on node1 (with kill). Now list queue active shows:

    252353  07:50:56  -  248315.cinder-scheduler
    252354  07:50:56  -  248315.cinder-scheduler.node1
    252355  07:50:56  -  248315.cinder-scheduler_fanout_e0ef7071e1b744769df5f06dae595550

All of the cinder-volume queues were deleted. After this, newly created volumes stay stuck in the creating status: the cinder-volume service on dev202 is still active but receives no queue messages. However, cinder-volume on dev202 still reports its status to cinder-scheduler.

How can I solve this problem? I think the primary cause is the QueueDelete message that removes the shared queue.
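For context, the Grizzly qpid driver attaches topic consumers with auto-delete options on both the node and the link, which would explain the broker dropping the shared cinder-volume queue once consumers detach. Below is a rough sketch of those address options, paraphrased from memory rather than copied from the tree, so treat the exact keys and defaults as assumptions and check cinder/openstack/common/rpc/impl_qpid.py for the real values:

    # Approximate shape of the TopicConsumer address options in Grizzly's
    # impl_qpid.py -- the exact keys and defaults are an assumption here.
    addr_opts = {
        "create": "always",
        "node": {
            "type": "topic",
            "x-declare": {
                "durable": True,
                "auto-delete": True,   # broker may drop the queue when consumers detach
            },
        },
        "link": {
            "name": "cinder-volume",
            "durable": True,
            "x-declare": {
                "durable": False,
                "auto-delete": True,   # same for the subscription link
                "exclusive": False,
            },
        },
    }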


18 answers


answered 2013-05-29 02:49:14 -0500 by rmm0811

/usr/bin/cinder-volume starts its service differently from the other binaries (cinder-scheduler, nova-compute, nova-scheduler, etc.). The /usr/bin/cinder-volume start method:

    launcher = service.ProcessLauncher()
    server = service.Service.create(binary='cinder-volume')
    launcher.launch_server(server)
    launcher.wait()

The others use:

    server = service.Service.create(binary='nova-compute',
                                    topic=CONF.compute_topic,
                                    db_allowed=False)
    service.serve(server)
    service.wait()

Then I changed /usr/bin/cinder-volume to the service.wait() style used by the others (sketched below), and the problem that confused me disappeared. Now, when the cinder-volume service is stopped, the CRITICAL info appears.
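Presumably the modified entry point looked something like this (my reconstruction from the description above, not the author's exact diff):

    # Reconstructed /usr/bin/cinder-volume entry point: run the service
    # in-process, as cinder-scheduler does, instead of forking a child
    # through ProcessLauncher.
    server = service.Service.create(binary='cinder-volume')
    service.serve(server)
    service.wait()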

The difference between the two methods is whether a child process is forked or not.

Is this the reason for the problem that confused me?

Could anyone help me?


answered 2013-05-30 14:32:20 -0500 by rmm0811

I am not sure whether setting session.receivers and session.senders directly to an empty list is the right way to solve the problem.
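If anyone wants to try it, the idea would be roughly the following, as a hypothetical and untested sketch: clear the sessions' receiver and sender bookkeeping before closing, so the client-side close path never issues the QueueDelete for the shared queue.

    # Hypothetical, untested sketch of the workaround, applied to the
    # close() method in cinder/openstack/common/rpc/impl_qpid.py (the
    # self.connection.close() frame in the tracebacks elsewhere in this
    # thread). Clearing the lists first would keep the client from
    # deleting the shared 'cinder-volume' queue on shutdown.
    def close(self):
        for session in self.connection.sessions.values():
            session.receivers = []
            session.senders = []
        self.connection.close()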


answered 2013-05-28 07:27:08 -0500 by rmm0811

I think I found some problems with qpid as the RPC backend, but I'm not sure about it. Could anyone try to reproduce this in your environment?

OpenStack Grizzly; the config file needs debug = True.

  1. service openstack-cinder-scheduler stop (likewise for nova-compute, nova-scheduler, etc.)
  2. vi /var/log/cinder/scheduler.log; info like the following will be found:

    2013-05-27 06:02:46 CRITICAL [cinder] need more than 0 values to unpack
    Traceback (most recent call last):
      File "/usr/bin/cinder-scheduler", line 50, in <module>
        service.wait()
      File "/usr/lib/python2.6/site-packages/cinder/service.py", line 613, in wait
        rpc.cleanup()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/__init__.py", line 240, in cleanup
        return _get_impl().cleanup()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 649, in cleanup
        return rpc_amqp.cleanup(Connection.pool)
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 671, in cleanup
        connection_pool.empty()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 80, in empty
        self.get().close()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 386, in close
        self.connection.close()
      File "<string>", line 6, in close
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 316, in close
        ssn.close(timeout=timeout)
      File "<string>", line 6, in close
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 749, in close
        if not self._ewait(lambda: self.closed, timeout=timeout):
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait
        result = self.connection._ewait(lambda: self.error or predicate(), timeout)
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 208, in _ewait
        result = self._wait(lambda: self.error or predicate(), timeout)
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 193, in _wait
        return self._waiter.wait(predicate, timeout=timeout)
      File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 57, in wait
        self.condition.wait(3)
      File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 96, in wait
        sw.wait(timeout)
      File "/usr/lib/python2.6/site-packages/qpid/compat.py", line 53, in wait
        ready, _, _ = select([self], [], [], timeout)
    ValueError: need more than 0 values to unpack
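The ValueError at the bottom is just Python failing a 3-way unpack because select() in qpid's compat wrapper returned an empty sequence instead of the usual three lists; my reading is that this happens when the descriptor is already gone during shutdown, but that part is unconfirmed. A trivial demonstration of the error itself:

    # Minimal demonstration: unpacking three values from an empty
    # sequence raises exactly this ValueError on Python 2.6.
    >>> ready, _, _ = ()
    Traceback (most recent call last):
      ...
    ValueError: need more than 0 values to unpack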


answered 2013-05-23 16:47:21 -0500 by shantha-kumar

Make sure the cinder-scheduler log shows updates arriving from the cinder-volume service on each node, and check the volume and scheduler logs for more information.

Share the logs and conf files so we can help you more.


answered 2013-05-24 03:33:54 -0500 by rmm0811

My cinder config files and part of the logs:

node1 config file (cinder-api, cinder-scheduler, and cinder-volume all run with the same cinder.conf):

    [DEFAULT]
    logdir = /var/log/cinder
    state_path = /var/lib/cinder
    lock_path = /var/lib/cinder/tmp
    gpfs_volumes_dir = /gpfs/volumes/volumes/node1
    iscsi_helper = tgtadm
    sql_connection = mysql://cinder:cinder@192.168.0.201/cinder
    rpc_backend = cinder.openstack.common.rpc.impl_qpid
    rootwrap_config = /etc/cinder/rootwrap.conf
    auth_uri = http://192.168.0.201:5000
    auth_strategy = keystone
    qpid_hostname = localhost
    api_paste_config = /etc/cinder/api-paste.ini
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    debug = True
    volume_driver = cinder.volume.drivers.gpfs.GpfsDriver
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,CapabilitiesFilter
    storage_availability_zone = test:dev188

    [keystone_authtoken]
    admin_tenant_name = service
    admin_user = cinder
    admin_password = cinder
    auth_host = 192.168.0.201
    auth_port = 35357
    auth_protocol = http
    signing_dirname = /tmp/keystone-signing-cinder

dev202 (cinder-volume runs on it), cinder.conf:

    [DEFAULT]
    logdir = /var/log/cinder
    state_path = /var/lib/cinder
    lock_path = /var/lib/cinder/tmp
    gpfs_volumes_dir = /gpfs/volumes/volumes/dev202
    iscsi_helper = tgtadm
    sql_connection = mysql://cinder:cinder@192.168.0.201/cinder
    rpc_backend = cinder.openstack.common.rpc.impl_qpid
    rootwrap_config = /etc/cinder/rootwrap.conf
    auth_uri = http://192.168.0.201:5000
    auth_strategy = keystone
    qpid_hostname = 192.168.0.201
    api_paste_config = /etc/cinder/api-paste.ini
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    debug = True
    volume_driver = cinder.volume.drivers.gpfs.GpfsDriver
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,CapabilitiesFilter
    storage_availability_zone = test:dev202

    [keystone_authtoken]
    admin_tenant_name = service
    admin_user = cinder
    admin_password = cinder
    auth_host = 192.168.0.201
    auth_port = 35357
    auth_protocol = http

node1: kill -9 cinder-volume, then /var/log/cinder/volume.log:

    2013-05-23 21:39:57 INFO [cinder.service] Caught SIGTERM, stopping children
    2013-05-23 21:39:57 INFO [cinder.service] Waiting on 1 children to exit
    2013-05-23 21:39:57 INFO [cinder.service] Caught SIGTERM, exiting
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.ops] SENT[2638c68]: MessageCancel(destination='0', id=serial(25))
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.ops] SENT[2638c68]: QueueDelete(queue='cinder-volume', id=serial(26), sync=True)
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.ops] SENT[2638c68]: ConnectionClose(reply_code=200)
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.raw] SENT[2638c68]: '\x0f\x01\x00\x14\x00\x01\x00\x00\x00\x00\x00\x00\x04\x08\x01\x00\x01\x00\x010\x0f\x01\x00 \x00\x01\x00\x00\x00\x00\x00\x00\x08\x02\x01\x01\x01\x00\rcinder-volume\x0f\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0b\x01\x00\x00\xc8'
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.raw] READ[2638c68]: '\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x1a\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0c\x00\x00'
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.ops] RCVD[2638c68]: SessionCompleted(commands=[0-26])
    2013-05-23 21:39:57 DEBUG [qpid.messaging.io.ops] RCVD[2638c68]: ConnectionCloseOk()
    2013-05-23 21:39:57 INFO [cinder.service] Child 332 exited with status 1

scheduler.log after killing cinder-volume on node1:

    2013-05-23 22:29:36 DEBUG [qpid.messaging] RCVD[3dff8c0]: Message({u'_context_roles': [u'admin'], u'_context_request_id': u'req-23a28132-0658-4392-a0fe-128beb54f0da ... (truncated)


answered 2013-05-27 09:14:43 -0500 by rmm0811

When I stop the volume service on node1, a "Delete failed" message appears in the cinder log; however, when I stop it on dev202, the message does not appear.

Log from stopping the service:

    2013-05-27 04:10:23 INFO [cinder.service] Caught SIGTERM, stopping children
    2013-05-27 04:10:23 INFO [cinder.service] Waiting on 1 children to exit
    2013-05-27 04:10:23 INFO [cinder.service] Caught SIGTERM, exiting
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] SENT[37e8c68]: MessageCancel(destination='0', id=serial(25))
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] SENT[37e8c68]: QueueDelete(queue='cinder-volume', id=serial(26), sync=True)
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] SENT[37e8c68]: ConnectionClose(reply_code=200)
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.raw] SENT[37e8c68]: '\x0f\x01\x00\x14\x00\x01\x00\x00\x00\x00\x00\x00\x04\x08\x01\x00\x01\x00\x010\x0f\x01\x00 \x00\x01\x00\x00\x00\x00\x00\x00\x08\x02\x01\x01\x01\x00\rcinder-volume\x0f\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0b\x01\x00\x00\xc8'
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.raw] READ[37e8c68]: '\x0f\x01\x00y\x00\x01\x00\x00\x00\x00\x00\x00\x03\x03\x01\x00\x7f\x00\x01\x94\x00\x00\x00\x1a\x08\x02\x00\x00Tnot-found: Delete failed. No such queue: cinder-volume (qpid/broker/Broker.cpp:1119)\x00\x00\x00\x04\x00\x00\x00\x00\x0f\x00\x008\x00\x00\x00\x00\x00\x00\x00\x00\x02\x03\x01\x00\x00&419462f4-23a7-4d15-b1db-866aa4ffb380:0\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0c\x00\x00'
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] RCVD[37e8c68]: ExecutionException(error_code=404, command_id=serial(26), class_code=8, command_code=2, field_index=0, description=u'not-found: Delete failed. No such queue: cinder-volume (qpid/broker/Broker.cpp:1119)', error_info={}, id=serial(7))
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] RCVD[37e8c68]: SessionDetach(name='419462f4-23a7-4d15-b1db-866aa4ffb380:0')
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] SENT[37e8c68]: SessionDetached(name='419462f4-23a7-4d15-b1db-866aa4ffb380:0')
    2013-05-27 04:10:23 DEBUG [qpid.messaging.io.ops] RCVD[37e8c68]: ConnectionCloseOk()
    2013-05-27 04:10:23 INFO [cinder.service] Child 14594 exited with status 1
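My guess, and it is only a guess: both cinder-volume consumers share one auto-delete queue, so whichever close path runs after the broker has already dropped the queue gets "Delete failed. No such queue". A rough repro sketch along those lines, assuming python-qpid (qpid.messaging) and a broker on localhost; whether the client really issues a QueueDelete in this path depends on the python-qpid version:

    # Rough repro sketch (assumptions noted above): two consumers attach
    # to the same auto-delete queue; after the last one detaches, the
    # broker removes the queue, so a later explicit delete would fail
    # with "No such queue".
    from qpid.messaging import Connection

    ADDR = ("cinder-volume ; {create: always, "
            "node: {type: queue, x-declare: {auto-delete: True}}}")

    c1 = Connection("localhost:5672"); c1.open()
    c2 = Connection("localhost:5672"); c2.open()
    r1 = c1.session().receiver(ADDR)
    r2 = c2.session().receiver(ADDR)
    c1.close()  # one consumer left; the queue survives
    c2.close()  # last consumer detaches; the broker auto-deletes the queue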


answered 2013-05-27 09:28:54 -0500 by rmm0811

I tried stopping cinder-scheduler with service openstack-cinder-scheduler stop. The end of the log prints error info; is this related to the problem above?

    2013-05-27 04:18:46 AUDIT [cinder.service] SIGTERM received
    2013-05-27 04:18:46 DEBUG [qpid.messaging.io.ops] SENT[2e70c20]: SessionDetach(name='741d7405-dcdb-4c9c-8fdc-b1e45f519150:5')
    2013-05-27 04:18:46 DEBUG [qpid.messaging.io.ops] SENT[2e70c20]: ConnectionClose(reply_code=200)
    2013-05-27 04:18:46 DEBUG [qpid.messaging.io.raw] SENT[2e70c20]: '\x0f\x00\x008\x00\x00\x00\x00\x00\x00\x00\x00\x02\x03\x01\x00\x00&741d7405-dcdb-4c9c-8fdc-b1e45f519150:5\x0f\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0b\x01\x00\x00\xc8'
    2013-05-27 04:18:46 DEBUG [qpid.messaging.io.raw] READ[2e70c20]: '\x0f\x00\x009\x00\x00\x00\x00\x00\x00\x00\x00\x02\x04\x03\x00\x00&741d7405-dcdb-4c9c-8fdc-b1e45f519150:5\x00\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0c\x00\x00'
    2013-05-27 04:18:46 DEBUG [qpid.messaging.io.ops] RCVD[2e70c20]: SessionDetached(name='741d7405-dcdb-4c9c-8fdc-b1e45f519150:5', code=0)
    2013-05-27 04:18:46 DEBUG [qpid.messaging.io.ops] RCVD[2e70c20]: ConnectionCloseOk()
    2013-05-27 04:18:46 CRITICAL [cinder] need more than 0 values to unpack
    Traceback (most recent call last):
      File "/usr/bin/cinder-scheduler", line 50, in <module>
        service.wait()
      File "/usr/lib/python2.6/site-packages/cinder/service.py", line 613, in wait
        rpc.cleanup()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/__init__.py", line 240, in cleanup
        return _get_impl().cleanup()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 649, in cleanup
        return rpc_amqp.cleanup(Connection.pool)
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 671, in cleanup
        connection_pool.empty()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 80, in empty
        self.get().close()
      File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 386, in close
        self.connection.close()
      File "<string>", line 6, in close
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 316, in close
        ssn.close(timeout=timeout)
      File "<string>", line 6, in close
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 749, in close
        if not self._ewait(lambda: self.closed, timeout=timeout):
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait
        result = self.connection._ewait(lambda: self.error or predicate(), timeout)
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 208, in _ewait
        result = self._wait(lambda: self.error or predicate(), timeout)
      File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 193, in _wait
        return self._waiter.wait(predicate, timeout=timeout)
      File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 57, in wait
        self.condition.wait(3)
      File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 96, in wait
        sw.wait(timeout)
      File "/usr/lib/python2.6/site-packages/qpid/compat.py", line 53, in wait
        ready, _, _ = select([self], [], [], timeout)
    ValueError: need more than 0 values to unpack


answered 2013-05-27 10:03:57 -0500 by rmm0811

On node1: service openstack-cinder-volume stop. On dev202, tail -f /var/log/cinder/volume.log receives the message "Queue cinder-volume has been deleted":

    2013-05-27 04:59:14 DEBUG [qpid.messaging.io.raw] READ[2900758]: '\x0f\x01\x00g\x00\x01\x00\x00\x00\x00\x00\x00\x03\x03\x01\x00\x7f\x00\x01\x98\x00\x00\x00\x00\x00\x00\x00\x00BQueue cinder-volume has been deleted. (qpid/broker/Queue.cpp:1787)\x00\x00\x00\x04\x00\x00\x00\x00\x0f\x00\x008\x00\x00\x00\x00\x00\x00\x00\x00\x02\x03\x01\x00\x00&338970db-7a5a-4993-b5c0-493b4fa2ce80:0'
    2013-05-27 04:59:14 DEBUG [qpid.messaging.io.ops] RCVD[2900758]: ExecutionException(error_code=408, command_id=serial(0), class_code=0, command_code=0, field_index=0, description=u'Queue cinder-volume has been deleted. (qpid/broker/Queue.cpp:1787)', error_info={}, id=serial(7))
    2013-05-27 04:59:14 DEBUG [qpid.messaging.io.ops] RCVD[2900758]: SessionDetach(name='338970db-7a5a-4993-b5c0-493b4fa2ce80:0')
    2013-05-27 04:59:14 DEBUG [qpid.messaging.io.ops] SENT[2900758]: SessionDetached(name='338970db-7a5a-4993-b5c0-493b4fa2ce80:0')
    2013-05-27 04:59:14 DEBUG [qpid.messaging.io.raw] SENT[2900758]: '\x0f\x00\x008\x00\x00\x00\x00\x00\x00\x00\x00\x02\x04\x01\x00\x00&338970db-7a5a-4993-b5c0-493b4fa2ce80:0'
    2013-05-27 04:59:19 DEBUG [cinder.manager] Running periodic task VolumeManager._publish_service_capabilities
    2013-05-27 04:59:19 DEBUG [cinder.manager] Notifying Schedulers of capabilities ...
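This matches the symptom in the question: dev202's session is detached when the shared queue disappears, its periodic tasks keep running, but nothing re-attaches the consumer. A generic qpid.messaging pattern for noticing the detach and re-attaching would look roughly like the sketch below; this is not what impl_qpid actually does, just an illustration of the recovery step that seems to be missing:

    # Generic reconnect sketch (illustration only, not the impl_qpid
    # code path): re-attach the receiver whenever the session dies.
    from qpid.messaging import Connection, Empty

    def consume_forever(url, addr):
        while True:
            conn = Connection(url)
            conn.open()
            try:
                rcv = conn.session().receiver(addr)
                while True:
                    try:
                        print "got:", rcv.fetch(timeout=5).content
                    except Empty:
                        pass  # no message within the timeout; keep waiting
            except Exception, exc:
                print "session lost (%s); reconnecting" % exc
            finally:
                try:
                    conn.close()
                except Exception:
                    pass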


answered 2013-05-27 10:08:25 -0500 by rmm0811

Watching the active queues with qpid-tool: when the cinder-volume services on node1 and dev202 are started, list queue active shows:

    1962  09:58:23  -  301.cinder-scheduler
    1963  09:58:23  -  301.cinder-scheduler.node1
    1964  09:58:23  -  301.cinder-scheduler_fanout_37e2047fa48b46c2a08867b7e9d00b2a
    2444  10:04:22  -  301.cinder-volume
    2445  10:04:22  -  301.cinder-volume.dev202
    2446  10:04:22  -  301.cinder-volume_fanout_fac001a142eb411a9cc1473e51c86626
    2466  10:04:32  -  301.cinder-volume.node1
    2467  10:04:32  -  301.cinder-volume_fanout_bdfd1086647d4bb68859efebf01d77f7

However, this changes after node1 or dev202 executes service openstack-cinder-volume stop.

list queue active:

1962  09:58:23  -          301.cinder-scheduler
1963  09:58:23  -          301.cinder-scheduler.node1
1964  09:58:23  -          301.cinder-scheduler_fanout_37e2047fa48b46c2a08867b7e9d00b2a

All of the cinder-volume queues were removed.
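To double-check from a script rather than qpid-tool, something like the following should work as a sketch; it assumes python-qpid (qpid.messaging) is installed and the broker runs on localhost, and relies on the default address policy of create: never, so attaching a receiver to a missing queue raises an exception:

    # Sketch: probe whether the shared 'cinder-volume' queue still exists.
    # Assumes python-qpid (qpid.messaging) and a broker on localhost:5672.
    from qpid.messaging import Connection

    conn = Connection("localhost:5672")
    conn.open()
    try:
        # Default policy is create: never, so this raises if the broker
        # no longer has a node named 'cinder-volume'.
        conn.session().receiver("cinder-volume")
        print "queue cinder-volume still exists"
    except Exception, exc:
        print "queue lookup failed: %s" % exc
    finally:
        conn.close()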
