Why do two cinder-scheduler_fanout queues have no consumer?
Hi everyone,
When I had a look at the RabbitMQ queues on my controller nodes, I was surprised to see three cinder-scheduler_fanout queues. Two of them have no consumer, so their size grows continuously; each queue already takes more than 1 GB of memory. I have manually purged the queues, but new messages keep arriving.
The third queue receives messages as well, but they are consumed immediately.
To keep the growing queues from eventually exhausting the broker's memory, I tried to identify which service is sending the messages to these queues, but I had no luck. Does anyone have an idea which service publishes to these queues and how I can reconfigure it so that they are no longer used?
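For reference, this is roughly how I check the message and consumer counts from a script; it is only a minimal sketch, assuming the pika client is installed and using the same kind of placeholders as in the config below (the real fanout queue names carry a UUID suffix):

import pika

# Placeholder connection details, matching the rabbit_* settings in cinder.conf.
params = pika.ConnectionParameters(
    host='<rabbit1>',
    credentials=pika.PlainCredentials('<rabbit_user>', '<rabbit_password>'))
connection = pika.BlockingConnection(params)
channel = connection.channel()

# A passive declare neither creates nor modifies a queue; it only reports the
# current message and consumer counts of an existing one.
for name in ('cinder-scheduler_fanout_<uuid1>',
             'cinder-scheduler_fanout_<uuid2>',
             'cinder-scheduler_fanout_<uuid3>'):
    result = channel.queue_declare(queue=name, passive=True)
    print(name,
          result.method.message_count,
          result.method.consumer_count)

connection.close()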
Cloud Setup
- OpenStack Juno on Ubuntu 14.04
- 3 controller nodes
  - each running RabbitMQ
  - one controller running cinder-scheduler and cinder-api
- 3 storage nodes
  - running a Ceph cluster as the storage backend
  - each runs a Ceph monitor and a cinder-volume service
Example Message Sent to Queues
{
    "oslo.message": "{
        \"_context_domain\": null,
        \"_context_request_id\": \"req-4b04bf3f-5efc-4cf3-84e2-8c594fa027a6\",
        \"_context_quota_class\": null,
        \"_context_service_catalog\": [],
        \"_context_auth_token\": null,
        \"_context_user_id\": null,
        \"_context_is_admin\": true,
        \"version\": \"1.0\",
        \"_context_timestamp\": \"2015-03-04T13:21:42.034252\",
        \"_context_project_domain\": null,
        \"_context_user\": null,
        \"method\": \"update_service_capabilities\",
        \"_context_remote_address\": null,
        \"_context_roles\": [\"admin\"],
        \"args\": {
            \"service_name\": \"volume\",
            \"host\": \"storage3@rbd-storage\",
            \"capabilities\": {
                \"volume_backend_name\": \"RBD_STORAGE\",
                \"free_capacity_gb\": 239266,
                \"driver_version\": \"1.1.0\",
                \"total_capacity_gb\": 244822,
                \"reserved_percentage\": 0,
                \"vendor_name\": \"Open Source\",
                \"storage_protocol\": \"ceph\"
            }
        },
        \"_unique_id\": \"014c9c97b86b4c6b9eb8f7207af1ba5b\",
        \"_context_project_name\": null,
        \"_context_read_deleted\": \"no\",
        \"_context_user_identity\": \"- - - - -\",
        \"_context_tenant\": null,
        \"_context_project_id\": null,
        \"_context_user_domain\": null
    }",
    "oslo.version": "2.0"
}
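In case it is useful, this is how a single message like the one above can be peeked at without removing it from the queue; again only a minimal sketch with pika and placeholder names. It prints the RPC method and the reporting host from the embedded oslo payload, and since the fetched message is never acknowledged, RabbitMQ requeues it when the connection closes:

import json
import pika

# Same placeholder connection details as above.
params = pika.ConnectionParameters(
    host='<rabbit1>',
    credentials=pika.PlainCredentials('<rabbit_user>', '<rabbit_password>'))
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Fetch one message from one of the orphaned fanout queues (placeholder name).
frame, properties, body = channel.basic_get('cinder-scheduler_fanout_<uuid1>')
if frame is not None:
    envelope = json.loads(body)                     # outer oslo envelope
    payload = json.loads(envelope['oslo.message'])  # embedded RPC message
    print(payload['method'], payload['args'].get('host'))

connection.close()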
cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
volume_name_template = volume-%s
#verbose = True
#debug = true
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
rpc_backend = rabbit
rabbit_hosts = <rabbit1>,<rabbit2>,<rabbit3>
rabbit_password = <rabbit_password>
rabbit_userid = <rabbit_user>
rabbit_virtual_host = /
rabbit_durable_queues = False
rabbit_ha_queues = true
auth_strategy = keystone
my_ip = <myIP>
glance_host = <glanceHostIP>
glance_api_version = 2
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/storage.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups_storage
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
# the topic volume backup nodes listen on (string value)
backup_topic=cinder-backup
enabled_backends=rbd-storage,rbd-compute
default_volume_type=storage
[rbd-storage]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes_storage
volume_backend_name=RBD_STORAGE
rbd_ceph_conf = /etc/ceph/storage.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = <user>
rbd_secret_uuid = <uuid>
[rbd-compute]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes_compute
volume_backend_name=RBD_COMPUTE
rbd_ceph_conf = /etc/ceph/storage.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = <user>
rbd_secret_uuid = <uuid>
[keystone_authtoken]
auth_uri = <keystone>/v2.0
identity_uri = <keystoneIdentity>
admin_tenant_name = <tenantName>
admin_user = <userName>
admin_password = <userPassword>
[database]
connection = <databaseConnection>
ceph/storage.conf
[global]
fsid = <fsid>
mon_initial_members = storage1, storage2, storage3
mon_host = <host1>,<host2>,<host3>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = <networkIP>
cluster_network = <networkIP>
osd_journal_size = 40000
mon_clock_drift_allowed = 1
[client.cinder]
keyring = <keyring>
[client.glance]
keyring = <keyring>
[client.cinder-backup]
keyring = <keyring>