Ceph and Cinder, volume stuck in "Creating"

asked 2015-01-18 08:42:53 -0500

FMM

Hi,

I've configured Cinder with the RBD backend only, and every time I create a volume it hangs in the "Creating" status. My OpenStack installation follows the Juno guide for Ubuntu 14.04 exactly; in detail:

  • 1 controller node (glance-api, cinder-api, cinder-scheduler)
  • 1 compute node (nova)
  • 1 network node (neutron)
  • 3 ceph nodes (ceph1, ceph2, ceph3 acting as MON/OSD and running cinder-volume)

Glance on the controller node is working fine with Ceph: I'm able to upload RAW images and see them stored in the RBD pool. But even though Cinder is connected to RBD and not raising any errors, volumes freeze in the "Creating" status.
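As a basic sanity check, the RBD pool can be listed with the same credentials cinder-volume uses; a minimal sketch, assuming the keyring for the "volumes" user sits at /etc/ceph/ceph.client.volumes.keyring on the ceph nodes (adjust the path to your deployment):

# run on a ceph node: list the "volumes" pool as the rbd_user from cinder.conf
rbd ls -p volumes --id volumes --keyring /etc/ceph/ceph.client.volumes.keyring

If this hangs or errors out, the problem is Ceph authentication rather than Cinder itself.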

Below is an overview of the configs/logs:

root@controller:~# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2015-01-18T14:12:56.000000 |       None      |
|  cinder-volume   |   ceph1    | nova | enabled |   up  | 2015-01-18T14:12:52.000000 |       None      |
|  cinder-volume   |   ceph2    | nova | enabled |   up  | 2015-01-18T14:12:56.000000 |       None      |
|  cinder-volume   |   ceph3    | nova | enabled |   up  | 2015-01-18T14:12:55.000000 |       None      |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

root@controller:~# cinder list
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
|                  ID                  |  Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| 743b25a9-ca63-42dd-84fd-deab3c5b1446 | creating |      v8      |  1   |     None    |  false   |             |
| fb9f755b-c5dc-4ff3-932c-b3bf388af1ec | creating |  testVolume  |  1   |     None    |  false   |             |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
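(As an aside: once the root cause is found, stuck rows like these can usually be cleared with reset-state; a sketch, assuming admin credentials are sourced in the shell:)

# hypothetical cleanup: force the stuck volume into "error" so it can be deleted
cinder reset-state --state error 743b25a9-ca63-42dd-84fd-deab3c5b1446
cinder delete 743b25a9-ca63-42dd-84fd-deab3c5b1446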

root@controller:~# cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = ***********
auth_strategy = keystone
my_ip = 10.0.167.101
control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_user=volumes
rbd_secret_uuid=023a9bf7-0f85-49c5-995d-ae99a9c2e046
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
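For comparison, a common multi-backend layout for the same driver groups the RBD options into a named section instead of [DEFAULT]; this is a sketch, not my actual file (the section name and volume_backend_name here are assumptions):

[DEFAULT]
enabled_backends = RBD

[RBD]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = RBD
rbd_pool = volumes
rbd_user = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = 023a9bf7-0f85-49c5-995d-ae99a9c2e046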

root@controller:~# tail -f /var/log/cinder/cinder-api.log
2015-01-18 15:19:05.637 25065 INFO cinder.api.v1.volumes [req-1cef27a8-35f4-417a-af3e-619ffef83b56 705f471ac21046d38641a41c05d3cc87 f43d2811e09d48678539bda32c0a5339 - - -] vol={'migration_status': None, 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2015, 1, 17, 21, 43, 52), 'provider_geometry': None, 'replication_extended_status': None, 'replication_status': u'disabled', 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'743b25a9-ca63-42dd-84fd-deab3c5b1446', 'size': 1L, 'user_id': u'705f471ac21046d38641a41c05d3cc87', 'attach_time': None, 'attached_host': None, 'display_description': u'', 'volume_admin_metadata': [], 'project_id': u'f43d2811e09d48678539bda32c0a5339', 'launched_at': None, 'scheduled_at': datetime.datetime(2015, 1, 17, 21, 43, 52), 'status': u'creating', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'host': u'ceph3#RBD', 'consistencygroup_id': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'v8', 'instance_uuid': None, 'bootable': False, 'created_at': datetime.datetime(2015, 1, 17, 21, 43, 51), 'attach_status': u'detached', 'volume_type': None, 'consistencygroup': None, 'volume_metadata': [], '_name_id': None, 'encryption_key_id': None, 'replication_driver_data': None}
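One detail in the line above: the scheduler did assign a host ('host': u'ceph3#RBD'), so the create request apparently reaches a cinder-volume service. The next place to look would be the volume log on that node; a hypothetical grep (any error string will do):

# on ceph3: look for errors/tracebacks around the creation timestamp
grep -i "error\|trace" /var/log/cinder/cinder-volume.log | tail -n 50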

root@controller:~# tail -f /var/log/cinder/cinder-scheduler.log
2015-01-18 15:21:24.093 5439 INFO cinder.service [-] Starting cinder-scheduler node (version 2014.2.1)
2015-01-18 15:21:24.095 5439 INFO oslo.messaging._drivers.impl_rabbit [req-6b478627-437c-4210-97f9-d284e3638bbf - - - - -] Connecting to AMQP server on controller:5672
2015-01-18 15:21:24.111 5439 INFO oslo.messaging._drivers.impl_rabbit [req-6b478627-437c-4210-97f9-d284e3638bbf - - - - -] Connected to AMQP server on controller:5672

root@ceph1:~# tail -f /var/log/cinder/cinder-volume.log
2015-01-18 15:20:30.249 29012 INFO cinder.service [-] Starting cinder-volume node (version 2014.2.1)
2015-01-18 ...

Comments

Can you enable DEBUG logging so we can see more relevant information?

lnxslck ( 2015-03-17 04:38:46 -0500 )
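For reference, enabling DEBUG in Juno is typically just a flag in cinder.conf on the node running cinder-volume, followed by a service restart; a sketch for Ubuntu 14.04:

# /etc/cinder/cinder.conf on the ceph nodes, under [DEFAULT]
debug = True

# then restart the volume service
service cinder-volume restart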