Icehouse: Cinder and Ceph Volume Creation error [closed]

asked 2014-06-22 19:22:44 -0600

alexk

updated 2014-06-23 17:24:51 -0600

Greetings,

OS: Ubuntu 14.04 OpenStack: Icehouse

I have set up Ceph with Cinder, and for some reason a volume created with the cinder client ends up in an error state, while a volume created through nova volume-create is created successfully. For instance:

root@ccinder2:/etc/cinder# **cinder create --display-name cinder-ceph-vol1 --display-description "Testing Vol Create" 100**

+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-06-22T22:01:41.454300      |
| display_description |          Testing Vol Create          |
|     display_name    |           cinder-ceph-vol1           |
|      encrypted      |                False                 |
|          id         | b3a5d66a-92e7-4f01-908f-64b3bd5b6592 |
|       metadata      |                  {}                  |
|         size        |                 100                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

root@ccinder2:/etc/cinder# **cinder list**

+--------------------------------------+--------+------------------+------+-------------+----------+-------------+
|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------------------+------+-------------+----------+-------------+
| b3a5d66a-92e7-4f01-908f-64b3bd5b6592 | error  | cinder-ceph-vol1 | 100  |     None    |  false   |             |
+--------------------------------------+--------+------------------+------+-------------+----------+-------------+
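For completeness, the failed volume's record can also be pulled up with cinder show, using the ID from the listing above; it confirms the error status, though the underlying traceback only shows up in the cinder-volume log:

root@ccinder2:/etc/cinder# **cinder show b3a5d66a-92e7-4f01-908f-64b3bd5b6592**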

... and now, on the Nova node, create a test volume:

root@ccomp1:~# **nova volume-create --display_name test_rbd_vol 5**

+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2014-06-22T23:00:25.377476           |
| display_description | -                                    |
| display_name        | test_rbd_vol                         |
| encrypted           | False                                |
| id                  | 723853f5-60b8-4ee2-806c-694597783cdb |
| metadata            | {}                                   |
| size                | 5                                    |
| snapshot_id         | -                                    |
| source_volid        | -                                    |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+

List the current volumes from the nova-compute node:

root@ccomp1:~# **nova volume-list**

+--------------------------------------+-----------+-----------------+------+-------------+-------------+
| ID                                   | Status    | Display Name    | Size | Volume Type | Attached to |
+--------------------------------------+-----------+-----------------+------+-------------+-------------+
| 723853f5-60b8-4ee2-806c-694597783cdb | available | test_rbd_vol    | 5    | None        |             |
| 5962121d-e7bf-4bf7-81b5-a161a1e81775 | error     | cinder-ceph-vol | 100  | None        |             |
+--------------------------------------+-----------+-----------------+------+-------------+-------------+

As you can see, my volume "test_rbd_vol" was created successfully without a problem. Looking through cinder-volume.log, I see the following:

2014-06-22 18:57:02.914 1420 INFO cinder.volume.manager [-] Updating volume status
2014-06-22 18:58:02.915 1420 INFO cinder.volume.manager [-] Updating volume status
2014-06-22 18:59:02.917 1420 INFO cinder.volume.manager [-] Updating volume status
2014-06-22 19:00:02.916 1420 INFO cinder.volume.manager [-] Updating volume status
2014-06-22 19:00:25.593 1420 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'cd72fa7e6af44bc5a5b7c3bc7473c367', 'tenant': u'0696d4e77c4c494ab653c91df7324ae4', 'user_identity': u'cd72fa7e6af44bc5a5b7c3bc7473c367 0696d4e77c4c494ab653c91df7324ae4 - - -'}
2014-06-22 19:00:25.775 1420 INFO cinder.volume.flows.manager.create_volume [req-44841245-b46e-4e72-8dd3-066bf35f4ee8 cd72fa7e6af44bc5a5b7c3bc7473c367 0696d4e77c4c494ab653c91df7324ae4 - - -] Volume 723853f5-60b8-4ee2-806c-694597783cdb: being created using CreateVolumeFromSpecTask._create_raw_volume with specification: {'status': u'creating', 'volume_size': 5, 'volume_name': u'volume-723853f5-60b8-4ee2-806c-694597783cdb'}
2014-06-22 19:00:25.929 1420 INFO cinder.volume.flows.manager.create_volume [req-44841245-b46e-4e72-8dd3-066bf35f4ee8 cd72fa7e6af44bc5a5b7c3bc7473c367 0696d4e77c4c494ab653c91df7324ae4 - - -] Volume volume-723853f5-60b8-4ee2-806c-694597783cdb (**723853f5-60b8-4ee2-806c-694597783cdb**): **created successfully**

2014-06-22 19:01:02.918 1420 INFO cinder.volume.manager [-] Updating volume status
2014-06-22 19:02:02.920 1420 INFO cinder.volume.manager [-] Updating volume status

There is nothing else thereafter.
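To surface the actual driver error on the next attempt, one option (just a sketch, I don't have this enabled here) is to turn on debug logging for cinder-volume and follow the log while re-running the create:

# /etc/cinder/cinder.conf on the cinder-volume node, [DEFAULT] section
debug = True

# then restart the service and watch the log (Ubuntu 14.04)
service cinder-volume restart
tail -f /var/log/cinder/cinder-volume.log

Here is my **/etc/cinder/cinder.conf** file: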

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
#
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
glance_api_version=2
rbd_user=volumes
rbd_secret_uuid=fe1776f8-041a-4976-a89c-9d0a611565cf
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshots=false
rbd_max_clone_depth=5
#
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_hosts = 10.10.10.79,10.10.10.80
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False
glance_host = 10.10.10.91

[database]
connection = mysql://cinder:XXXXXXXXXXX@10.10.10.91/cinder

[keystone_authtoken]
auth_uri = http://10.10.10.91:5000
auth_host = 10.10.10.91
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = XXXXXXXXXXX

backup_driver=cinder.backup ...
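With the rbd_user and rbd_pool values above, a quick sanity check is to confirm that the cinder-volume host can reach the Ceph cluster as that client at all (this assumes the keyring for client.volumes is installed under /etc/ceph on the cinder node; "conn-test" is just a throwaway image name):

# run on the cinder-volume host
rbd --id volumes -p volumes ls
rbd --id volumes -p volumes create conn-test --size 1024
rbd --id volumes -p volumes rm conn-test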

Closed for the following reason: the question is answered, right answer was accepted by koolhead17
close date 2014-06-30 04:46:44.380858

Comments

It looks like you've provided logs for the successful creation. Can we see the logs for the failure too?

fifieldt (2014-06-23 03:43:41 -0600)

1 answer

answered 2014-06-28 17:26:27 -0600

alexk

Found my own problem. I had mistakenly used a different UUID for the secret.xml key on the Cinder node, which is why it was erroring out. Once I re-used the same UUID for the secret.xml key (as it was configured on the Nova node), it all worked out :)
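For anyone hitting the same thing, the key point is that the libvirt secret and the rbd_secret_uuid entries in the config files must all reference one and the same UUID. A minimal sketch of what that looks like (using the UUID from my cinder.conf above; names and paths are only examples):

# define the libvirt secret with the SAME uuid on every node that attaches volumes
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>fe1776f8-041a-4976-a89c-9d0a611565cf</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret fe1776f8-041a-4976-a89c-9d0a611565cf \
    --base64 "$(ceph auth get-key client.volumes)"

# the same UUID then goes into cinder.conf as rbd_secret_uuid and, on the
# compute nodes, into nova.conf (under [libvirt] on Icehouse) as rbd_secret_uuid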

Thanks again,

Alex

