Connecting cinder to ceph - all volumes created in error state

asked 2019-06-11 09:13:02 -0500 by msarro

As mentioned in the title, I have an all-in-one setup installed with packstack (192.168.57.111) that I am attempting to integrate with a small ceph cluster. Glance was easy to integrate, but cinder is causing more of a problem. Whenever I create a volume, it immediately goes into an "Error" state. The logs themselves don't seem to show any major errors, so I'm at a loss for what to do next.
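For reference, these are the logs I have been checking; the paths are the packstack defaults on my all-in-one node, and the grep pattern is just a sketch of what I ran:

# cinder logs on the all-in-one node (default packstack locations)
grep -iE "error|trace" /var/log/cinder/api.log
grep -iE "error|trace" /var/log/cinder/scheduler.log
grep -iE "error|trace" /var/log/cinder/volume.log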

I am using OpenStack Queens and connecting to Ceph Jewel (I will be upgrading the cluster from Jewel to Luminous later as part of the class I am taking).

The cinder node does have access to the cluster using the cinder keyring:

[root@os-node1 backups(keystone_admin)]# ceph -s --id cinder
    cluster 171df60a-f55c-41d5-b854-a72f19e581ff
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.57.101:6789/0}
            election epoch 4, quorum 0 ceph-node1
     osdmap e62: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v240: 512 pgs, 4 pools, 910 MB data, 125 objects
            3713 MB used, 158 GB / 161 GB avail
                 512 active+clean
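As an additional sanity check, I can also do a throwaway write to the volumes pool as the cinder user (a sketch of what I would run; it assumes the client.cinder keyring is in /etc/ceph/ on this node):

# create, list, and remove a test image in the volumes pool as client.cinder
rbd --id cinder -p volumes create --size 128 cinder-access-test
rbd --id cinder -p volumes ls
rbd --id cinder -p volumes rm cinder-access-test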

Here is my current cinder.conf. It was generated following a textbook and validated against the Ceph RBD/OpenStack integration docs (http://docs.ceph.com/docs/master/rbd/rbd-openstack/).

[DEFAULT]
backup_swift_url=http://192.168.57.111:8080/v1/AUTH_
backup_swift_container=volumebackups
backup_driver=cinder.backup.drivers.swift
enable_v3_api=True
storage_availability_zone=nova
default_availability_zone=nova
default_volume_type=iscsi
auth_strategy=keystone
enabled_backends = ceph
glance_api_version = 2
osapi_volume_listen=0.0.0.0
osapi_volume_workers=1
log_dir=/var/log/cinder
transport_url=rabbit://guest:guest@192.168.57.111:5672/
control_exchange=openstack
api_paste_config=/etc/cinder/api-paste.ini
glance_host=192.168.57.111
nova_catalog_info=compute:nova:publicURL
nova_catalog_admin_info=compute:nova:adminURL
verbose=true
debug=true

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = f8b609e7-7034-42dd-8ba6-13614878affb
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1

[database]
connection=mysql+pymysql://cinder:37fb0ab4fc25447e@192.168.57.111/cinder

[key_manager]
backend=cinder.keymgr.conf_key_mgr.ConfKeyManager

[keystone_authtoken]
auth_uri=http://192.168.57.111:5000/
auth_type=password
auth_url=http://192.168.57.111:35357
username=cinder
password=e7fca7a9aae14334
user_domain_name=Default
project_name=services
project_domain_name=Default

[oslo_concurrency]
lock_path=/var/lib/cinder/tmp

[oslo_messaging_rabbit]
ssl=False

[oslo_policy]
policy_file=/etc/cinder/policy.json
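One thing I am unsure about: default_volume_type is still iscsi, while the only enabled backend is ceph. If a volume type mapped to the backend is required, I believe it would be created roughly like this (a sketch; I have not confirmed whether this is actually the cause):

cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph
# and then, optionally, in the [DEFAULT] section of cinder.conf:
# default_volume_type = ceph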

After a clean restart of the openstack-cinder-volume service:

[root@os-node1 backups(keystone_admin)]# service openstack-cinder-volume start
Redirecting to /bin/systemctl start openstack-cinder-volume.service

[root@os-node1 backups(keystone_admin)]# cinder list
+--------------------------------------+--------+---------------+------+-------------+----------+-------------+
| ID                                   | Status | Name          | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+---------------+------+-------------+----------+-------------+
| 348d5c38-e57f-44e7-8e37-d9646791cc77 | error  | ceph-volume01 | 2    | iscsi       | false    |             |
+--------------------------------------+--------+---------------+------+-------------+----------+-------------+
[root@os-node1 backups(keystone_admin)]# cinder create --name ceph-volume02 --description "cinder volume test 2" 2
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-06-10T22:01:00.000000           |
| description                    | cinder volume test 2                 |
| encrypted                      | False                                |
| id                             | aa435bbc-a43f-441f-8d6b-7937f7b2a82d |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | ceph-volume02                        |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 3706150f812d45bea391e0091ec91604     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | bf583b01ab6644978a4a6c07015f4ae4     |
| volume_type                    | iscsi                                |
+--------------------------------+--------------------------------------+
[root@os-node1 backups(keystone_admin)]# cinder list
+--------------------------------------+--------+---------------+------+-------------+----------+-------------+
| ID                                   | Status | Name          | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+---------------+------+-------------+----------+-------------+
| 348d5c38-e57f-44e7-8e37-d9646791cc77 | error  | ceph-volume01 | 2    | iscsi       | false ...
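For completeness, this is how I would confirm that the ceph backend registered with the scheduler after the restart (I have not pasted the output here):

cinder service-list
# or, with the unified client:
openstack volume service list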

Comments

I don't see any error messages in the cinder-volume.log. Does glance work with ceph? Can you try to create an rbd object with the cinder user? rbd -p volumes --user cinder create --size 1 test1

eblock (2019-06-21 03:02:22 -0500)