
Ceph-OpenStack integration "unable to update stats, RBDDriver -1.1.0 driver is uninitialized" error (Kilo + Hammer)

asked 2016-05-24 by niv, updated 2016-05-24

Is somebody familiar with this error? I've tried to search for a fix but sadly failed to find one out there.

My system: Ubuntu 14.04.2, Ceph Hammer (with cephx), OpenStack Kilo

Reference, the guide I used: http://docs.ceph.com/docs/master/rbd/rbd-openstack/

My environment is on a closed network, so it's hard to get output out. When I run cinder-volume it just writes to the log, and what's in the log is just the sentence I put in the title:

RBDDriver -1.1.0 is uninitialized.

I've deployed my whole Ceph environment with Ansible playbooks from leseb's GitHub. As a standalone cluster it is fully functional: I've tried it with different benchmarking tools such as COSBench, iometer, getput, dd, rados bench and a few more, and never encountered any errors or malfunctions on the Ceph side.

My cinder.conf:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = {{ ip_manage }}
enabled_backends = ceph
glance_host = {{ controller_virtual_name }}
osapi_volume_listen_port = 58776

[oslo_messaging_rabbit]
rabbit_host = {{ controller_virtual_name }}
rabbit_userid = openstack
rabbit_password = {{ rabbit_mq_password }}

[oslo_concurrency]
lock_path = /var/lock/cinder

[database]
connection = mysql://{{ mysql_cinder_user }}:{{ mysql_cinder_password }}@{{ controller_virtual_name }}/cinder

[keystone_authtoken]
auth_uri = http://{{ controller_virtual_name }}:5000
auth_url = http://{{ controller_virtual_name }}:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinderpass

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = XXXXXXXXXXXXXXXXX
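
A quick way to check that the rbd_user and rbd_pool above are actually reachable from the cinder-volume node is a short rados test. A minimal sketch, assuming the client.cinder keyring lives at the standard path from the rbd-openstack guide (adjust if yours differs):

# Connect to Ceph as the same identity cinder-volume uses (rbd_user = cinder).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      rados_id='cinder',  # matches rbd_user
                      conf={'keyring': '/etc/ceph/ceph.client.cinder.keyring'})  # assumed path
cluster.connect()                      # raises rados.Error on auth or network failure
print(cluster.get_fsid())              # fsid printed => monitors reachable, auth OK
print(cluster.pool_exists('volumes'))  # True => the rbd_pool from [ceph] exists
cluster.shutdown()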


ceph health output:

HEALTH_WARN pool .rgw.buckets has too few pgs; noout,noscrub,nodeep-scrub flag(s) set.

cinder-volume log output:

DeprecationWarning: The oslo namespace package is deprecated. Please use oslo_config instead

cinder.volume.manager Determined volume DB was empty at startup

cinder.openstack.common.service Starting 1 workers

cinder.openstack.common.service Started child 99999

cinder.service Starting cinder-volume node (version 2015.1.1)

cinder.volume.manager Starting volume driver RBDDriver (-1.1.0)

cinder.volume.drivers.rbd Error connecting to ceph cluster.

cinder.volume.manager Error encountered during initialization of driver: RBDDriver

cinder.volume.manager Bad or unexpected response from storage volume backend API: Error connecting to ceph cluster

oslo_messaging._drivers.impl_rabbit Connected to AMQP server on controller:5672

cinder.volume.manager Updating volume status

cinder.volume.manager self.driver.initialized

cinder.volume.manager Unable to update stats, RBDDriver -1.1.0 driver is uninitialized


3 answers


answered 2016-05-24 by rduncan-t, updated 2016-05-25

Looks like the Ceph remote block device (RBD) driver is not initialized.

What is the output of

ceph health

and the output of

cinder-volume

Most likely it's a simple config issue.

Looking at the output, your Ceph cluster is not ready; specifically you have the error message

.rgw.buckets has too few pgs;

which points to having too few placement groups in your cluster. Things to look at:

ceph osd dump | grep ^pool

and

ceph health detail

The fix might be to add more PGs.
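
For example, assuming ceph health detail confirms .rgw.buckets is the undersized pool, raising its placement group count would look like this (128 is only a placeholder; size it to your OSD count with a PG calculator):

ceph osd pool set .rgw.buckets pg_num 128
ceph osd pool set .rgw.buckets pgp_num 128

Separately, the noout,noscrub,nodeep-scrub flags in your health output were set by an operator; they can be cleared with ceph osd unset noout (and likewise for the two scrub flags) once any maintenance is finished.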


Comments

niv (2016-05-24): Posted ceph health and cinder-volume.
answered 2016-05-24

Share the outputs of ceph health and cinder-volume.


Comments

niv (2016-05-24): Posted a few minutes ago.
answered 2016-05-26 by niv

It was a network issue between this node and the Ceph cluster.

I wrote a Python script that tests the connection to the monitor servers:

import rados

c = rados.Rados(conffile='/etc/ceph/ceph.conf')
c.connect()
print(c.version())
print(c.get_fsid())
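
One caveat worth knowing: when the network path to the monitors is broken, connect() can block for a long time. python-rados also accepts a timeout argument, e.g. c.connect(timeout=5), so the script fails fast instead of hanging.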

A detailed guide to the rados Python library: http://docs.ceph.com/docs/jewel/rados/api/python/


Comments

vgdub (2018-09-05): I have never been able to solve this issue, but I can confirm that I am able to connect to the backend cluster, yet I always get "RBD Driver (config XXX) is uninitialized". If anyone has any leads on troubleshooting this I would appreciate it; I have already checked the permissions, ceph.conf, etc.
