
cinder multi backend ceph with block device support

asked 2015-06-23 09:54:23 -0500

Added a disk to one of the hypervisors (node-53) and partitioned it into five 30.00 GB partitions, each with a Linux filesystem,
e.g. /dev/sde1, /dev/sde2, /dev/sde3, /dev/sde4, /dev/sde5
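
(For reference, a minimal sketch of one way to end up with five ~30 GB partitions named /dev/sde1 through /dev/sde5. It assumes a fresh, relabelable disk and uses a GPT label instead of the extended/logical layout described above, so adjust to your own disk:)

parted -s /dev/sde mklabel gpt             # assumption: disk is empty and can be relabeled
parted -s /dev/sde mkpart part1 1MiB 30GiB
parted -s /dev/sde mkpart part2 30GiB 60GiB
parted -s /dev/sde mkpart part3 60GiB 90GiB
parted -s /dev/sde mkpart part4 90GiB 120GiB
parted -s /dev/sde mkpart part5 120GiB 150GiB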

Then created a new availability zone, and added node-53 to the zone:

nova aggregate-create blockdevhosts blockdevhosts

The aggregate was created with ID 2, so I then added the host with the block devices:

nova aggregate-add-host 2 node-53.domain.com
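
(To double-check the aggregate/zone wiring before touching cinder, something like this should work with the Juno-era novaclient; output omitted:)

nova aggregate-details 2          # should list node-53.domain.com and availability_zone=blockdevhosts
nova availability-zone-list       # the new blockdevhosts zone should appear next to the default nova zone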

Then I reconfigured my cinder.conf:

[DEFAULT]
amqp_durable_queues=False
kombu_reconnect_delay=5.0
rabbit_hosts=127.0.0.1:5673,10.10.28.2:5673,10.10.28.4:5673
rabbit_userid=nova
rabbit_password=DR3votEu
rabbit_virtual_host=/
rabbit_ha_queues=True
notification_driver=cinder.openstack.common.notifier.rpc_notifier
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=cinder
osapi_volume_listen=10.10.28.9
api_paste_config=/etc/cinder/api-paste.ini
glance_api_servers=10.10.28.6:9292
glance_api_version=2
glance_num_retries=0
glance_api_insecure=False
glance_api_ssl_compression=False
host=rbd:volumes
auth_strategy=keystone
debug=True
verbose=True
log_dir=/var/log/cinder
use_syslog=True
use_syslog_rfc_format=True
syslog_log_facility=LOG_LOCAL3

enabled_backends=ceph,blockdevhost

[ceph]
volume_backend_name=ceph
rbd_pool=volumes
rbd_user=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=False
rbd_secret_uuid=a5d0dd94-57c4-ae55-ffe0-7e3732a24455
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
storage_availability_zone=nova

[blockdevhost]
volume_backend_name=blockdevhosts
volume_driver=cinder.volume.drivers.block_device.BlockDeviceDriver
available_devices='/dev/sde1,/dev/sde2,/dev/sde3,/dev/sde4,/dev/sde5'
storage_availability_zone=blockdevhosts

[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[database]
connection=mysql://cinder:Zyc8kTdz@10.10.28.6/cinder?charset=utf8&read_timeout=60
idle_timeout=3600
max_pool_size=10
max_retries=-1
max_overflow=30
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
auth_host=10.10.28.6
auth_port=35357
auth_protocol=http
admin_user=cinder
admin_password=lTqXiBID
admin_tenant_name=services
signing_dir=/tmp/keystone-signing-cinder
signing_dirname=/tmp/keystone-signing-cinder
[matchmaker_redis]
[matchmaker_ring]
[profiler]
[ssl]

Exception starting cinder-volume service:

2015-06-22 20:14:02.578 31841 INFO cinder.openstack.common.service [-] Started child 8831
2015-06-22 20:14:02.580 8831 INFO cinder.service [-] Starting cinder-volume node (version 2014.2)
2015-06-22 20:14:02.585 8831 INFO cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Starting volume driver BlockDeviceDriver (1.0.0)
2015-06-22 20:14:02.627 8831 DEBUG oslo.db.sqlalchemy.session [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/session.py:482
2015-06-22 20:14:02.755 8831 DEBUG cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Re-exporting 0 volumes init_host /usr/lib/python2.6/site-packages/cinder/volume/manager.py:263
2015-06-22 20:14:02.755 8831 DEBUG cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Resuming any in progress delete operations init_host /usr/lib/python2.6/site-packages/cinder/volume/manager.py:304
2015-06-22 20:14:02.756 8831 INFO cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Updating volume status
2015-06-22 20:14:02.757 8831 DEBUG cinder.openstack.common.processutils [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf blockdev --getsz /dev/sde1 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:158
2015-06-22 20:14:02.850 8831 DEBUG cinder.openstack.common.processutils [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Result was 1 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:192
2015-06-22 20:14:02.854 8831 ERROR cinder.openstack.common.threadgroup [-] Unexpected error while running command.
Command ...

1 answer


answered 2015-06-23 10:41:27 -0500

capsali

So if I understand correctly, you have Ceph as a backend for cinder and want to add another backend that is stored locally on the compute node and is not network storage? If this is the case, then you need to run cinder-volume on that compute node and configure the backend in cinder.conf on that compute node. You will then have two cinder-volume services: node@ceph and node-53@blockdevhosts. If I misunderstood what you are trying to say, then I'm sorry :)
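
(As a rough sketch only, reusing values from the question and not verified: the cinder.conf on node-53 could carry just the local backend while pointing at the same RabbitMQ, MySQL and Keystone as the controller:)

# /etc/cinder/cinder.conf on node-53 (sketch)
[DEFAULT]
# ...same rabbit_*, [database] and [keystone_authtoken] settings as on the controller...
enabled_backends=blockdevhost
storage_availability_zone=blockdevhosts

[blockdevhost]
volume_backend_name=blockdevhosts
volume_driver=cinder.volume.drivers.block_device.BlockDeviceDriver
available_devices=/dev/sde1,/dev/sde2,/dev/sde3,/dev/sde4,/dev/sde5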


Comments

Okay, I understand and that makes some sense. If I start the openstack-cinder-volume service on that specific compute node and then have two cinder-volume services... will they work properly with the same RabbitMQ and MySQL database? Ultimately I would like to configure the Volume Types in Horizon to choose

championofcyrodiil ( 2015-06-23 15:51:29 -0500 )
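
(One quick way to check that both cinder-volume services registered against the shared RabbitMQ/MySQL; the expected rows below are only sketched from the config in the question:)

cinder service-list
# expect two cinder-volume rows, roughly:
#   cinder-volume  rbd:volumes@ceph                  nova            enabled  up
#   cinder-volume  node-53.domain.com@blockdevhost   blockdevhosts   enabled  up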

By "choose", I mean that if I pick volume type "local" I get BlockDeviceDriver, and with volume type "ceph" I get RBDDriver. I'm wondering how the FilterScheduler will handle allocating "local" volume types to the node-53 cinder-volume, while allocating "rbd" volume types to any node that can connect to Ceph.

championofcyrodiil ( 2015-06-23 15:54:10 -0500 )
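
(The usual way to make that choice explicit is to map each volume type to a volume_backend_name, so the FilterScheduler routes by backend rather than by host. The type names "ceph" and "local" here are just examples:)

cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph
cinder type-create local
cinder type-key local set volume_backend_name=blockdevhosts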

Also, I understand this creates more complexity, since Ceph volumes support live migration easily, while BlockDevice requires physically copying the volume data from one compute node to another. We don't plan to migrate these volumes, but we need the disk I/O performance.

championofcyrodiil ( 2015-06-23 15:56:30 -0500 )

Well, they should work fine. The problem is that if one cinder-volume goes down, you will not have access to the volumes managed by that cinder-volume until you bring it back up. If you had two network storage backends, then you could have done a proper cinder-volume HA setup by setting the "host" parameter

capsali ( 2015-06-24 06:32:40 -0500 )

in cinder.conf to the same value. In your case, even if you set the host parameter on both cinder-volume services to the same value, you will only have a partial HA environment. If the cinder-volume that points to the Ceph backend fails, the one on the compute node should take over (if it has access to the Ceph cluster and cephx

capsali ( 2015-06-24 06:34:45 -0500 )
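
(In other words, the partial-HA variant hinted at above amounts to giving both cinder-volume services the same host value in cinder.conf; the value below is only illustrative:)

[DEFAULT]
# identical on the controller and on node-53, so either service can manage
# the same volumes; only meaningful for backends that both hosts can reach
host=cinder-cluster-1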

