backend_id in cinder.conf with a Ceph backend and replication enabled for DR purposes

asked 2019-06-22 22:26:06 -0500

Hi all, I have a doubt about the replication_device option in cinder.conf. I have two Ceph clusters, one at the primary location and another at the DR site, mirrored to each other using RBD mirroring.

So what should the backend_id be? Is it the hostname of the secondary site, or the Ceph backend name of the secondary site?

When I tried setting backend_id to the Ceph backend name of the secondary site, it throws InvalidReplicationTarget: Unknown failover target host server2.
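
For reference, the failover is triggered with the cinderclient failover-host command, something like this (the exact service host string server1@primary is my assumption here, substitute your own):

    cinder failover-host server1@primary --backend_id secondary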

The code where execution fails is:

    def _get_target_config(self, target_id):
        """Get a replication target from known replication targets."""
        for target in self._replication_targets:
            if target['name'] == target_id:
                return target
        if not target_id or target_id == 'default':
            return {
                'name': self.configuration.rbd_cluster_name,
                'conf': self.configuration.rbd_ceph_conf,
                'user': self.configuration.rbd_user
            }
        raise exception.InvalidReplicationTarget(
            reason=_('RBD: Unknown failover target host %s.') % target_id)
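
As far as I can tell, each entry in self._replication_targets gets its 'name' from the backend_id key of the corresponding replication_device entry. A minimal standalone sketch of that parsing as I understand it (my simplification of _parse_replication_configs in rbd.py; the helper name and structure below are mine, not the driver's):

    # Minimal sketch (my simplification, not actual driver code) of how
    # replication_device entries appear to become replication targets.
    def parse_replication_devices(replication_devices):
        targets = []
        for device in replication_devices:
            targets.append({
                # 'name' comes from backend_id, so the failover target id
                # must match this value exactly.
                'name': device['backend_id'],
                'conf': device.get('conf'),
                'user': device.get('user'),
            })
        return targets

    # With replication_device = backend_id:secondary,conf:/etc/ceph/secondary.conf,pool:vol
    # the only valid non-default target id should therefore be 'secondary':
    print(parse_replication_devices(
        [{'backend_id': 'secondary', 'conf': '/etc/ceph/secondary.conf'}]))

Given that, I would expect a failover with target id 'secondary' to match, yet my error reports 'server2'.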

While debugging and analyzing the source code, I found that the function _get_target_config() in rbd.py compares the active_backend_id (passed in as target_id) with target['name'] (the backend_id from replication_device).

The active_backend_id seems to be taking the hostname. Where is the active_backend_id coming from?
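
My current reading (which may be wrong) is that active_backend_id is not derived from cinder.conf at all: the volume manager passes the secondary_id from the failover request into the driver's failover_host() and stores it as the service's active_backend_id. A rough illustration of that flow under my assumptions (class and method bodies simplified by me, not real cinder code):

    # Rough illustration (my assumption of the flow, not real cinder code):
    # the id passed to failover becomes the new active_backend_id, which is
    # later fed back into _get_target_config() as target_id.
    class ReplicatedDriver:
        def __init__(self, targets):
            self._replication_targets = targets
            self.active_backend_id = 'default'

        def _get_target_config(self, target_id):
            for target in self._replication_targets:
                if target['name'] == target_id:
                    return target
            raise ValueError('RBD: Unknown failover target host %s.' % target_id)

        def failover_host(self, secondary_id=None):
            target = self._get_target_config(secondary_id)
            self.active_backend_id = secondary_id  # persisted on the service
            return target

    driver = ReplicatedDriver([{'name': 'secondary'}])
    driver.failover_host(secondary_id='secondary')   # matches
    # driver.failover_host(secondary_id='server2')   # -> the error I am seeing

If that reading is right, the 'server2' in my error suggests a hostname is being passed as the failover target id somewhere, rather than the backend_id from replication_device.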

My cinder.conf at the primary site:

    my_ip = 10.236.249.3
    glance_api_servers = http://server1:9293
    auth_strategy = keystone
    glance_api_version = 2
    enabled_backends = primary
    target_helper = tgtadm
    transport_url = rabbit://openstack:rabbit@server1:5672

    [backend]
    [backend_defaults]

    [primary]
    replication_device = backend_id:secondary,conf:/etc/ceph/secondary.conf,pool:vol
    volume_backend_name = primary
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_cluster_name = primary
    rbd_pool = vol
    rbd_user = rbd-mirror.server1
    rbd_ceph_conf = /etc/ceph/primary.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1

My cinder.conf at the secondary site:

    my_ip = 10.236.249.4
    glance_api_servers = http://server1:9293
    auth_strategy = keystone
    glance_api_version = 2
    enabled_backends = secondary
    target_helper = tgtadm
    transport_url = rabbit://openstack:rabbit@server2:5672

    [backend]
    [backend_defaults]

    [secondary]
    replication_device = backend_id:primary,conf:/etc/ceph/primary.conf,pool:vol
    volume_backend_name = secondary
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_cluster_name = secondary
    rbd_pool = vol
    rbd_user = rbd-mirror.server2
    rbd_ceph_conf = /etc/ceph/secondary.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1

Is there anything more I should add?
