Multiple cinder nodes in Mirantis HA openstack deployment?

asked 2016-10-19 08:16:00 -0500 by dongdan39

Hi, Expert,

I deployed Mirantis OpenStack Liberty with 3 controller nodes, as required by the default Mirantis HA mode. My question is: how do I know which of the 3 controller nodes is the leading one (all 3 nodes provide the cinder service at the same time)? A practical question is: how do I configure cinder so that all 3 controller nodes share the same large backend iSCSI storage and provide volumes to VMs simultaneously (rather than configuring 3 separate iSCSI volumes, one per controller node)? Thanks!

Cheers,

Dongdan


1 answer


answered 2016-10-20 07:34:10 -0500 by dbaxps, updated 2016-10-20 10:20:38 -0500

Using the answer field as a comment.

UPDATE 10/20/16
On Newton and Mitaka PCS HA controller clusters (TripleO), Swift is set up as the glance back end:

[root@overcloud-controller-0 ~]# .  overcloudrc
[root@overcloud-controller-0 ~(keystone_admin)]# glance image-list
+--------------------------------------+-----------------+
| ID                                   | Name            |
+--------------------------------------+-----------------+
| 1f3c63de-794b-4bc8-aab0-e1cfc474f54a | Ubuntu1604Cloud |
| 3295916a-bd3c-46e0-a842-54da3bdd2196 | VF24Cloud       |
+--------------------------------------+-----------------+
[root@overcloud-controller-0 ~(keystone_admin)]# .  overcloudrc_glance
[root@overcloud-controller-0 ~(keystone_glance)]# swift list glance
1f3c63de-794b-4bc8-aab0-e1cfc474f54a
1f3c63de-794b-4bc8-aab0-e1cfc474f54a-00001
1f3c63de-794b-4bc8-aab0-e1cfc474f54a-00002
3295916a-bd3c-46e0-a842-54da3bdd2196
3295916a-bd3c-46e0-a842-54da3bdd2196-00001

and this is no coincidence.
END UPDATE

You stated:

All 3 nodes provide cinder service at the same time

However, based on experience with the setup described in https://github.com/beekhof/osp-ha-dep... ,
I would point to its section "Cinder-volume as a single point of failure":

There are currently concerns over the cinder-volume service ability to run as a fully active-active service. During the Mitaka time-frame, this is being worked on, see [1]. Thus, cinder-volume will only be running on one of the controller nodes, even if it will be configured on all nodes. In case of a failure in the node running cinder-volume, it should be started in a surviving controller node.
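
On a Pacemaker-managed cluster like the one in that guide, you can check which controller is currently running cinder-volume directly from Pacemaker. A minimal sketch, assuming the resource is named openstack-cinder-volume as in RDO/TripleO deployments (a Mirantis/Fuel deployment may use a different resource name or may not manage the service through Pacemaker at all):

# show the cinder-volume resource and the node it is started on
pcs status | grep -A 2 cinder-volume
# or ask Pacemaker directly where the resource is running
crm_resource --resource openstack-cinder-volume --locate

Whichever node the resource is reported as started on is the one actually serving volumes at that moment; after a failure Pacemaker starts it on a surviving controller.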

Reference [1] mentioned above is the blueprint "Support High Availability Active-Active configurations in Cinder Volume":
https://blueprints.launchpad.net/cind...

Work items:
Remove API Races: INPROGRESS
Manager Local Locks: INPROGRESS
Job distribution: INPROGRESS
Cleanup: TODO
Data Corruption Prevention: TODO
Drivers' Locks: INPROGRESS

Returning to your statement, I would have to conclude that Mirantis resolved this issue for Liberty ahead of the upstream blueprint's status.
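
As for sharing one big iSCSI array between the controllers: with a SAN-style cinder driver, the same backend definition can be placed in cinder.conf on every controller, so whichever node happens to run cinder-volume talks to the same array. A minimal sketch only; the driver class, backend name and credentials below are placeholders that depend on your actual iSCSI array, not values taken from the Mirantis deployment:

[DEFAULT]
enabled_backends = shared_iscsi

[shared_iscsi]
# placeholder - substitute the cinder driver shipped for your iSCSI array
volume_driver = cinder.volume.drivers.your_vendor.YourArrayISCSIDriver
volume_backend_name = shared_iscsi
# array management address and credentials - placeholders
san_ip = 192.168.1.50
san_login = admin
san_password = secret

A volume type mapped to that backend (cinder type-create shared; cinder type-key shared set volume_backend_name=shared_iscsi) then lets volumes be scheduled onto the shared array no matter which controller's API receives the request.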


Comments

Thank you for the reply, Boris, quite helpful! I've now found a way to configure cinder on one of the controller nodes, and all 3 controllers can issue cinder commands to provide volumes to VMs. Of course, the cinder volumes are actually provided by that single controller only. Thanks!
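
A quick way to confirm which host is actually backing the volumes, assuming the standard cinder CLI and admin credentials are available on any controller:

# the cinder-volume row shows the Host and whether its State is up
cinder service-list

Only the host listed for the cinder-volume binary is serving volumes; cinder-api and cinder-scheduler can stay active on all three controllers.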

dongdan39 ( 2016-10-21 22:29:10 -0500 )
