
Where should the cinder-volume service be installed?

asked 2016-11-11 17:20:01 -0500

xdaxdb

I would like clarification about where I should be installing the cinder-volume service.

My deployment has 1 node acting as a controller for nova/neutron/glance/cinder and 8 nodes configured as compute hosts. I am using the IBM Storwize (iSCSI) volume driver. Currently I have cinder-api and cinder-scheduler configured on the controller and cinder-volume configured on all the hosts.

Basically it works, I can create a volume in Horizon and it creates the volume on the IBM storage and maps it to a host and I can attach it to an instance. But the volume is always mapped to the same host node ( #2 of 8 ) regardless of which host the instance is running on.

I would prefer that the volumes be attached directly to the host the instance is running on, assuming that will optimize performance. Is that even possible? I'm starting to get the impression that only one storage node (a node running cinder-volume) can talk to the SAN storage, which leaves me unsure how to make the storage HA.

It would help me out significantly if I could better understand where the cinder-volume service should run in my scenario.


2 answers


answered 2016-11-12 17:24:21 -0500

volenbovsky

Hi,

Typically cinder-volume should run on one of the controller nodes, with one active process per host per backend. In the near future cinder-volume will be able to run on several hosts (search for Cinder active-active; that work started in Newton).
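For reference, running a single cinder-volume on the controller is mostly a matter of the backend section in cinder.conf. This is a minimal sketch assuming the IBM Storwize iSCSI driver; the IP, credentials, and pool name are placeholders you would replace with your own values:

```ini
# /etc/cinder/cinder.conf on the controller node (sketch, values are placeholders)
[DEFAULT]
enabled_backends = storwize-iscsi

[storwize-iscsi]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
volume_backend_name = storwize-iscsi
san_ip = 192.0.2.10
san_login = superuser
san_password = secret
storwize_svc_volpool_name = mdiskgrp0
```

With this in place only the controller needs the cinder-volume service; the compute hosts just need the iSCSI initiator tooling.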

Once a Cinder volume gets attached (that's one of the relevant scenarios), you get an iSCSI connection between the compute host the VM runs on and your storage backend. In your case the storage backend is the IBM Storwize, so that is a separate box (or boxes). That makes the part "But the volume is always mapped to the same host node ( #2 of 8 )" odd, because the compute host where the VM was booted is the one that becomes the iSCSI initiator and opens an iSCSI session towards the iSCSI target IP(s) on your storage backend. Another way of describing that: you could bring down the cinder-volume service (of course not in production ;) and your volumes should still have working reads/writes, i.e. the iSCSI sessions should remain up. The output of iscsiadm -m session (or something specific to your distribution) from all compute hosts (plus the controller) should illustrate/confirm that. Hmm, your Cinder volumes being handled by "host 2" doesn't sound like your Cinder volumes are actually being provided by the IBM Storwize...
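To confirm that the data path goes directly from each compute host to the SAN (and not through cinder-volume), you can list the live iSCSI sessions on each compute host. A minimal check, assuming open-iscsi is installed; the fallback message is just for hosts without it:

```shell
# List active iSCSI sessions on this compute host; each line shows the
# transport, the target IQN, and the portal IP of the storage backend.
iscsiadm -m session 2>/dev/null || echo "no active iSCSI sessions (or iscsiadm not installed)"
```

Every compute host with a VM that has an attached volume should show a session towards the Storwize target portal, regardless of which node runs cinder-volume.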


Comments

Thanks, that was very helpful. I was reading the HA guide for block storage last night and I started to think things were as you described. It occurred to me that maybe cinder-volume only creates one host/IQN on the SAN and all the Nova hosts connect with the same IQN.
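On the IQN question: each compute host normally has its own initiator IQN, generated at install time by open-iscsi, so the SAN should see one host object per initiator that logs in rather than one shared IQN. A quick way to check per host (the file path assumes open-iscsi; the fallback message is just for hosts without it):

```shell
# Show this host's iSCSI initiator IQN; it should differ on every compute host.
cat /etc/iscsi/initiatorname.iscsi 2>/dev/null || echo "open-iscsi not configured on this host"
```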

xdaxdb ( 2016-11-12 20:34:56 -0500 )

answered 2016-11-17 00:24:37 -0500

silently

cinder-volume can connect to a storage backend (specified in Cinder's configuration file) such as Ceph.

Ceph is distributed and clustered.
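For completeness, pointing cinder-volume at Ceph is also just configuration. A minimal sketch of the RBD backend section, assuming a pool named `volumes` and a Ceph user named `cinder` (both placeholders):

```ini
# /etc/cinder/cinder.conf (sketch, names are placeholders)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
```

Because Ceph itself is clustered, the single-backend concern is different from the Storwize case: losing one Ceph node does not take the data path down.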

