
multiple cinder back-ends

asked 2018-03-26 09:31:53 -0500 by jamesopst

updated 2018-03-26 09:34:50 -0500

hi all,

I'm running Newton on Ubuntu 16.04.1.

I believe I would benefit from multiple cinder back-ends. I hope to follow these directions to set it up: https://docs.openstack.org/cinder/lat...

I have a couple of questions before I start that I hope the community can help me with! First, my environment. I have:

  • 3 controllers set up in HA, deployed by Fuel.

  • the regular cinder volume service for all my compute nodes, backed by CEPH on 3 storage nodes also deployed by Fuel.

  • half of my compute nodes have a special kind of local storage, an HP MSA storage array (similar to a 3PAR) accessed via fiber channel.

  • the other half of my compute nodes have local hard drives used for the running vHDs and ephemeral storage, and will not have access to the MSA. They have network access to the CEPH/storage nodes for volumes.

I believe I need to set up a second cinder volume service to utilize this MSA storage most effectively (it avoids needing a shared/cluster filesystem on the MSA), but only for half of the compute nodes.

Is it possible/right to do this only on half the compute nodes? Do I need the same fiber channel access on my controller nodes too (currently they cannot access the MSA)?

I don't need to use Fuel to deploy this; I can set it up manually.

thanks!


Comments


I wonder if this is more what I need. I'm not sure: https://docs.openstack.org/newton/con...

jamesopst ( 2018-04-02 12:41:38 -0500 )

1 answer


answered 2018-04-09 13:43:25 -0500 by jamesopst

I was successful in setting this up, mostly using the information in the first link:

  1. https://docs.openstack.org/newton/con...

but the info in the second link was useful too:

  1. https://docs.openstack.org/cinder/lat...

Also useful is this link on booting instances from an image to a volume (the image needs special metadata, explained below): https://docs.openstack.org/newton/use... (the WHOLE "Create volume from image and boot instance" section)

==

Changes for this need to be made on the controller nodes only, not on the compute nodes.

Settings in /etc/cinder/cinder.conf on all controllers:

Add ",MSA" to this line:

    enabled_backends = RBD-backend, MSA

Add this section to the end of the file, after the [RBD-backend] section:

    [MSA]
    hpmsa_backend_name = A
    volume_backend_name = MSA
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 192.168.28.xxx
    san_login = <username>
    san_password = <password>

Restart the cinder-volume service on all 3 controllers:

    systemctl restart cinder-volume.service
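
Once the services come back, a quick way to confirm the new backend registered (a sketch, assuming admin credentials are sourced, e.g. via an openrc file) is to list the volume services and look for a cinder-volume entry at <controller>@MSA with State "up":

    # each controller should show a cinder-volume service for host <controller>@MSA
    openstack volume service list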

Commands to run on one controller node:

openstack volume type create MSA
openstack volume type set --property volume_backend_name=MSA MSA
systemctl restart cinder-volume.service
openstack volume type list --long
nova boot --flavor medium.1 --key-name testkey --nic net-id=867801c4-c8d4-417e-a7c4-a67d87e69242 --security-groups all_in_eg --availability-zone nova:node-<withFCcard> --user-data script_file.txt --block-device source=image,id=3f3718a4-ffb8-4c6c-8ed4-8c318a530g45,dest=volume,size=43,shutdown=preserve,bootindex=0 testmsa5
openstack volume type list --long
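
Before attempting the full boot, it can be worth creating a standalone volume with the new type as a sanity check (a hedged example; the name test-msa-vol and the 40GB size are just placeholders):

    # create a volume explicitly on the MSA backend, then check its status
    openstack volume create --type MSA --size 40 test-msa-vol
    openstack volume show test-msa-vol -c status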

The image used in the above command should have the metadata tag "cinder_img_volume_type=MSA" added to it. This can be done in Horizon. Without it, the volume will be created in the default cinder backend, in my case RBD.
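
If you would rather not use Horizon, the same metadata tag can be set from the CLI (a sketch; <image-id> is a placeholder for your image's ID or name):

    # tag the image so volumes created from it land on the MSA backend
    openstack image set --property cinder_img_volume_type=MSA <image-id>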

Fiber channel cards need to be in all the controllers as well as all the compute nodes that will be using this second cinder backend volume type. The controller creates the volume on the MSA via ethernet, over http/REST I believe. Then the controller uses its fiber channel card to upload the image onto the volume, and then maps the volume to the compute node where the instance will be created and run. You can see these volumes in the v3 MSA webUI under Volumes and Mapping.

On the compute node running the new instance with an MSA-backed volume, you will then see the volume mapped in:

ls -alR /dev/disk/by-path/
lrwxrwxrwx 1 root root 9 Apr 6 15:54 pci-0000:09:00.0-fc-0x207000c0ee46ce3f-lun-1 -> ../../sde
lrwxrwxrwx 1 root root 10 Apr 6 15:54 pci-0000:09:00.0-fc-0x207000c0ee46ce3f-lun-1-part1 -> ../../sde1
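
If the device does not show up there, it can help to first confirm the node's FC HBA is visible and logged in to the fabric (a sketch; the paths assume the standard Linux fc_host sysfs interface):

    # show each FC HBA's WWPN and link state; port_state should be "Online"
    cat /sys/class/fc_host/host*/port_name
    cat /sys/class/fc_host/host*/port_state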

Note: creating a 40GB volume results in an instance with a 38GB vHD, so there are some space differences here. When I used an image snapshot of a 40GB vHD I needed to create a 43GB volume to use it ("volume,size=43,"). When I used an image snapshot of a 100GB vHD I needed to create a 110GB volume ("volume,size=110,").

Also note we are using shutdown=preserve here (which is most common for volumes), so always make sure you don't end up with a bunch of ERROR'd ...
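
To spot and clean up any volumes left in that state, something like the following works (a sketch, assuming the openstack CLI; <volume-id> is a placeholder):

    # list volumes stuck in the error state, then delete them by ID
    openstack volume list --status error
    openstack volume delete <volume-id>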
