
I was able to set this up mostly using the information in the first link:

1. https://docs.openstack.org/newton/config-reference/block-storage/drivers/hp-msa-driver.html

but the info in the second link was useful too.

Also useful is this link on booting an instance from an image onto a volume (the image needs special metadata, explained below): https://docs.openstack.org/newton/user-guide/cli-nova-launch-instance-from-volume.html (read the WHOLE "Create volume from image and boot instance" section).

==

Changes for this need to be made on the controller nodes only, not on the compute nodes.

Settings in /etc/cinder/cinder.conf on all controllers:

Add ",MSA" to this line:

    enabled_backends = RBD-backend, MSA

Then add this section to the end of the file, after the [RBD-backend] section:

    [MSA]
    hpmsa_backend_name = A
    volume_backend_name = MSA
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 192.168.28.xxx
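
For reference, the Newton HPMSA driver docs also list array credentials for this section (san_login / san_password); a fuller sketch, where the login and password values are placeholders you must replace with your array's credentials:

```ini
# [MSA] backend section sketch - san_login/san_password are placeholder values
[MSA]
hpmsa_backend_name = A
volume_backend_name = MSA
volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
san_ip = 192.168.28.xxx
san_login = manage
san_password = <array-password>
```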


Restart the cinder-volume service on all 3 controllers:

    systemctl restart cinder-volume.service
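
After the restart you can confirm the new backend registered and is reporting "up" (output layout may vary slightly by release):

```shell
# List cinder-volume backends; each enabled backend appears as host@backend
# (e.g. controller1@MSA alongside controller1@RBD-backend)
openstack volume service list --service cinder-volume
```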


Commands to run on one controller node:

    openstack volume type create MSA
    openstack volume type set --property volume_backend_name=MSA MSA
    systemctl restart cinder-volume.service
    openstack volume type list --long
    nova boot --flavor medium.1 --key-name testkey --nic net-id=867801c4-c8d4-417e-a7c4-a67d87e69242 --security-groups all_in_eg --availability-zone nova:node-<withFCcard> --user-data script_file.txt --block-device source=image,id=3f3718a4-ffb8-4c6c-8ed4-8c318a530g45,dest=volume,size=43,shutdown=preserve,bootindex=0 testmsa5


The image used in the above command needs a metadata tag, "cinder_img_volume_type=MSA". This can be set in Horizon. Without it the volume will be created in the default Cinder backend, in my case RBD.
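
Instead of Horizon, the same metadata can be set from the CLI (the image name below is a placeholder):

```shell
# Tag the image so Cinder builds its boot volume on the MSA backend
openstack image set --property cinder_img_volume_type=MSA my-image

# Verify the property landed
openstack image show my-image -c properties
```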

Fibre Channel cards need to be in all the controllers as well as in all the compute nodes that will use this second Cinder backend volume type. The controller creates the volume on the MSA over Ethernet (HTTP/REST, I believe). The controller then uses its Fibre Channel card to upload the image onto the volume, and finally maps the volume to the compute node where the instance will run. You can see these volumes in the MSA v3 web UI under Volumes and Mapping. On the compute node running the new instance with an MSA-backed volume, you will then see the volume mapped in:

    ls -alR /dev/disk/by-path/
    lrwxrwxrwx 1 root root  9 Apr  6 15:54 pci-0000:09:00.0-fc-0x207000c0ee46ce3f-lun-1 -> ../../sde
    lrwxrwxrwx 1 root root 10 Apr  6 15:54 pci-0000:09:00.0-fc-0x207000c0ee46ce3f-lun-1-part1 -> ../../sde1
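
To correlate that device back to the local FC HBA, the standard Linux fc_host sysfs entries can help (presence depends on the card and driver):

```shell
# WWPNs of the local Fibre Channel HBAs on this compute node
cat /sys/class/fc_host/host*/port_name

# Block-device view of the mapped LUN (device name is from the ls output above)
lsblk /dev/sde
```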


Note that creating a 40GB volume results in an instance with a 38GB vHD, so there is some space overhead here. When I used an image snapshot of a 40GB vHD I needed to create a 43GB volume to hold it ("volume,size=43,"). When I used an image snapshot of a 100GB vHD I needed to create a 110GB volume ("volume,size=110,").

Also note we are using shutdown=preserve here (which is most common for volumes), so always make sure you don't end up with a bunch of stale ERROR'd volumes lying around.
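
A quick way to spot and clean up such leftovers from the CLI (double-check each volume is really unused before deleting; the ID below is a placeholder):

```shell
# Find volumes stuck in the error state
openstack volume list --status error

# Remove a stale one after confirming nothing references it
openstack volume delete <volume-id>
```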