Need help configuring NFS as cinder's back end
I deployed OpenStack using one control node and one compute node. I have enabled Cinder, but I am having trouble getting its back end working. I am not very familiar with Cinder or NFS, but I was following the steps for using NFS as the back end at the link below: https://docs.openstack.org/kolla-ansible/latest/reference/cinder-guide.html
I followed all the steps from that documentation. I was planning to use my control node as the storage node.
I have tested that NFS works independently of Cinder on the control and compute nodes with the following steps:
- I installed nfs-utils on my compute and control nodes and started NFS.
- I added an entry for the `/home` directory to `/etc/exports` on the control node.
- I then created a directory called `nfs_home` on my compute node and mounted the control node's (the NFS server's) `/home` onto `nfs_home`.

That works: I can add files in `/home` on the control node and see those changes reflected in `nfs_home` on the compute node.
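The manual test above can be sketched as the following commands. This is just my sequence reconstructed for clarity; `yum` is an assumption based on nfs-utils (a CentOS/RHEL-style host), and `IP_ADDRESS_OF_CONTROL_NODE` is the same placeholder used elsewhere in this post:

```shell
# --- On the control node (NFS server) ---
sudo yum install -y nfs-utils           # install the NFS tooling
sudo systemctl enable --now nfs-server  # start NFS

# Re-read /etc/exports (the /home entry was added there first)
sudo exportfs -ra

# --- On the compute node (NFS client) ---
sudo yum install -y nfs-utils
mkdir ~/nfs_home
sudo mount -t nfs IP_ADDRESS_OF_CONTROL_NODE:/home ~/nfs_home

# Files created under /home on the control node should now
# appear under ~/nfs_home on the compute node
ls ~/nfs_home
```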
I need help getting NFS to work with Cinder. Below are the steps I have taken, but I cannot create volumes and I get an error.
The `/etc/exports` on the control node:
/home/cinder_storage IP_ADDRESS_OF_CONTROL_NODE/24(rw,no_root_squash)
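After changing `/etc/exports` I re-export and check that the share is visible. This is how I would expect the verification to look; the exact `showmount` output depends on the environment, so I have not included it:

```shell
# Re-read /etc/exports without restarting the NFS server
sudo exportfs -ra

# List the server's current exports; /home/cinder_storage should
# be listed, restricted to the control node's /24 network
showmount -e IP_ADDRESS_OF_CONTROL_NODE
```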
On the deploy host I created an `nfs_shares` file (via `vi /etc/kolla/config/nfs_shares`) and added the following line:
IP_ADDRESS_OF_CONTROL_NODE:/home/cinder_storage
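As a sanity check of my own (not a step from the kolla-ansible guide), the share listed in `nfs_shares` can be test-mounted by hand from the node that runs cinder-volume. The mount point name here is just an example:

```shell
# Manually mount the exact share cinder-volume will use
sudo mkdir -p /mnt/cinder_test
sudo mount -t nfs IP_ADDRESS_OF_CONTROL_NODE:/home/cinder_storage /mnt/cinder_test

# Confirm the share is writable, since cinder needs to create
# files (e.g. its secure-environment indicator) on it
sudo touch /mnt/cinder_test/write_test
ls -l /mnt/cinder_test

# Clean up
sudo rm /mnt/cinder_test/write_test
sudo umount /mnt/cinder_test
```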
Here is what all my Cinder-related options are set to in my `/etc/kolla/globals.yml`:
enable_cinder: "yes"
#enable_cinder_backup: "yes"
#enable_cinder_backend_hnas_iscsi: "no"
#enable_cinder_backend_hnas_nfs: "no"
#enable_cinder_backend_iscsi: "no"
#enable_cinder_backend_lvm: "no"
enable_cinder_backend_nfs: "yes"
################################
# Cinder - Block Storage Options
################################
# Enable / disable Cinder backends
#cinder_backend_ceph: "{{ enable_ceph }}"
#cinder_backend_vmwarevc_vmdk: "no"
#cinder_volume_group: "cinder-volumes"
# Valid options are [ nfs, swift, ceph ]
#cinder_backup_driver: "ceph"
#cinder_backup_share: ""
#cinder_backup_mount_options_nfs: ""
I am not sure what to set for the `Cinder - Block Storage Options` section in my `globals.yml`, or whether I need to change anything there at all.
If I run `cinder service-list` I get the following output:
+------------------+----------------------------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | openstackcontroller.glv.internal | nova | enabled | up | 2017-11-28T21:08:32.000000 | - |
| cinder-scheduler | openstackcontroller.glv.internal | nova | enabled | up | 2017-11-28T21:08:32.000000 | - |
| cinder-volume | openstackcontroller.glv.internal@nfs-1 | nova | enabled | up | 2017-11-28T21:08:34.000000 | - |
+------------------+----------------------------------------+------+---------+-------+----------------------------+-----------------+
But when I try to create a new volume I get the following errors:
ERROR cinder.scheduler.flows.create_volume [req-38257bf8-2ccf-4c06-a324-da7fed5a4828 d77551a2455649a287ed9f1a8871f852 c3a4ae5f4caf46218c592d0b92c593c6 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. Exceeded max scheduling attempts 3 for volume None: NoValidBackend: No valid backend was found. Exceeded max scheduling attempts 3 for volume None
2017-11-28 14:18:23.562 39 ERROR cinder.volume.drivers.remotefs [req-9b3fb095-7198-4f46-a4b5-7dcd42c73970 - - - - -] Failed to created Cinder secure environment indicator file: [Errno 13] Permission denied: '/var/lib/cinder/mnt/8804d6b3feed68a301166ec9a566d83d/.cinderSecureEnvIndicator': IOError: [Errno 13] Permission denied: '/var/lib/cinder/mnt/8804d6b3feed68a301166ec9a566d83d/.cinderSecureEnvIndicator'
Am I not setting certain options correctly? I tried following that documentation in the link above but I know ...