ocata upgrade - CentOS 7 / RDO / GlusterFS 3.8: No module named glusterfs (solved) [closed]

asked 2017-02-28 18:05:53 -0500

updated 2017-03-04 18:12:36 -0500

From the cinder release notes:

--- upgrade: - The GlusterFS volume driver, which was deprecated in the Newton release, has been removed.

Any idea how i can get this back?

The gluster volumes are all visible and mountable... I validated all this, as I can see instances in glance and volumes... The controller even has a scratch mount share from the same gluster (/openstack_scratch type fuse.glusterfs), so I know it's working...

The problem is that after the upgrade to Ocata, with the same cinder.conf settings I have always had, I now get a "No module named glusterfs" error, even though python modules exist:

[root@platform2 site-packages]# ls -ald cinder*
drwxr-xr-x 24 root root 4096 Mar  2 09:07 cinder
drwxr-xr-x  2 root root  143 Mar  2 09:07 cinder-10.0.0-py2.7.egg-info
drwxr-xr-x  6 root root 4096 Feb 26 17:51 cinderclient
drwxr-xr-x  2 root root  136 Mar  2 19:09 cinder_data
drwxr-xr-x  2 root root  112 Mar  2 19:09 cinder_data-0.0.1-py2.7.egg-info
drwxr-xr-x  2 root root   44 Mar  2 09:07 cinder_tests.egg-info

[root@platform2 site-packages]# ls -ald gluster*
drwxr-xr-x 5 root root   96 Mar  1 10:38 gluster
drwxr-xr-x 2 root root  112 Mar  1 10:36 glustercli-0.2-py2.7.egg-info
drwxr-xr-x 4 root root   68 Mar  1 10:38 gluster_georep_tools
drwxr-xr-x 2 root root 4096 Mar  1 10:38 gluster_georep_tools-0.2-py2.7.egg-info
drwxr-xr-x 2 root root   90 Mar  2 19:07 gluster_stats
drwxr-xr-x 2 root root  147 Mar  2 19:07 gluster_stats-1.2.2.dist-info
[root@platform2 site-packages]#
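The listings above show why the error is confusing: there are gluster* packages on disk, but none of them provide cinder.volume.drivers.glusterfs, which Ocata removed. A minimal sketch of how a driver loader resolves the dotted volume_driver path (simplified; cinder's real code uses oslo.utils importutils, and the class name here is only illustrative):

```python
# Sketch of dotted-path driver loading, as cinder does at startup.
# Simplified illustration -- not cinder's actual loader code.
import importlib

def load_driver(dotted_path):
    """Split 'pkg.module.Class', import the module, return the class."""
    module_path, class_name = dotted_path.rsplit(".", 1)
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# The unrelated gluster* packages (glustercli, gluster_stats, ...) do not
# supply cinder.volume.drivers.glusterfs, so the import fails:
try:
    load_driver("cinder.volume.drivers.glusterfs.GlusterfsDriver")
except ImportError as exc:
    print("driver load failed: %s" % exc)
```

This is why no amount of reinstalling the gluster client packages helps: the module the volume_driver option points at no longer exists in the cinder tree.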

Debug output from starting the service:

2017-02-28 20:06:11.782 17241 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2017-02-28 20:06:11.782 17241 DEBUG oslo_concurrency.lockutils [-] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2017-02-28 20:06:11.783 17241 DEBUG oslo_concurrency.lockutils [-] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:225
2017-02-28 20:06:11.802 17241 WARNING oslo_config.cfg [-] Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
2017-02-28 20:06:12.011 17241 DEBUG oslo_db.sqlalchemy.engines [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261
2017-02-28 20:06:12.164 17241 INFO cinder.rpc [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] Automatically selected cinder-volume objects version 1.21 as minimum service version.
2017-02-28 20:06:12.178 17241 INFO cinder.rpc [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] Automatically selected cinder-scheduler objects version 1.21 as minimum service version.
2017-02-28 20:06:12.184 17241 INFO cinder.rpc [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] Automatically selected cinder-scheduler RPC version 3.5 as minimum service version.
2017-02-28 20:06:12.298 17241 DEBUG oslo_db.sqlalchemy.engines [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261
2017-02-28 20:06:12.334 17241 INFO cinder.volume.manager [req-d8c9e611-7236-46d0-ba93-d68cb911a093 ...

Closed for the following reason the question is answered, right answer was accepted by bcollins
close date 2017-03-04 18:12:12.576097

1 answer


answered 2017-03-04 18:10:17 -0500

So if you ever hit RDO issues going from Mitaka to Newton on CentOS 7, then need to go to Ocata and lose the GlusterFS driver as I did: move your gluster volumes to NFS-Ganesha and remount them as NFS.
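A minimal sketch of what the NFS-Ganesha side of that migration looks like, assuming a Gluster volume exported through the Ganesha Gluster FSAL. The export id, paths, hostname, and volume name below are examples, not values from this deployment:

```ini
# /etc/ganesha/ganesha.conf -- minimal Gluster-backed NFS export sketch.
# Export_Id, Path, Hostname and Volume are illustrative assumptions.
EXPORT {
    Export_Id = 1;
    Path = "/cinder_volumes";
    Pseudo = "/cinder_volumes";
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;
        Hostname = "platform2";
        Volume = "cinder_volumes";
    }
}
```

Once Ganesha is serving the export, cinder only ever sees a plain NFS mount and the removed GlusterFS driver is no longer needed.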

Wherever cinder.conf had a glusterfs backend, replace it with NFS options, as in the example below.

[SSHD_NAS]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares.conf
volume_backend_name = SSHD_NAS
nfs_sparsed_volumes = True
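The nfs_shares_config file referenced above lists one NFS export per line. A sketch of what it might contain for a Ganesha export; the hostname and export path are assumptions, not values from the original post:

```
# /etc/cinder/nfs_shares.conf -- one "host:/export" entry per line.
# Hostname and path below are illustrative examples.
platform2:/cinder_volumes
```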

https://gluster.readthedocs.io/en/lat...

