Error cloning volume or snapshot using HPMSA driver [closed]

asked 2017-02-08 19:18:29 -0500

updated 2017-02-09 06:28:44 -0500

Hello all,

I have a setup with an HP MSA 1040 storage array connected to the Cinder services over iSCSI. It works fine when we create a new volume from scratch or take a snapshot of a volume. The problem occurs when we try to create a new volume based on an existing volume (a clone) or on a snapshot.

/etc/cinder.conf

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.180.10
glance_api_servers = http://admin01:9292
glance_host = admin01
enabled_backends = pool-a
hpmsa_backend_name = openstack01
default_volume_type = hpmsa
[....]

[pool-a]
hpmsa_backend_type = virtual
hpmsa_backend_name = openstack01
volume_backend_name = hpmsa1040-vol
volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
san_ip = 192.168.254.120
san_login = manage
san_password = pass
hpmsa_iscsi_ips = 10.0.0.201,10.0.0.202
hpmsa_api_protocol = http

Command used to create the cloned volume:

cinder  --debug --os-project-name ProjetoTI2 create --name testeE --source-volid ef1e5244-33fd-479d-94d3-d56ea3fe05a9 --volume-type hpmsa 40

Error in volume.log:

2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher [req-5120e477-0a3a-4461-b46d-87290ef8ecf1 39110d12078e432e9a1cf7969800e73a 4b3f8f3d330745fe8f2b5758cd1e7a74 - - -] Exception during message handling: The command was not recognized. (2017-02-08 21:35:22) (-10025)
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     incoming.message))
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 634, in create_volume
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     _run_flow_locked()
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 623, in _run_flow_locked
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     _run_flow()
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 619, in _run_flow
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     flow_engine.run()
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 230, in run
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher     for _state in self.run_iter(timeout=timeout):
2017-02-08 22:41:51.766 3544 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines ...

Closed for the following reason: the question is answered; the right answer was accepted by Saulo Augusto Silva
close date 2017-02-09 10:33:32.962539

Comments

I don't pretend I can really help, but it's odd that the DotHill driver, and not the MSA one, appears in the stack trace.

Bernd Bausch (2017-02-09 06:36:59 -0500)

Thanks Bernd. Maybe I phrased my question incorrectly, but the situation is that I can't clone a volume when the volume is set up on the HP MSA 1040 storage array. How can I debug this further to get it solved? Any clue?

Saulo Augusto Silva (2017-02-09 07:04:36 -0500)

I would expect an error message from the MSA driver. What I see is the Dothill driver. It's understandable that the Dothill driver can't talk to an MSA box. There is something wrong with the configuration, I'd say, but I can't tell why the wrong driver appears here.

Bernd Bausch (2017-02-09 07:43:27 -0500)

After looking at the OpenStack Mitaka docs, the DotHill setup is pretty much the same as the HPMSA one, and I also found that the HPMSA driver inherits from the DotHill Cinder driver. As you can see in cinder.conf, the correct HPMSA driver is configured.

Saulo Augusto Silva (2017-02-09 09:18:16 -0500)
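The inheritance mentioned above explains why the traceback references dothill modules even though cinder.conf names the HPMSA driver. A minimal illustrative sketch (these classes and methods are stand-ins, not the actual Cinder source):

```python
# Sketch only: models how a thin subclass inherits behaviour from a base
# driver, so failures surface in the base class's module in a traceback.
class DotHillCommon:
    """Stand-in for the DotHill base driver."""
    def copy_volume(self, src, dest):
        # Hypothetical stand-in for the array command that failed
        return "volumecopy {} {}".format(src, dest)

class HPMSAISCSIDriver(DotHillCommon):
    """Thin subclass: most behaviour comes from the DotHill base."""
    pass

driver = HPMSAISCSIDriver()
# The method actually executes in the base class, so any exception it
# raises would show the base class's file in the traceback.
print(driver.copy_volume("vol-a", "vol-b"))
print(HPMSAISCSIDriver.__mro__[1].__name__)
```

So seeing "dothill" in the trace is consistent with the HPMSA driver being configured correctly.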

The solution was to set up the storage as vdisks instead of virtual pools and configure cinder.conf with the variable hpmsa_backend_type = linear. Thanks to Bernd for helping me out.

Saulo Augusto Silva (2017-02-09 10:33:12 -0500)
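For reference, applying the accepted fix to the backend section from the question would presumably look like this (a sketch assuming the other settings stay unchanged and the array is reconfigured with vdisks):

```ini
[pool-a]
# Accepted fix: use linear (vdisk) provisioning instead of virtual pools
hpmsa_backend_type = linear
hpmsa_backend_name = openstack01
volume_backend_name = hpmsa1040-vol
volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
san_ip = 192.168.254.120
san_login = manage
san_password = pass
hpmsa_iscsi_ips = 10.0.0.201,10.0.0.202
hpmsa_api_protocol = http
```

After editing cinder.conf, the cinder-volume service would need a restart for the change to take effect.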