unable to create instance from iSCSI backend: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

asked 2019-10-09 09:18:12 -0500

techcraft

updated 2019-10-10 18:55:20 -0500

Hi all, I installed Packstack on an HP MicroServer with an internal 500 GB HDD and attached a QNAP device over iSCSI as external storage. I have a strange problem: I cannot deploy an instance if I try to use the attached iSCSI device.
If I use the internal disk (backend: lvm) I don't have any problem.
If I only create a volume on the iSCSI device (backend: lvmiscsi) I don't have any problem.
If I create the boot volume on the iSCSI device and try to attach it to an instance, or try to create an instance from the simple CirrOS image, the process aborts.

This is my cinder.conf:

[DEFAULT]
backup_swift_url=http://192.168.1.16:8080/v1/AUTH_
backup_swift_container=volumebackups
backup_driver=cinder.backup.drivers.swift.SwiftBackupDriver
enable_v3_api=True
auth_strategy=keystone
storage_availability_zone=nova
default_availability_zone=nova
default_volume_type=lvmiscsi
enabled_backends=lvmiscsi,lvm
osapi_volume_listen=0.0.0.0
osapi_volume_workers=2
debug=False
log_dir=/var/log/cinder
transport_url=rabbit://guest:guest@192.168.1.16:5672/
control_exchange=openstack
api_paste_config=/etc/cinder/api-paste.ini
glance_host=192.168.1.16

[database]
connection=mysql+pymysql://cinder:5b086b5fb21d4d3a@192.168.1.16/cinder

[fc-zone-manager]

[healthcheck]

[key_manager]
backend=cinder.keymgr.conf_key_mgr.ConfKeyManager

[keystone_authtoken]
www_authenticate_uri=http://192.168.1.16:5000/
auth_uri=http://192.168.1.16:5000/
auth_type=password
auth_url=http://192.168.1.16:5000
username=cinder
password=4c5d588b50e444e7
user_domain_name=Default
project_name=services
project_domain_name=Default

[nova]

[oslo_concurrency]
lock_path=/var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]
driver=messagingv2

[oslo_messaging_rabbit]
ssl=False

[oslo_policy]
policy_file=/etc/cinder/policy.json

[lvmiscsi]
volume_group=cinder-volumes-iscsi
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_ip_address=10.10.10.2
volume_backend_name=LVM_iSCSI
volumes_dir=/var/lib/cinder/volumes

[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=192.168.1.16
target_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes

The complete error is:

"Error: Failed to perform requested operation on instance "test", the instance has an error status: Please try again later [Error: Build of instance b9633c6b-9780-4e07-9868-72dd6e99c16b aborted: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-67c09277-485f-46af-b50e-e55410005a1d)]."

Looking at /var/log/cinder/volumes.log

2019-10-09 16:12:21.130 17232 INFO oslo.privsep.daemon [-] privsep daemon starting
2019-10-09 16:12:21.135 17232 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2019-10-09 16:12:21.139 17232 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2019-10-09 16:12:21.139 17232 INFO oslo.privsep.daemon [-] privsep daemon running as pid 17232
2019-10-09 16:12:21.330 16061 ERROR oslo_messaging.rpc.server [req-67c09277-485f-46af-b50e-e55410005a1d 86628b0073f44211a63dc1dff6dcad2a 74e6c4c79ece48f481152ef9b312bc50 - default default] Exception during message handling: OSError: [Errno 2] No such file or directory
2019-10-09 16:12:21.330 16061 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-10-09 16:12:21.330 16061 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
2019-10-09 16:12:21.330 16061 ... (more)


Comments

Can you share your LVM configuration? Output of pvs and vgs commands is probably sufficient.

There is no Cinder config option iscsi_ip_address, as far as I can see. I wonder what this address is?

Bernd Bausch ( 2019-10-09 18:44:44 -0500 )

I assume this is a single-host cloud, correct?

Correction: iscsi_ip_address is used by various volume drivers, but not the LVM driver.

Another oddity in your config: The lvm backend uses non-default target helper lioadm, whereas lvmiscsi uses the default tgtadm. Perhaps tgtadm is not installed?

Bernd Bausch ( 2019-10-09 18:44:58 -0500 )

Yes, it's a single-host installation (Packstack) on a host with two different NICs: management (192.168.1.16) and iSCSI (10.10.10.1). iSCSI target: 10.10.10.2.

techcraft ( 2019-10-10 04:07:11 -0500 )

I have now installed tgtadm via scsi-target-utils as you suggested, but nova shows errors (and now I cannot create volumes either).

techcraft ( 2019-10-10 04:10:00 -0500 )

Actually, I wanted to see the output of pvs and vgs. I am particularly interested to see the details of the non-working volume group, cinder-volumes-iscsi, to find out what's wrong with it.

Bernd Bausch ( 2019-10-10 04:34:18 -0500 )

2 answers


answered 2019-10-10 19:07:33 -0500

updated 2019-10-10 19:15:14 -0500

You have a fundamental misunderstanding.

The LVM driver doesn't know what disk arrays are connected, what their iSCSI addresses are, and so on. The LVM driver knows the volume group named cinder-volumes-iscsi, and that's it. It doesn't know and doesn't care where the physical volumes reside that this volume group consists of. They could be local disks or partitions, Ceph block devices, Fibre Channel disk array LUNs, SD cards, USB thumb drives, loopback devices, smartphones, LUNs on iSCSI disk arrays, and so on.

This means that the LVM driver doesn't communicate with your QNAP device. It creates a logical volume in cinder-volumes-iscsi, then creates an iSCSI target that exposes the logical volume to the compute node that needs it. You ask why it doesn't use the existing iSCSI device: again, because it doesn't know that there is an iSCSI device. It is not made for working with existing iSCSI devices.

I think that your configuration parameters confuse the LVM driver or the higher-level Cinder code, in particular the disk array's IP address 10.10.10.2. This is why I think you have two options:

  • The [lvm] backend works. Make a copy of it, rename the copy [lvmiscsi], change the backend name and the volume group name, and it should work as well.
  • Use the QNAP driver. Unfortunately, I don't see it documented at https://docs.openstack.org/cinder/lat..., but you should either find information on the QNAP web site or talk to QNAP support. In any case, the QNAP driver exists and is supported; the code is here.
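For the first option, a sketch of what the copied section could look like, reusing the values already present in your cinder.conf (adjust the names to your setup):

```ini
# Sketch: [lvmiscsi] as a copy of the working [lvm] backend,
# with only the backend name and the volume group changed.
[lvmiscsi]
volume_backend_name = lvmiscsi
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address = 192.168.1.16
target_helper = lioadm
volume_group = cinder-volumes-iscsi
volumes_dir = /var/lib/cinder/volumes
```

Note that nothing in this section refers to the QNAP box at all; the iSCSI target that Cinder creates is served from the host itself.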

EDIT: The QNAP driver code seems to have the following config options:

qnap_opts = [
    cfg.URIOpt('qnap_management_url',
               help='The URL to management QNAP Storage. '
                    'Driver does not support IPv6 address in URL.'),
    cfg.StrOpt('qnap_poolname',
               help='The pool name in the QNAP Storage'),
    cfg.StrOpt('qnap_storage_protocol',
               default='iscsi',
               help='Communication protocol to access QNAP storage')
]

and these Boolean extra-specs:

        qnap_thin_provision = self._parse_boolean_extra_spec(
            extra_specs.get('qnap_thin_provision', 'true'))
        qnap_compression = self._parse_boolean_extra_spec(
            extra_specs.get('qnap_compression', 'true'))
        qnap_deduplication = self._parse_boolean_extra_spec(
            extra_specs.get('qnap_deduplication', 'false'))
        qnap_ssd_cache = self._parse_boolean_extra_spec(
            extra_specs.get('qnap_ssd_cache', 'false'))

There is also a very detailed QNAP community forum entry on Cinder integration.
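Putting those options together, a backend section for the QNAP driver might look roughly like the following. This is only a sketch: the URL, pool name, and credentials are placeholders, and san_login/san_password are the generic Cinder SAN credential options rather than anything QNAP-specific.

```ini
# Hypothetical [qnap] backend section; every value is a placeholder.
[qnap]
volume_backend_name = qnap
volume_driver = cinder.volume.drivers.qnap.QnapISCSIDriver
qnap_management_url = https://<qnap-management-ip>:443
qnap_poolname = <pool name as shown on the QNAP>
qnap_storage_protocol = iscsi
san_login = <qnap admin user>
san_password = <qnap admin password>
```

The section would also have to be listed in enabled_backends under [DEFAULT] and mapped to a volume type via its volume_backend_name so that volumes can be scheduled to it.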


Comments

Eureka! It runs! I have implemented the QNAP driver + lvmiscsi and I can deploy now:

[lvmiscsi]
volume_backend_name=lvmiscsi
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=192.168.1.16
target_helper=lioadm
volume_group=cinder-volumes-iscsi
volumes_dir=/var/lib/cinder/volumes

techcraft ( 2019-10-11 14:39:35 -0500 )

[qnap-hdd]
volume_backend_name="QNAP-HDD"
volume_driver=cinder.volume.drivers.qnap.QnapISCSIDriver
qnap_management_url=https://10.10.10.2:443
qnap_poolname="Raid 5 Disk Volume: Drive 1 2 3 4"
qnap_storage_protocol=iscsi
qnap_server_port=443
iscsi_ip_address=10.10.10.2
san_login=<..>
san_password=<>

techcraft ( 2019-10-11 14:40:32 -0500 )

Congratulations!

Bernd Bausch ( 2019-10-12 04:17:15 -0500 )

answered 2019-10-11 02:53:11 -0500

techcraft

Thank you for the explanations; I'm really a newbie in OpenStack.

I tried your suggestion and changed cinder.conf in this way:

[lvmiscsi]
volume_backend_name=lvmiscsi
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=192.168.1.16
target_helper=lioadm
volume_group=cinder-volumes-iscsi
volumes_dir=/var/lib/cinder/volumes

[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=192.168.1.16
target_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes

But without success:

2019-10-11 09:37:05.264 2435 ERROR nova.compute.manager [req-c68a5be6-df65-4ab1-817a-177c60e30bf6 86628b0073f44211a63dc1dff6dcad2a 74e6c4c79ece48f481152ef9b312bc50 - default default] [instance: 99c03028-59a0-42d3-8f52-1119f44fc822] Instance failed block device setup: VolumeNotCreated: Volume 7c1011fd-ebd8-44cc-bc09-522b265b0eb9 did not finish being created even after we waited 0 seconds or 1 attempts. And its status is error.

The best result I reached was with cinder configured in this way:

[lvmiscsi]
volume_group=cinder-volumes-iscsi
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_prefix=iqn.2010-10.org.openstack:
target_port=3260
target_protocol=iscsi
target_helper=tgtadm
target_ip_address=10.10.10.2
volume_backend_name=LVM_iSCSI
volumes_dir=/var/lib/cinder/volumes

and configuring /etc/tgt/targets.conf:

default-driver iscsi

<target iqn.2004-04.com.qnap:ts-421:iscsi.openstack.e11392>
</target>

and /etc/tgt/conf.d/cinder.conf:

include /var/lib/cinder/volumes/*

In this way the process goes further, but it gets stuck in "spawning" with the error:

2019-10-10 19:54:43.807 2435 WARNING os_brick.initiator.connectors.iscsi [req-100322dd-01da-4bc7-aff1-734b0d639817 86628b0073f44211a63dc1dff6dcad2a 74e6c4c79ece48f481152ef9b312bc50 - default default] Failed to login iSCSI target iqn.2010-10.org.openstack:volume-5c253e45-76c6-4ea3-a04b-f9bd060109a1 on portal 10.10.10.2:3260 (exit code 8).: ProcessExecutionError: Unexpected error while running command.

I checked the ACL on the QNAP and it permits * in r/w.

I know there is a QNAP driver, but I tried to configure it and it's a nightmare.


Comments

VolumeNotCreated means literally that Nova fails to create a volume. More info in the Cinder logs.

What do you get with the same configuration when you create a volume manually with openstack volume create? Then attach that volume to an existing instance?

Bernd Bausch ( 2019-10-11 05:19:17 -0500 )


Thank you very much for the help

techcraft ( 2019-10-12 01:27:06 -0500 )
