unable to attach volume to instance

asked 2019-08-27 04:44:36 -0500 by tonyp12

I have a Packstack all-in-one Queens host. It is a single host acting as controller and compute, and it runs iSCSI connections to its own single IP address. For the most part it works well. However, free memory has been depleted a few times under load and occasional issues have begun to appear. One of them is the following:

At the moment I have one volume that cannot be attached to an instance. It was previously attached and running, but now it cannot be reattached.

The command openstack server add volume 0f63dea0-d6fa-433d-be7f-dbaaef5423a7 73f060fc-e347-4e08-bb63-b399d261b6c8 produces no output, so it appears successful. However, a volume list shows the volume as not attached.
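In case it helps, this is the check I run after each attach attempt (a minimal sketch, assuming the standard openstack CLI; the volume ID is the same one as above):

openstack volume show 73f060fc-e347-4e08-bb63-b399d261b6c8 -c status -c attachments

If the attach had worked, I would expect the attachments field to list the instance.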

So far I have found this in the cinder API log:

2019-08-27 14:43:39.356 185098 INFO cinder.api.openstack.wsgi [req-4740850b-5da4-435f-9b6e-1a45c6fd6f6c 729c0f68314683dbe285c77db9a1b25af9c16637c32f72e8c11a006a2d5d5310 a3b5c4b6770c4d4dabbde06c55c5d0d1 - 04675d204cd0428394b1db1d89ad0f1f 04675d204cd0428394b1db1d89ad0f1f] GET http://192.168.33.235:8776/v3/a3b5c4b6770c4d4dabbde06c55c5d0d1/volumes/73f060fc-e347-4e08-bb63-b399d261b6c8
2019-08-27 14:43:39.356 185098 DEBUG cinder.api.openstack.wsgi [req-4740850b-5da4-435f-9b6e-1a45c6fd6f6c 729c0f68314683dbe285c77db9a1b25af9c16637c32f72e8c11a006a2d5d5310 a3b5c4b6770c4d4dabbde06c55c5d0d1 - 04675d204cd0428394b1db1d89ad0f1f 04675d204cd0428394b1db1d89ad0f1f] Empty body provided in request get_body /usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:718

I think the key part is Empty body provided in request get_body.

The above lines always appear in the cinder API log when I attempt: (openstack) server add volume 0f63dea0-d6fa-433d-be7f-dbaaef5423a7 73f060fc-e347-4e08-bb63-b399d261b6c8

I googled for the openstack volume attach process and found this page: https://docs.openstack.org/cinder/pik...

From what I understand, OpenStack sets up the iSCSI target, saves the CHAP credentials somewhere and then uses them to log in to the target. I tried to attach the volume again and repeatedly ran:

[root@sl-len-01-p cinder]# cinder-rtstool get-targets | wc -l
46

The count then went up to 47, and I confirmed with:

cinder-rtstool get-targets | grep 73f060fc | wc -l
1

So it looks like the target is being created. The API log shows this, although shortly afterwards it also shows the target being deleted, and the get-targets count for that volume goes back to 0.
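To see the create/delete sequence in one place, I grep the volume log for the volume ID (a rough sketch, assuming the default /var/log/cinder/volume.log location):

grep -iE "iscsi_target|73f060fc" /var/log/cinder/volume.log | tail -n 20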

My conclusion is that iSCSI authentication is failing, but I have not seen any such message in the cinder logs, and I have not located an iSCSI log either. I googled how to enable debug logging and found a how-to post for CentOS (which we use) on starting iscsid in debug mode, but I think that would affect running instances, so I have avoided it.
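Since the initiator side of the attach is handled by nova-compute on the same host, the next thing I plan to check is its log for the iscsiadm calls around the attach (a sketch, assuming the default Packstack log path):

grep -iE "iscsiadm|73f060fc" /var/log/nova/nova-compute.log | tail -n 50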

In cinder.conf we have lock_path=/var/lib/cinder/tmp, but in that directory there is always a zero-byte file:

[root@sl-len-01-p cinder]# ls -lah /var/lib/cinder/tmp
total 252K
drwxr-xr-x. 2 cinder cinder   27 Jul  4  2018 .
drwxr-xr-x. 5 cinder cinder 208K Aug 27 15:08 ..
-rw-r--r--. 1 cinder cinder    0 Jul  4  2018 cinder-lioadm

And the iSCSI LVM config from cinder.conf:

enabled_backends=lvm

[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_ip_address=192.168.33.235
iscsi_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes
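Since iscsi_helper=lioadm, the targets live in the kernel LIO configuration, so they can also be inspected directly (a sketch, assuming targetcli is installed, as it normally is alongside the lioadm helper):

targetcli ls /iscsi

While the target for this volume briefly exists, this should list an IQN like iqn.2010-10.org.openstack:volume-73f060fc-... together with its ACLs and portal.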

Could anyone point me in the right direction? I am a bit stuck at this point.


Comments

What's in the cinder-volume.log? Can you also check openstack volume service list?

eblock ( 2019-08-27 07:05:25 -0500 )

volume service list shows all services enabled/up: cinder-scheduler, cinder-backup, cinder-volume

tonyp12 ( 2019-08-27 09:18:52 -0500 )

There is a volume.log in /var/log/cinder. It shows:
Creating iscsi_target for volume: volume-73f060fc-e347
attachment_update completed successfully.
Terminate volume connection completed successfully
Removing iscsi_target: 73f060fc-e347

All for the same volume in question. How can I increase verbosity?

tonyp12 ( 2019-08-27 09:22:03 -0500 )

Set debug = true in your /etc/cinder/cinder.conf and restart cinder services.
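Something like this (a sketch, assuming the usual RDO/Packstack unit names):

# in /etc/cinder/cinder.conf, under [DEFAULT]:
debug = True

# then restart the cinder services:
systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume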

eblock ( 2019-08-28 04:01:22 -0500 )

The same messages appear there in the volume log, saying the target is created and then deleted. There are log lines in between such as "CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix cinder-volumes" returned: 0"

tonyp12 ( 2019-08-28 20:53:37 -0500 )

I don't understand it and cannot post the full log output here, but it looks like it sets up the iSCSI connection, runs some commands and then tears it down. It does not mention trying to log in, or any iSCSI login failures.

tonyp12 ( 2019-08-28 20:54:30 -0500 )

Could it be that your volume-group is full or something like that? Can you show the output of vgs | grep cinder?

eblock ( 2019-08-29 02:07:01 -0500 )

Sure!

[root@sl-len-01-p ~]# vgs | grep cinder
          cinder-volumes   1  94   0 wz--n- <1.03t 52.48g

Could that be the problem? I was also checking df -h:

[root@sl-len-01-p ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  1.3T 1019G  265G  80%

tonyp12 ( 2019-08-29 02:46:07 -0500 )

Are the iscsi targets on a different host than your controller (iscsi_ip_address=192.168.33.235)? Can you see the desired volume connected? compute:~ # iscsiadm -m session should show something like iqn.2010-10.org.openstack:volume-<your_id> if the volume is connected.
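For more per-session detail (iscsiadm's -P 3 output includes the connection state and the attached block devices for each session):

iscsiadm -m session -P 3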

eblock ( 2019-09-02 05:07:49 -0500 )