All volumes are in error state after reboot

asked 2020-06-21 10:58:17 -0500

Mik

updated 2020-06-22 01:44:11 -0500

Hi. After I updated my Rocky OpenStack installation to the latest version (Cinder from 13.0.5 to 13.0.9) and rebooted the server, all my volumes have status error. I can create new volumes and VMs without any problems, but every volume that existed before the reboot has status error.

example cinder list:

| ID                                   | Status    | Name                                  | Size | Volume Type | Bootable | Attached to                          |
| eb42dbd8-72f3-4df2-9e03-21e6d5c98039 | error     | db-master-binlogs                     | 20   | iscsi       | false    | b931ff78-96b2-41ef-8296-ef4ddca66407 |
| ee08ac0b-c22a-474d-9a48-6a0d32ecdbc6 | in-use    |                                       | 20   | iscsi       | true     | b881e0cd-96a9-4715-90cd-5c56306ae333 |

The volume with status in-use was created after the reboot for testing; the one with status error existed before the reboot. All the LVM volumes are still visible:

  LV                                             VG             Attr       LSize  Pool                                                               Data%
  volume-eb42dbd8-72f3-4df2-9e03-21e6d5c98039    cinder-volumes Vwi-a-tz--  20.00g cinder-volumes-pool                                                95.23
  volume-ee08ac0b-c22a-474d-9a48-6a0d32ecdbc6    cinder-volumes Vwi-aotz--  20.00g cinder-volumes-pool                                                10.10

Comments

What is the state of the Cinder services? What do you find in the Cinder logs? How did you create this cloud, and how did you update it?
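For the service state, something like this should show whether every cinder-volume and cinder-scheduler binary is up (assuming python-openstackclient or python-cinderclient is available on the controller):

    openstack volume service list
    # or, with the legacy client:
    cinder service-list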

Please use code formatting as documented in the grey box at the top of https://ask.openstack.org/en/question.... Currently, your code is very hard to read.

Bernd Bausch ( 2020-06-21 19:43:28 -0500 )

"I can create new volumes and VMs without any problems"

I missed that detail. Most likely your Cinder services are all up. Still, find log messages for the error volumes.
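For example, grep the Cinder and Nova logs for one of the error volume IDs (log paths assume a typical RPM-based deployment and may differ in yours):

    grep eb42dbd8-72f3-4df2-9e03-21e6d5c98039 /var/log/cinder/*.log
    grep eb42dbd8-72f3-4df2-9e03-21e6d5c98039 /var/log/nova/nova-compute.log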

Bernd Bausch ( 2020-06-21 21:03:57 -0500 )

1 answer


answered 2020-06-22 01:54:33 -0500

Mik

I've found something in the nova-compute log:

2020-06-21 19:02:30.859 2962 WARNING os_brick.initiator.connectors.iscsi [req-920e16b6-3232-472c-8f5c-b7d21d3e3210 3d36daa70a6c4707a2aaefedfe7c2910 c5dc1b84c8354984b1c7c4563f91b711 - default default] LUN 0 on iSCSI portal 217.xx.xx.xx:3260 not found on sysfs after logging in.: VolumeDeviceNotFound: Volume device not found at .

2020-06-21 19:02:30.950 2962 WARNING os_brick.initiator.connectors.iscsi [req-920e16b6-3232-472c-8f5c-b7d21d3e3210 3d36daa70a6c4707a2aaefedfe7c2910 c5dc1b84c8354984b1c7c4563f91b711 - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.

    [root@linux1 nova]# iscsiadm --mode discovery --type sendtargets --portal 217.xx.xx.xx:3260

    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-4090ce1b-4a54-4cb8-9113-ba38513914ff
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-7e5d0963-5c37-4011-a00f-35842078848b
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-b1753ecb-e1cd-404e-9bef-bfadf1c199a5
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-9ec28152-50cc-4249-9d99-c056b787e059
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-e9292d3f-4fa0-4068-ac74-2e66e85659e0
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-0629cd6d-7624-40c1-8221-92a1f76bc412
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-36021303-c34d-4bc0-8c48-3f568848fe5e
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-add60c3f-1a03-47dd-ad57-6fbf7992d58b
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-98af0c86-7c43-4beb-a594-808ca8e354e8
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-ce50b975-e770-4cc2-8394-a21ca70efeca
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-6be50373-92a1-4bce-9a25-eba466611d35
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-e7c66de1-2c17-42e7-975a-f03389928112
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-79ded997-8fd3-4a61-85fa-e2b344f1c47e
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-73f5eebf-da95-47dd-bcf8-960ad9431995
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-68e105dd-8194-4d1a-ae52-92501f30a3e1
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-d7b6cec0-a8c7-4b42-b0f6-e63abf7f390d
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-8398196a-66a7-4e91-8f09-85eeb6801c15
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-eb42dbd8-72f3-4df2-9e03-21e6d5c98039
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-889cd3c3-6dfe-417c-ad73-db7138bcbea2
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-04cc24ad-9e05-413c-845b-52964e8387ae
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-3ee5ca3a-0d79-4d65-8968-ac6a8017a431
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-8a15b6dd-bc3e-4d88-82d2-fc1779d391cd
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-12f43f62-db1a-4974-af87-a78bd3d164a0
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-bb0cda5a-9fa8-485c-b60f-95b6210c4504
    217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-d2b2068e-8a85-4cb8-9a18-d59a70773b57

    [root@linux1 nova]# iscsiadm -m session

    tcp: [26] 217.xx.xx.xx:3260,1 iqn.2010-10.org.openstack:volume-4090ce1b-4a54-4cb8-9113-ba38513914ff (non-flash)
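So discovery sees a target for every volume, but only one session is actually logged in, and the os-brick warning says LUN 0 is missing even after a login. A manual check along these lines can confirm that a target exposes no LUN (a sketch, using the error volume's IQN from the discovery output above; adjust to your setup):

    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-eb42dbd8-72f3-4df2-9e03-21e6d5c98039 \
             -p 217.xx.xx.xx:3260 --login
    # after a successful login a block device should appear; no match here
    # means the target has no LUN attached behind it
    ls /dev/disk/by-path/ | grep eb42dbd8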


Comments

Could this be a race condition? That is, the iSCSI target wasn't ready when Cinder wanted to connect to it?

Bernd Bausch ( 2020-06-22 07:32:19 -0500 )

I fixed this using the Red Hat solution https://access.redhat.com/solutions/3431431. The backstores were not connected to the iSCSI LUNs in targetcli, and that mapping wasn't persistent across the reboot.
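A minimal sketch of that kind of repair, assuming Cinder's usual LIO naming where the block backstore carries the same name as the target IQN (verify the real names with targetcli ls first; the IQN below is illustrative):

    # list the block backstores that survived the reboot
    targetcli /backstores/block ls
    # re-attach one backstore to its target's LUN list
    targetcli /iscsi/iqn.2010-10.org.openstack:volume-eb42dbd8-72f3-4df2-9e03-21e6d5c98039/tpg1/luns \
        create /backstores/block/iqn.2010-10.org.openstack:volume-eb42dbd8-72f3-4df2-9e03-21e6d5c98039
    # persist the configuration so it survives the next reboot
    targetcli saveconfig

Once the targets are back, volumes stuck in error may also need their state reset, e.g. cinder reset-state --state available <volume-id>; that step isn't from this thread, just a common follow-up.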

Mik ( 2020-06-22 12:43:26 -0500 )

Congratulations!

Bernd Bausch ( 2020-06-22 17:07:16 -0500 )
