Can't start an instance after manually detaching a volume; it is still looking for the volume
I have an instance that had some problems with a volume. I detached the volume with:

cinder reset-state --attach-status detached [ID_Volume]

and then deleted it:

cinder delete [ID_Volume]

Now the instance is in an error state, so I reset it to active and tried to start it with a hard reboot, but it's still trying to find the missing volume:
2020-08-28 02:30:19.886 6 INFO nova.compute.manager [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] [instance: 3b86e532-176a-4ee7-bfd4-cf625191911b] Rebooting instance
2020-08-28 02:30:21.052 6 WARNING nova.compute.manager [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] [instance: 3b86e532-176a-4ee7-bfd4-cf625191911b] trying to reboot a non-running instance: (state: 0 expected: 1)
2020-08-28 02:30:21.237 6 INFO nova.virt.libvirt.driver [-] [instance: 3b86e532-176a-4ee7-bfd4-cf625191911b] Instance destroyed successfully.
2020-08-28 02:30:21.312 6 INFO os_vif [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:23:ba:73,bridge_name='qbra00218a5-69',has_traffic_filtering=True,id=a00218a5-69ca-48ee-953f-2fa6d368a192,network=Network(1440cf9e-52e6-4917-88bf-975d7f630e87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00218a5-69')
2020-08-28 02:30:21.365 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Couldn't find iSCSI nodes because iscsiadm err: iscsiadm: No records found
2020-08-28 02:30:21.405 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2020-08-28 02:30:21.451 6 INFO os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Trying to connect to iSCSI portal 172.16.230.2:3260
2020-08-28 02:30:21.735 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2020-08-28 02:30:22.799 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Failed to login iSCSI target iqn.2010-10.org.openstack:volume-b0be29ca-dbfd-4aed-aaf1-8fde34c93ed8 on portal 172.16.230.2:3260 (exit code 19).: ProcessExecutionError: Unexpected error while running command.
2020-08-28 02:30:22.800 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Failed to connect to iSCSI portal 172.16.230.2:3260.
2020-08-28 02:30:22.893 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2020-08-28 02:30:25.018 6 INFO os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Trying to connect to iSCSI portal 172.16.230.2:3260
2020-08-28 02:30:25.352 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2020-08-28 02:30:26.409 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Failed to login iSCSI target iqn.2010-10.org.openstack:volume-b0be29ca-dbfd-4aed-aaf1-8fde34c93ed8 on portal 172.16.230.2:3260 (exit code 19).: ProcessExecutionError: Unexpected error while running command.
2020-08-28 02:30:26.410 6 WARNING os_brick.initiator.connectors.iscsi [req-2bdbd99b-bfce-47b1-9b20-c3d909171a9e dfff51be035147dd9a0d60f6ad01312f f590b293a0ca4cf8b209c6bff7160811 - default default] Failed to connect to iSCSI portal 172.16.230.2:3260.: VolumeDeviceNotFound: Volume device not ...
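For reference, the state reset and hard reboot I attempted looked roughly like this (I'm using the openstack client here; the exact client and credentials in my environment may differ):

```shell
# Reset the instance out of the error state (requires admin credentials).
# nova CLI equivalent: nova reset-state --active <instance-uuid>
openstack server set --state active 3b86e532-176a-4ee7-bfd4-cf625191911b

# Then try to start it again with a hard reboot.
openstack server reboot --hard 3b86e532-176a-4ee7-bfd4-cf625191911b
```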
It's hard to tell what your actions may have caused. Do you have access to the database? You could take a look into the nova database, table block_device_mapping, and paste the output here. Are you sure deleting the detached volume caused this, or was there a problem beforehand? Are all services up?

All servers are up, and I'm not 100% sure that deleting the volume caused it. Here is the table you requested... but it seems I have to find out how to make KVM put the instance 3b86e532-176a-4ee7-bfd4-cf625191911b into its inventory, or something like that (look at update 2 I just added).