
trigger's profile - activity

2015-11-02 13:43:46 -0500 received badge  Famous Question (source)
2015-05-04 02:51:16 -0500 received badge  Notable Question (source)
2015-04-21 04:07:26 -0500 received badge  Famous Question (source)
2015-03-12 04:52:51 -0500 commented answer Running separate cinder on the compute nodes

Yeah, that's what I thought, thanks for your input.

2015-03-12 03:10:52 -0500 commented answer Running separate cinder on the compute nodes

But then wouldn't that require one to explicitly select that availability zone when booting a new instance? I think so.
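
For illustration, explicitly selecting a zone at boot would look something like this (the zone name az-compute1 is hypothetical):

# boot an instance pinned to a specific availability zone
nova boot --availability-zone az-compute1 --image <image> --flavor <flavor> test-instance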

2015-03-11 11:09:05 -0500 received badge  Popular Question (source)
2015-03-09 02:45:41 -0500 asked a question Running separate cinder on the compute nodes

Hello,

Is it possible to run cinder-volume and nova-compute on the same compute node and pair them together, so that compute1 always uses cinder1, compute2 always uses cinder2, and so on? In other words, compute1 runs both cinder and nova-compute and is only allowed to use cinder1; some kind of binding.
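
For concreteness, the kind of binding I have in mind might look like per-host availability zones (a hypothetical sketch; the zone names are made up):

# compute1's /etc/cinder/cinder.conf
[DEFAULT]
storage_availability_zone = az-compute1

# compute2's /etc/cinder/cinder.conf
[DEFAULT]
storage_availability_zone = az-compute2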

Best regards

2015-02-01 07:08:13 -0500 received badge  Self-Learner (source)
2015-02-01 07:08:13 -0500 received badge  Teacher (source)
2015-01-30 04:34:27 -0500 received badge  Enthusiast
2015-01-27 09:56:36 -0500 answered a question ISCSI failure after restart of compute/cinder

The issue was that tgtadm failed to create the iSCSI target, so the volume entered the 'error' state, and Cinder does not try to create targets for, or otherwise "fix", volumes that are in the 'error' state. The issue shows up in Cinder's volume.log as:

2015-01-23 10:53:15.791 3193 ERROR cinder.brick.iscsi.iscsi [req-c3c06a94-c49e-470b-a956-cfc0802024bd - - - - -] Failed to recover attempt to create iscsi backing lun for volume id:iqn.2010-10.org.openstack:volume-5d51f748-aec8-4da7-a3d2-1083afb720b6: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/cinder-volumes/volume-5d51f748-aec8-4da7-a3d2-1083afb720b6
2015-01-23 10:53:15.871 3193 ERROR cinder.volume.manager [req-c3c06a94-c49e-470b-a956-cfc0802024bd - - - - -] Failed to re-export volume 5d51f748-aec8-4da7-a3d2-1083afb720b6: setting to error state
2015-01-23 10:53:15.872 3193 ERROR cinder.volume.manager [req-c3c06a94-c49e-470b-a956-cfc0802024bd - - - - -] Failed to create iscsi target for volume volume-5d51f748-aec8-4da7-a3d2-1083afb720b6.
2015-01-23 10:53:15.872 3193 TRACE cinder.volume.manager ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-5d51f748-aec8-4da7-a3d2-1083afb720b6.

A bug fix is already proposed here: https://review.openstack.org/#/c/147566/5

To solve the issue I did the following:

# reset the volume from 'error' back to 'in-use' so Cinder will manage it again
cinder reset-state --state in-use 5d51f748-aec8-4da7-a3d2-1083afb720b6
# restart cinder-volume so it re-exports the volume and recreates the tgt target
/etc/init.d/openstack-cinder-volume restart
# log the initiator back in to the recreated target
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-5d51f748-aec8-4da7-a3d2-1083afb720b6 -l
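
Afterwards you can sanity-check the result with something like the following (illustrative checks; they were not part of the fix itself):

# list the targets tgtd currently exposes
tgtadm --lld iscsi --op show --mode target
# confirm the initiator session is established
iscsiadm -m session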

Thanks to Avishay Traeger (avishay on #openstack-cinder) for the help.

2015-01-27 08:13:44 -0500 received badge  Notable Question (source)
2015-01-26 02:15:27 -0500 commented answer ISCSI failure after restart of compute/cinder

The command gives no output; it looks like Cinder does not re-export the targets after a reboot.

2015-01-23 15:57:52 -0500 received badge  Popular Question (source)
2015-01-23 09:34:25 -0500 commented answer ISCSI failure after restart of compute/cinder

I'm running tgtd with "include /etc/cinder/volumes/*" in my /etc/tgt/targets.conf, and in /etc/cinder/cinder.conf I have "iscsi_helper=tgtadm". Please see http://pastebin.com/EvxfM62N for additional information.
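
Laid out per file, that is (same settings as above):

# /etc/tgt/targets.conf
include /etc/cinder/volumes/*

# /etc/cinder/cinder.conf
[DEFAULT]
iscsi_helper = tgtadm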

2015-01-23 08:13:24 -0500 received badge  Editor (source)
2015-01-23 08:13:04 -0500 commented answer ISCSI failure after restart of compute/cinder

Hello,

Thanks for the effort; however, it's not even close to what I'm seeking help with. I also forgot to mention I'm running CentOS 6.5 with OpenStack Icehouse.

2015-01-23 07:04:20 -0500 asked a question ISCSI failure after restart of compute/cinder

Hello,

  • CentOS 6.5
  • OpenStack Icehouse

We are running compute hosts that act as both Nova compute and Cinder volume nodes. They get iSCSI LUNs from our SAN as shared storage, on which we create Cinder volumes.

After a compute node (running both compute and cinder) is restarted, iSCSI fails with the following errors; see http://pastebin.com/mM3UbaXB for the full logs:

  • ISCSI volume not yet found at: vda.
  • NovaException: iSCSI device not found at

What do I need to do in order for Cinder to set up the appropriate iSCSI connections again? The session is visible when checking "iscsiadm -m session":

tcp: [33] 192.168.100.10:3260,1 iqn.2010-10.org.openstack:volume-5d51f748-aec8-4da7-a3d2-1083afb720b6

However, I can't see the volume when running "fdisk -l", so iSCSI is not working properly: it's not seeing the logical volume. Does anybody have suggestions on how to resolve this?

I have tried rebooting several more times, and also restarting tgtd, libvirtd, etc. I got it working once, but after another restart it failed again, and I can't tell why it worked that one time; it just did.
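
For reference, the restarts I attempted look like this on CentOS 6 (illustrative; exact service names may vary):

# restart the iSCSI target daemon
/etc/init.d/tgtd restart
# restart libvirt
/etc/init.d/libvirtd restart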

Best regards

2014-12-16 09:22:10 -0500 answered a question Best way to use iSCSI-only SAN arrays for shared storage

Hello,

I would also like an answer to Joraff's questions; I'm seeking the same advice. I'm also looking for a way to use iSCSI without dedicated cinder (LVM) nodes, so that the only point of failure is the compute nodes and not the storage.

Best regards