RDO Juno instances with LVMiSCSI cinder backend don't resume after Controller (LVM Storage) reboot
An RDO Juno two-real-node (Controller+Compute) Gluster 3.5.2 cluster with ML2&OVS&VXLAN on CentOS 7 was set up for testing.
Details here http://bderzhavets.blogspot.com/2014/...
/etc/cinder/cinder.conf contains:
enabled_backends=gluster,lvm52
[gluster]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
volume_backend_name=GLUSTER
[lvm52]
iscsi_helper=lioadm
volume_group=cinder-volumes52
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI52
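After editing cinder.conf I restart the cinder services and check that both backends are registered; a minimal sketch, assuming the standard RDO Juno unit names and the packstack keystonerc_admin file:
source ~/keystonerc_admin
systemctl restart openstack-cinder-volume openstack-cinder-scheduler
cinder service-list   # expect cinder-volume entries like juno1@gluster and juno1@lvm52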
VG cinder-volumes52 is located on /dev/sda6 of the Controller (/dev/sda3, /dev/sda5).
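For reference, roughly how the VG was prepared on the Controller; a sketch assuming /dev/sda6 is a dedicated partition (not the exact commands from my notes):
pvcreate /dev/sda6                   # mark the partition as an LVM physical volume
vgcreate cinder-volumes52 /dev/sda6  # create the VG cinder uses for the lvm52 backend
vgs cinder-volumes52                 # confirm the VG exists and has free space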
VMs with both cinder backends work fine until I reboot the Controller (with the VMs suspended). After the Controller node reboot, VMs with the Gluster backend resume with no problems. VMs with the LVMiSCSI backend fail to resume.
/var/log/nova/nova-compute.log on the Compute node reports:
2014-11-02 20:05:11.955 4311 INFO nova.scheduler.client.report [req-dc389e4d-28f8-4c8c-a53c-f6edced42e5d None] Compute_service record updated for ('juno2.localdomain', 'juno2.localdomain')
2014-11-02 20:05:11.962 4311 ERROR oslo.messaging.rpc.dispatcher [req-dc389e4d-28f8-4c8c-a53c-f6edced42e5d ] Exception during message handling: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-8ade9f17-163d-48ca-bea5-bc9c6ea99b17 -p 192.168.1.127:3260 --rescan
Exit code: 21
Stdout: u''
Stderr: u'iscsiadm: No session found.\n'
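When iscsiadm reports "No session found", I check on the Compute node whether any session exists and whether the portal is still discoverable; these are standard open-iscsi commands, and the portal IP is the iscsi_ip_address from cinder.conf above:
iscsiadm -m session                                          # list active iSCSI sessions, if any
iscsiadm -m discovery -t sendtargets -p 192.168.1.127:3260   # is the target portal still answering?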
I've found the thread "cinder broken iscsiadm: No session found". That's exactly the situation I am experiencing now; iscsi and iscsiadm on the Compute node give the same diagnostic. However, there is no longer a tgtd daemon on the Juno Controller (LVM storage), and there is no tgtadm either. It is not installed with Juno and, I guess, not supposed to be used.
`lsof -i -n | grep iscsi` on the Controller gives empty output.
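With iscsi_helper=lioadm the Controller exports volumes via the LIO kernel target rather than tgtd, so on the Controller I would rather check whether the LIO configuration survived the reboot; a sketch, assuming the stock targetcli/target.service tooling on CentOS 7:
targetcli ls                      # are the iqn.2010-10.org.openstack:volume-* targets still defined?
systemctl status target.service   # the unit that restores /etc/target/saveconfig.json at boot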
An attempt to install targetd on CentOS 7 (see http://theurbanpenguin.com/wp/?p=3403):
yum install targetd
systemctl enable targetd
systemctl start targetd
[root@juno1 ~(keystone_admin)]# systemctl status targetd -l
targetd.service - targetd storage array API daemon
Loaded: loaded (/usr/lib/systemd/system/targetd.service; enabled)
Active: failed (Result: exit-code) since Tue 2014-11-04 15:31:06 MSK; 16min ago
Process: 1767 ExecStart=/usr/bin/targetd (code=exited, status=1/FAILURE)
Main PID: 1767 (code=exited, status=1/FAILURE)
CGroup: /system.slice/targetd.service
Nov 04 15:31:06 juno1.localdomain targetd[1767]: Traceback (most recent call last):
Nov 04 15:31:06 juno1.localdomain targetd[1767]: File "/usr/bin/targetd", line 24, in <module>
Nov 04 15:31:06 juno1.localdomain targetd[1767]: sys.exit(main())
Nov 04 15:31:06 juno1.localdomain targetd[1767]: File "/usr/lib/python2.7/site-packages/targetd/main.py", line 209, in main
Nov 04 15:31:06 juno1.localdomain targetd[1767]: load_config(default_config_path)
Nov 04 15:31:06 juno1.localdomain targetd[1767]: File "/usr/lib/python2.7/site-packages/targetd/main.py", line 170, in load_config
Nov 04 15:31:06 juno1.localdomain targetd[1767]: config[key] = value
Nov 04 15:31:06 juno1.localdomain targetd[1767]: TypeError: 'str' object does not support item assignment
Nov 04 15:31:06 juno1.localdomain systemd[1]: targetd.service: main process exited, code=exited, status=1/FAILURE ...
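The traceback points at load_config() in targetd, so I would look at the config file it parses (to my knowledge the default path is /etc/target/targetd.yaml); if the YAML comes back as a plain string instead of a dict, config[key] = value fails exactly as above:
cat /etc/target/targetd.yaml
python -c "import yaml; print(type(yaml.safe_load(open('/etc/target/targetd.yaml'))))"   # should be a dict, not a str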
I'm having this same problem with an all-in-one install in our local office. CentOS 7, RDO Juno. I can remove volumes just fine so it doesn't seem to be related to the bug mentioned below. Were you able to resolve this or did you switch cinder backends?
Please view http://lxer.com/module/newswire/view/...