
Openstack folsom cinder volumes fail after node reboot

asked 2013-06-14 10:15:27 -0500 by david-butler

I have a 2-node OpenStack install with working cinder volumes, based on http://openstack-folsom-install-guide.readthedocs.org/en/latest/ with some minor fixes for the volume naming and the sysfsutils install.

If I reboot the control and compute nodes and restart the services, the cinder volumes fail.

I can no longer create or attach volumes.

Any suggestions on a fix for this?

My nova.conf and cinder.conf are as follows:

root@PoC7:/home/user# more /etc/nova/nova.conf
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
scheduler_driver=nova.scheduler.simple.SimpleScheduler
s3_host=192.168.40.207
ec2_host=192.168.40.207
ec2_dmz_host=192.168.40.207
rabbit_host=192.168.40.207
cc_host=192.168.40.207
metadata_host=192.168.40.207
metadata_listen=0.0.0.0
nova_url=http://192.168.40.207:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@192.168.40.207/nova
ec2_url=http://192.168.40.207:8773/services/Cloud
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://192.168.40.207:5000/v2.0/ec2tokens

# Imaging service
glance_api_servers=192.168.40.207:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://192.168.40.207:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=192.168.40.207
vncserver_listen=0.0.0.0

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Change my_ip to match each host
my_ip=192.168.40.207
public_interface=br100
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
# Note the different pool, this will be used for instance range
fixed_range=192.168.30.0/24

# Compute
compute_driver=libvirt.LibvirtDriver
ram_allocation_ratio=1.0
cpu_allocation_ratio=1.0

# Cinder
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
volume_name_template=volume-%s

root@PoC7:/home/user# more /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@192.168.40.207/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
#iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
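One thing worth double-checking in the config above: iscsi_helper has to match the target daemon that is actually installed and running on the node. A sketch of the two options, where the package names are assumptions for a typical Ubuntu 12.04 / Folsom install:

```ini
# /etc/cinder/cinder.conf -- pick ONE helper, matching the installed package
# (package names are assumptions for Ubuntu 12.04 / Folsom):
#   iscsitarget + iscsitarget-dkms  ->  iscsi_helper=ietadm
#   tgt                             ->  iscsi_helper=tgtadm
iscsi_helper=ietadm
```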

I realise everyone is probably working on Grizzly now, but any help would be appreciated.

Dave.


4 answers


answered 2013-06-18 13:07:45 -0500 by david-butler

Patrick,

Thanks for the info.

I'm using ietadm rather than tgtadm, but I presume it is the same bug?

Am I correct in thinking that the fix is in Cinder 2013.1 "grizzly" and there is no work around for folsom?

Dave.


answered 2013-06-18 12:30:59 -0500 by patrickvinas

Take a look at this bug: https://bugs.launchpad.net/cinder/+bug/1175207

It looks like that fix was backported to Folsom.


answered 2013-06-17 14:42:33 -0500 by patrickvinas

I'd recommend adding

enabled_apis = osapi_compute,metadata,<whatever other APIs>

to your nova.conf, leaving off osapi_volume. I think I remember reading in one of the docs that you can get conflicts between nova-volume and cinder-volume otherwise.
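As a concrete sketch of that change, assuming the EC2 API is also wanted (the exact list depends on which APIs the deployment actually uses):

```ini
# /etc/nova/nova.conf -- leave osapi_volume out so it cannot collide
# with the standalone cinder-api service
enabled_apis=ec2,osapi_compute,metadata
```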

Are there any errors in the logs to indicate what's failing?

Is the iscsitarget service coming back up before the cinder-volume service (or at all)?
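If iscsitarget only comes up after cinder-volume, the iSCSI targets won't exist when cinder tries to re-export volumes at boot. One way to sanity-check SysV boot ordering is to compare the rc sequence numbers of the two init links; a minimal sketch, where the link names S20iscsitarget and S98cinder-volume (and the /etc/rc2.d location) are assumptions for a typical Ubuntu 12.04 install:

```shell
# Sketch: does iscsitarget start before cinder-volume?
# On the real node, find the actual link names first:
#   ls /etc/rc2.d | grep -E 'iscsitarget|cinder-volume'
starts_before() {
    # Extract the two-digit sequence number from names like S20iscsitarget
    a=$(printf '%s' "$1" | sed 's/^S\([0-9][0-9]\).*/\1/')
    b=$(printf '%s' "$2" | sed 's/^S\([0-9][0-9]\).*/\1/')
    [ "$a" -lt "$b" ]
}

if starts_before S20iscsitarget S98cinder-volume; then
    echo "iscsitarget starts first"
else
    echo "check your rc ordering"
fi
```

If the ordering is wrong, the sequence numbers can be adjusted with update-rc.d, or cinder-volume can simply be restarted by hand after boot as a quick test.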


answered 2013-06-18 09:24:39 -0500 by david-butler

Patrick,

I added the following to nova.conf on the control (PoC7) and compute (PoC20) nodes:

volume_group = cinder-volumes
enabled_apis=ec2,osapi_compute,metadata

I still cannot use the cinder volumes after a reboot, and get this error:

2013-06-18 10:05:09 1770 TRACE cinder Error: Got unknown keyword args to utils.execute: {'old_name': None}

The iscsi daemon seems to be running; see detail below:

root@PoC7:/home/pocuser# ps -elf | grep iscsi
1 S root 1414    2  0 60 -20 -     0 rescue 10:04 ?        00:00:00 [iscsi_eh]
1 S root 1428    1  0 80   0 -  1306 hrtime 10:04 ?        00:00:00 /usr/sbin/iscsid
5 S root 1432    1  0 70 -10 -  1438 poll_s 10:04 ?        00:00:00 /usr/sbin/iscsid

root@PoC20:/home/pocuser# ps -elf | grep iscsi
1 S root 1119    2  0 60 -20 -     0 rescue 10:04 ?        00:00:00 [iscsi_eh]
1 S root 1127    1  0 80   0 -  1306 hrtime 10:04 ?        00:00:00 /usr/sbin/iscsid
5 S root 1128    1  0 70 -10 -  1438 poll_s 10:04 ?        00:00:00 /usr/sbin/iscsid
0 S root 2437 2424  0 80   0 -  2349 pipe_w 10:09 pts/0    00:00:00 grep --color=auto iscsi

root@PoC7:/var/log/cinder# cat *.log | grep Error
2013-06-18 10:05:09 1770 TRACE cinder Error: Got unknown keyword args to utils.execute: {'old_name': None}

root@PoC7:/var/log/nova# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               cinder-volumes
  PV Size               2.00 TiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              524287
  Free PE               514047
  Allocated PE          10240
  PV UUID               gZxSeF-MSPq-Vv7M-tQqc-NfJv-Euou-BMVzZd

root@PoC7:/var/log/nova# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.00 TiB
  PE Size               4.00 MiB
  Total PE              524287
  Alloc PE / Size       10240 / 40.00 GiB
  Free  PE / Size       514047 / 1.96 TiB
  VG UUID               1tVgkA-GRS3-FEKB-Npl6-Cc2O-A9im-i0pjhx

root@PoC7:/var/log/nova# cd /var/log/cinder
root@PoC7:/var/log/cinder# cat *.log | grep Error
2013-06-18 10:05:09 1770 TRACE cinder Error: Got unknown keyword args to utils.execute: {'old_name': None}
root@PoC7:/var/log/cinder# lvdisplay
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-e6ebd093-61d4-4dea-8ca6-33080d0945ae
  LV Name                volume-e6ebd093-61d4-4dea-8ca6-33080d0945ae
  VG Name                cinder-volumes
  LV UUID                ClahdO-eZt7-osQ1-RgmQ-vBfj-1RzY-cc9rvf
  LV Write Access        read/write
  LV Creation host, time PoC7, 2013-06-14 11:46:37 +0100
  LV snapshot status     source of _snapshot-9f1e0dae-9905-424d-bb71-6f2b9796ad19 [active]
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/cinder-volumes/_snapshot-9f1e0dae-9905-424d-bb71-6f2b9796ad19
  LV Name                _snapshot-9f1e0dae-9905-424d-bb71-6f2b9796ad19
  VG Name                cinder-volumes
  LV UUID                UM8j2R-LH0B-zH6P-nXB9-p9O1-fX24-EwuSfc
  LV Write Access        read/write
  LV Creation host, time PoC7, 2013-06-14 13:47:44 +0100
  LV snapshot status     active destination for volume-e6ebd093-61d4-4dea-8ca6-33080d0945ae
  LV Status              available
  # open                 ... (more)
