Target restart destroyed my Volumes. How do I get them back?

asked 2014-11-26 12:23:21 -0600

DaveKlotz

updated 2014-11-26 12:38:04 -0600

I restarted the "target" service on a CentOS OpenStack instance (latest) and now none of my instances can read their volumes! How do I reattach them properly? I can create new volumes, but the old ones can apparently no longer be seen. I see: Failed to start Restore LIO kernel target configuration.

in my target logs, and each device is still listed under /dev/cinder-volumes/GUIDHERE
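For anyone hitting the same symptoms, they can be confirmed with something like the following (a hedged sketch; it assumes the default LVM backend with the `cinder-volumes` volume group from above, and systemd on CentOS 7):

```shell
# Check why the LIO target failed to restore its configuration
systemctl status target
journalctl -u target --no-pager | tail -n 20

# The backing logical volumes should still exist even though
# their iSCSI exports are gone
lvs cinder-volumes
ls /dev/cinder-volumes/
```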



I also see: WARNING nova.virt.libvirt.driver [-] Periodic task is updating the host stats, it is trying to get disk info for instance-00000026, but the backing volume block device was removed by concurrent operations such as resize. Error: No volume Block Device Mapping at path: /dev/disk/by-path/i

DaveKlotz ( 2014-11-26 12:31:29 -0600 )

Edit your question as you find out more; don't use comments, because they are much less legible.

smaffulli ( 2014-12-22 12:53:27 -0600 )

2 answers


answered 2014-11-26 19:07:10 -0600

SamYaple

Hmm. You should be careful here. A few wrong moves and your data could all go away.

The correct way would be to let the services re-establish the connections themselves. Since I am not familiar enough with target to know what calls it and how, I would suggest the following:

  • Power off all instances that had volumes attached. You will likely have to force them off, since the guest OS may no longer be running if it was hosted on a volume that is now missing
  • Reboot the node hosting your cinder-volume service
  • Reboot your compute node
  • Bring your instances back up through nova (nova start)

That should cycle all the appropriate services and allow the connections to be remade correctly.
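The steps above can be sketched with the nova CLI (instance IDs are placeholders, and the cinder and compute hosts are rebooted out-of-band, e.g. over SSH):

```shell
# 1. Force off every instance that had a volume attached
nova stop <instance-id>        # repeat for each affected instance

# 2. Reboot the node running cinder-volume
ssh cinder-node 'sudo reboot'

# 3. Reboot the compute node
ssh compute-node 'sudo reboot'

# 4. Once both nodes are back up, start the instances again
nova start <instance-id>       # repeat for each affected instance
```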

If at all possible, backup your data first. Always.




Unfortunately, I am still not able to bring up the VMs. It looks like I have to re-mount them somehow, as I don't see anything in the "# open" field when I run lvdisplay. Is there some manual process I can mount them with?

DaveKlotz ( 2014-12-01 06:38:42 -0600 )
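For reference, checking the open count and manually activating a volume look roughly like this (the volume group name comes from the question; the LV name is a placeholder, and note that activating an LV by itself does not recreate the iSCSI export that nova attaches to):

```shell
# Show whether any process has the LV open (the "# open" field)
lvdisplay /dev/cinder-volumes/<volume-guid>

# Make sure the LV is at least activated; this alone will not
# re-export it over iSCSI to the instance
lvchange -ay /dev/cinder-volumes/<volume-guid>

# Confirm all LVs in the group now show as ACTIVE
lvscan | grep cinder-volumes
```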

answered 2014-12-22 10:40:05 -0600

DaveKlotz

Not exactly sure what the root cause was, but whatever was included in the latest PackStack/openstack-juno release (2014.2) for cinder fixed my issue. I did a yum upgrade of all my OpenStack components and now I can reboot.
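For anyone else, the upgrade amounted to something like the following (the package glob and service names are assumptions; the exact package set depends on your deployment):

```shell
# Upgrade all installed OpenStack packages to the latest Juno (2014.2) builds
yum upgrade 'openstack-*'

# Then restart the affected services, e.g. on an RDO-style install:
systemctl restart openstack-cinder-volume target
```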



Seen: 382 times

Last updated: Dec 22 '14