Target restart destroyed my Volumes. How do I get them back?

asked 2014-11-26 12:23 by DaveKlotz, updated 2014-11-26 12:38

I restarted the "target" service on a CentOS OpenStack node (latest release) and now none of my instances can read their volumes. How do I reattach them properly? I can create new volumes, but the old ones no longer seem to be visible. In my target logs I see:

Failed to start Restore LIO kernel target configuration.

Each device is still listed under /dev/cinder-volumes/GUIDHERE.
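
A minimal way to inspect the state, assuming a CentOS 7 host with systemd and the targetcli tooling behind the target service:

    # Check why the LIO restore failed
    systemctl status target.service
    journalctl -u target.service

    # The saved LIO configuration the target service tries to restore on start
    cat /etc/target/saveconfig.json

    # Current (possibly empty) LIO configuration
    targetcli ls

    # Confirm the LVM-backed volumes still exist
    lvs cinder-volumes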


Comments

I also see: WARNING nova.virt.libvirt.driver [-] Periodic task is updating the host stats, it is trying to get disk info for instance-00000026, but the backing volume block device was removed by concurrent operations such as resize. Error: No volume Block Device Mapping at path: /dev/disk/by-path/i

DaveKlotz ( 2014-11-26 12:31 )
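
One quick check for whether those by-path device nodes, and the iSCSI sessions behind them, still exist, assuming the standard open-iscsi tooling on the compute node:

    # List the iSCSI sessions the compute node still has open
    iscsiadm -m session

    # Check whether the device nodes nova expects are still present
    ls -l /dev/disk/by-path/ | grep -i iscsi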

Edit your question as you find out more; don't use comments, because they are much less legible. https://ask.openstack.org/faq

smaffulli ( 2014-12-22 12:53 )

2 answers


answered 2014-11-26 19:07 by SamYaple

Hmm. You should be careful here. A few wrong steps with your data and it could all go away.

The correct way would be to let the services re-establish the connections themselves. Since I am not familiar enough with target to know what calls it and how, I would suggest the following:

  • Power off all instances that had volumes attached. You will likely have to force them off, since the guest OS may no longer be running if it was hosted on a volume that is now missing.
  • Reboot the node hosting your cinder-volume service.
  • Reboot your compute node.
  • Bring your instances back up through nova (nova start); see the command sketch below.

That should cycle all the appropriate services and allow the connections to be remade correctly.
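
A sketch of those steps as commands, assuming a CentOS 7 setup where the systemd units are named openstack-cinder-volume and openstack-nova-compute (unit names vary by distribution); INSTANCE stands for each affected instance's ID:

    # 1. Force off every instance that had a volume attached
    nova stop INSTANCE

    # 2. On the cinder-volume node (or simply reboot the node)
    systemctl restart openstack-cinder-volume target

    # 3. On the compute node (or simply reboot the node)
    systemctl restart openstack-nova-compute

    # 4. Bring the instances back up through nova
    nova start INSTANCE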

If at all possible, back up your data first. Always.


Comments

Hi,

Unfortunately, I am still not able to bring up the VMs. It looks like I have to re-mount them somehow, as the "# open" field shows nothing open when I run lvdisplay. Is there some manual process I can mount them with?

DaveKlotz ( 2014-12-01 06:38 )
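
If the volumes appear in lvdisplay but show no open count, one possible manual path is to activate the logical volume and reattach it through nova. This is only a sketch, assuming the volume group is cinder-volumes with the GUIDHERE placeholder from the question; INSTANCE and VOLUME_ID are placeholders too:

    # Activate a logical volume that lvdisplay reports as "NOT available"
    lvchange -ay /dev/cinder-volumes/GUIDHERE

    # Find the volume ID to reattach
    cinder list

    # Reattach the volume; "auto" lets nova pick the device name
    nova volume-attach INSTANCE VOLUME_ID auto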

answered 2014-12-22 10:40 by DaveKlotz

I'm not exactly sure what the root cause was, but whatever was included in the latest PackStack/openstack-juno release (2014.2) for cinder fixed my issue. I did a yum upgrade of all my OpenStack components and now I can reboot.
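
The upgrade itself amounted to something like the following; a sketch only, assuming the RDO Juno repository is already configured and the usual RDO package and unit names:

    # Pull in the updated Juno (2014.2) builds of all OpenStack packages
    yum upgrade 'openstack-*' python-cinderclient

    # Restart the services so they pick up the fix
    systemctl restart openstack-cinder-volume target openstack-nova-compute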
