
Can you run live migration between servers using Cinder-backed instances?

asked 2013-08-01 05:47:20 -0500 by Sifty

Is it possible to live migrate Cinder-backed instances?

I have tried, but no luck. I was given to understand that Cinder presents the volume as a virtual iSCSI device that can be attached to an instance. Obviously you can live migrate instances whose disks are shared over NFS or GlusterFS, and any attached Cinder volume comes along too, so the question is whether instances using Cinder volumes can be live migrated using the same methodology.
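
For context, what I attempted is the normal live migration call, something like this minimal python-novaclient sketch (the credentials, auth URL, and instance name are placeholders, not my real setup):

    # Minimal sketch: trigger a live migration via python-novaclient
    # (Havana-era v1.1 API). All credentials and names are placeholders.
    from novaclient import client

    nova = client.Client('1.1',
                         'admin', 'ADMIN_PASSWORD',       # placeholder credentials
                         'admin',                         # placeholder tenant
                         'http://controller:5000/v2.0')   # placeholder auth URL

    server = nova.servers.find(name='my-instance')        # placeholder name

    # host=None lets the scheduler pick the target node; block_migration
    # stays False because the disk is on a Cinder (iSCSI) volume, not
    # local storage that would need to be copied.
    server.live_migrate(host=None,
                        block_migration=False,
                        disk_over_commit=False)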

Any pointers would be awesome.

Many thanks as always.


Comments

I tried it with Havana and libvirt/QEMU and it seems to work, although it took over 4 minutes: the rescan on the target node happened before the iSCSI volume was served to it, and it waited that long before rescanning.

darragh-oreilly (2013-11-13 06:37:11 -0500)

This could be nothing to do with OpenStack, but rather the iSCSI daemon clashing with tgtd.

Sifty (2013-12-19 11:09:16 -0500)

Have you been able to get this to work? I have tried this with my Havana setup, and it does work when I just have a volume attached to a VM, but when the VM is booting from a Cinder volume it always shows the following error on the compute nodes:

==> nova-compute.log <==
2014-01-20 10:50:08.180 18612 ERROR nova.virt.libvirt.driver [-] [instance: a0b5da07-1e12-4888-8dcf-4f56e9b060a5] Live Migration failure: Failed to open file '/dev/disk/by-path/ip-10.16.100.20:3260-iscsi-iqn.2010-10.org.openstack:volume-898a4383-d093-48d0-a25e-09ce292cd56c-lun-1': No such file or directory
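
A quick check on the destination node shows the problem; a rough diagnostic sketch (the portal and IQN are taken from the error above, run as root):

    # Rough diagnostic sketch: does the destination node have an iSCSI
    # session for the volume, and does the by-path device Nova expects
    # exist? Portal and IQN are copied from the error log above.
    import os
    import subprocess

    PORTAL = '10.16.100.20:3260'
    IQN = 'iqn.2010-10.org.openstack:volume-898a4383-d093-48d0-a25e-09ce292cd56c'
    DEV = '/dev/disk/by-path/ip-%s-iscsi-%s-lun-1' % (PORTAL, IQN)

    try:
        # equivalent to running: iscsiadm -m session
        sessions = subprocess.check_output(['iscsiadm', '-m', 'session']).decode()
    except subprocess.CalledProcessError:
        sessions = ''  # iscsiadm exits non-zero when there are no sessions

    # If the session is missing, the node never logged in to the target,
    # so the by-path device cannot exist and the migration check fails.
    print('iSCSI session present:', IQN in sessions)
    print('by-path device present:', os.path.exists(DEV))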

loatu (2014-01-22 11:36:38 -0500)

1 answer


answered 2014-01-22 15:58:15 -0500 by loatu (updated 2014-01-22 16:27:46 -0500)

Whoa, I had a breakthrough. I tried manually logging in to the iSCSI volume on the target compute node, and after that the migration worked. This implies the migration code is either not logging in to the iSCSI volume on the target node, or is programmed to do it later, in this case after the check for the device has already run and cancelled the migration process.

@darragh-oreilly: the 4-minute wait you experienced is probably because block migration copies the volume to the other compute node. In my case, when I did this, I watched it copy the volume onto itself (thankfully it still worked afterward :) )
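
For reference, "manually logging in" here means the standard open-iscsi discovery and login on the destination node, roughly like this sketch (the portal and IQN are the ones from my error log above, run as root; adjust for your own volume):

    # Sketch of the manual workaround: discover and log in to the volume's
    # iSCSI target on the destination compute node before migrating.
    # Portal and IQN come from the nova-compute.log error above.
    import subprocess

    PORTAL = '10.16.100.20:3260'
    IQN = 'iqn.2010-10.org.openstack:volume-898a4383-d093-48d0-a25e-09ce292cd56c'

    # equivalent to: iscsiadm -m discovery -t sendtargets -p 10.16.100.20:3260
    subprocess.check_call(['iscsiadm', '-m', 'discovery',
                           '-t', 'sendtargets', '-p', PORTAL])

    # equivalent to: iscsiadm -m node -T <IQN> -p <PORTAL> --login
    subprocess.check_call(['iscsiadm', '-m', 'node',
                           '-T', IQN, '-p', PORTAL, '--login'])

After the login succeeds, the /dev/disk/by-path device appears on the destination node and the live migration goes through.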

