
In Folsom, can live migration and server resize coexist on a shared file system?

asked 2013-10-03 18:48:24 -0600 by craig-e-ward

I have a Folsom installation that supports live migration using the Gluster shared file system. When the "nova resize &lt;server&gt; &lt;flavor&gt;" command is tried, it results in an ERROR state for the instance. The logs show that the command fails when the node hosting the instance attempts to use ssh to copy the instance's files to another node (the actual command is "ssh &lt;node_ip&gt; mkdir -p &lt;path_to_instance_files&gt;"). The problem at this point is that these nodes have not been configured to allow key-based ssh. That could be remedied, but the copy seems unnecessary because all of the compute nodes already share the same file system for these files. If the ssh had succeeded in making the new directory (mkdir -p does not return an error status if the directory already exists), would the copy that follows have just corrupted the files?
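The failing step can be reproduced by hand from the compute node that hosts the instance. A rough sketch of what I mean (the host name below is a placeholder, not a value from my environment):

# become the nova user on the source compute node
su -s /bin/bash nova
# roughly what nova-compute tries to run; it fails here when
# key-based ssh between compute nodes is not set up
ssh compute-02 "mkdir -p &lt;path_to_instance_files&gt;"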

The nodes are using QEMU as the hypervisor. I found documentation for configuring Xen, but not QEMU. Are there workarounds for using resize and live migration together on a shared file system, or, at least for Folsom, is it an either/or proposition? Or is resize simply not supported in a shared file system environment?

The host OS is CentOS 6.4.


1 answer


answered 2013-10-24 17:49:54 -0600 by craig-e-ward

After experimenting some more, I believe I have an answer to my own question.

I was able to get the nova resize/resize-confirm/resize-revert commands to work on a Folsom installation that used the Gluster file system for shared storage between compute nodes. The configuration used:

  • nova.conf on both the controller and the compute nodes needed the allow_resize_to_same_host=true setting;
  • The default shell for the nova account on the compute nodes (not the controller) was changed to /bin/bash;
  • An empty-passphrase ssh key was created for the nova account and distributed to the compute nodes (a rough command sketch follows this list).
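For anyone trying to reproduce this, the steps amount to roughly the following. This is a sketch only; "compute-02" is a placeholder host name and exact file locations will depend on the installation:

# /etc/nova/nova.conf on the controller and compute nodes (restart nova services after)
allow_resize_to_same_host=true

# on each compute node, as root: give the nova account a working shell
usermod -s /bin/bash nova

# as the nova user: create an empty-passphrase key and distribute it
# to the other compute nodes
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id nova@compute-02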

Although it might not be necessary, I added these directives to the user ssh config for each compute node:

PasswordAuthentication=no
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
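In context, the nova user's ~/.ssh/config ends up looking roughly like this (the Host * stanza and the comments are my additions; the directives can also sit at the top of the file without a stanza, as above):

Host *
    # never fall back to password prompts; fail fast if keys are wrong
    PasswordAuthentication no
    # don't prompt to accept new host keys during the copy step
    StrictHostKeyChecking no
    # don't record host keys at all
    UserKnownHostsFile /dev/null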

(If someone knows why these directives might be a problem, please post a comment!)
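With all of that in place, the resize cycle can be exercised from the controller roughly like this (the instance name and flavor are placeholders):

nova resize &lt;server&gt; &lt;new_flavor&gt;
# the instance sits in VERIFY_RESIZE until one of the following:
nova resize-confirm &lt;server&gt;
nova resize-revert &lt;server&gt;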

Hope this is a useful question and answer.
