nova live-migration failed

asked 2013-07-08 08:48:54 -0600

nour-nakouzi


I'm trying to test nova live-migration, but I always get this output:

    ERROR: Live migration of instance b6607c9b-cedb-4749-9366-68256f922171 to host controller-node failed (HTTP 400) (Request-ID: req-59d7e5a4-1639-40aa-9328-b04212338208)

Here are my services:

    Binary            Host             Zone      Status   State  Updated_At
    nova-consoleauth  controller-node  internal  enabled  :-)    2013-07-08 08:46:30
    nova-cert         controller-node  internal  enabled  :-)    2013-07-08 08:46:30
    nova-conductor    controller-node  internal  enabled  :-)    2013-07-08 08:46:30
    nova-scheduler    controller-node  internal  enabled  :-)    2013-07-08 08:46:30
    nova-compute      compute-node     lannion   enabled  :-)    2013-07-08 08:46:31
    nova-compute      controller-node  lannion   enabled  :-)    2013-07-08 08:46:30

Here's the VM I'm trying to migrate from the compute-node to the controller-node:

    b6607c9b-cedb-4749-9366-68256f922171 | precise-admin | MIGRATING | priv1=10.10.x.x, 172.25.x.x |

It was active before I ran the command, and now it is stuck in the MIGRATING status.


13 answers


answered 2013-07-10 10:04:02 -0600

nour-nakouzi

Finally I found the problem: I added the IP address of compute-node-2 to /etc/hosts.
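For anyone hitting the same symptom, the fix amounts to making every node's hostname resolvable from the others. A minimal sketch of the /etc/hosts entries, with placeholder addresses (the real IPs are not in the post):

```shell
# /etc/hosts on every node (controller and both computes) -- example addresses
192.168.0.10  controller-node
192.168.0.11  compute-node
192.168.0.12  compute-node-2
```

You can verify resolution from each host with `getent hosts compute-node-2` or a plain `ping compute-node-2`.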


answered 2013-07-09 15:37:09 -0600

nour-nakouzi

Any ideas?


answered 2013-07-10 07:35:42 -0600

nour-nakouzi

I tried to change the permissions on the nodes: chown nova:nova /var/lib/nova/instances

I always get this error:

    chown: changing ownership of `/var/lib/nova/instances': Operation not permitted
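"Operation not permitted" on an NFS mount usually means the server is squashing root. A hedged sketch of the export line on the NFS server, assuming the controller-node exports /var/lib/nova/instances to a /24 (the path matches the thread, the network is an example):

```shell
# /etc/exports on the NFS server (controller-node)
/var/lib/nova/instances 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, run `exportfs -ra` on the server and retry the chown from the client.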


answered 2013-07-10 08:27:49 -0600

xianghui

Set debug=True in /etc/nova/nova.conf to track the error, then post the log.
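To be concrete, the logging switches live in the [DEFAULT] section; a minimal sketch:

```shell
# /etc/nova/nova.conf -- turn on debug logging, then restart nova-compute
[DEFAULT]
verbose = True
debug = True
```

Restart the service and watch /var/log/nova/nova-compute.log while retrying the migration.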


answered 2013-07-09 14:56:01 -0600

xianghui

Did you test the live-migration with shared storage? Did you set 'instances_shared_storage = true' and 'instance_path=$shared' in the conf? Please post the config files here.


answered 2013-07-10 07:52:33 -0600

nour-nakouzi

The doc says: "Ensure that the UID and GID of your nova and libvirt users are identical between each of your servers. This ensures that the permissions on the NFS mount will work correctly."

I had different UIDs and GIDs for nova and libvirt across the nodes, so I changed them manually to a number I chose.
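Aligning the IDs is typically done with groupmod/usermod on each node while the nova and libvirt services are stopped; a sketch with arbitrary example IDs (pick your own, identical on every host):

```shell
# run on every node with nova-compute and libvirtd stopped (example IDs)
groupmod -g 119 nova
usermod  -u 112 -g 119 nova
# re-own files that still carry the old numeric IDs
chown -R nova:nova /var/lib/nova
```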


answered 2013-07-09 14:18:32 -0600

nour-nakouzi

I installed the NFS4 server on the controller-node and the NFS4 client on both compute-nodes, then I mounted a shared folder on /var/lib/nova/instances. What I noticed is that after creating an instance, the owner in /var/lib/nova/instances is root:root, not nova:nova. If I unmount the folder I can create instances; otherwise the VM stays in the spawning status.

Any ideas?
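The root:root ownership points at the same root-squash/ID-mapping issues discussed in this thread. For reference, a hedged sketch of the client-side mount (server name and options are examples, not from the post):

```shell
# /etc/fstab on each compute node -- mount the shared instances directory
controller-node:/var/lib/nova/instances  /var/lib/nova/instances  nfs4  defaults  0  0
```

After `mount -a`, `ls -ld /var/lib/nova/instances` should show nova:nova; if it shows root:root or nobody:nogroup, the export options or the UID/GID mapping are the next things to check.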


answered 2013-07-09 15:03:11 -0600

nour-nakouzi

Here's my nova.conf file on both compute-nodes:

    [DEFAULT]
    state_path = /var/lib/nova
    lock_path = /var/lock/nova
    root_helper = sudo nova-rootwrap /etc/nova/rootwrap.conf
    api_paste_config = /etc/nova/api-paste.ini
    enabled_apis = ec2,osapi_compute,metadata
    sql_connection = mysql://nova:nova@192.168.x.x/nova
    default_availability_zone = lannion

    live_migration_bandwidth = 0
    live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER
    live_migration_retry_count = 30
    live_migration_uri = qemu+tcp://%s/system

    verbose = False
    debug = False
    logdir = /var/log/nova

    auth_strategy = keystone

    volume_api_class = nova.volume.cinder.API
    iscsi_helper = tgtadm
    num_targets = 0

    glance_api_servers = 192.168.x.x:9292

    compute_driver = libvirt.LibvirtDriver
    libvirt_type = kvm
    libvirt_inject_key = True
    libvirt_inject_partition = -1
    libvirt_wait_soft_reboot_seconds = 300
    libvirt_snapshot_compression = True
    instance_build_timeout = 900
    reboot_timeout = 300
    instance_usage_audit = False
    instance_usage_audit_period = day
    resume_guests_state_on_host_boot = True
    instance_name_template = instance-%08x

    # Security groups
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    security_group_api = quantum


    my_ip = 192.168.x.x
    network_api_class = nova.network.quantumv2.api.API
    quantum_admin_auth_url = http://192.168.x.x:35357/v2.0/
    quantum_admin_tenant_name = service
    quantum_admin_username = quantum
    quantum_admin_password = servicepass
    quantum_auth_strategy = keystone
    quantum_url = http://192.168.x.x:9696/
    force_dhcp_release = True

    metadata_host = 192.168.x.x
    metadata_port = 8775
    service_quantum_metadata_proxy = True
    #quantum_metadata_proxy_shared_secret =

    policy_file = policy.json
    allow_instance_snapshots = True
    allow_migrate_to_same_host = False
    allow_resize_to_same_host = False

    rpc_backend = nova.openstack.common.rpc.impl_kombu
    rabbit_host = 192.168.x.x
    rabbit_port = 5672
    rabbit_userid = rabbit
    rabbit_password = rabbit
    rabbit_virtual_host = /

    max_instances_per_host = 50
    reserved_host_disk_mb = 10240
    reserved_host_memory_mb = 1024

    # Live migration
    live_migration_bandwidth = 0

    vnc_enabled = True
    vnc_port = 5900
    vnc_port_total = 10000
    vnc_password = None
    novncproxy_base_url = http://172.25.x.x:6080/vnc_auto.html
    vncserver_listen =
    vncserver_proxyclient_address = 192.168.x.x
    vnc_keymap = en-us

    ec2_host = 192.168.x.x
    ec2_port = 8773
    ec2_private_dns_show_ip = True
    keystone_ec2_url = http://192.168.x.x:5000/v2.0/ec2tokens
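One detail worth double-checking in the config above: live_migration_flag is commonly set to also include VIR_MIGRATE_LIVE, so the guest keeps running during the copy. A sketch of the live-migration block as it is often written for qemu+tcp setups (treat it as a suggestion to verify, not the poster's working config):

```shell
# /etc/nova/nova.conf -- a commonly used live-migration combination
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
live_migration_uri = qemu+tcp://%s/system
```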


answered 2013-07-10 08:28:21 -0600

nour-nakouzi

When I check the nova-compute log, I see this line:

    2013-07-10 10:27:03.186 1627 ERROR nova.virt.libvirt.driver [-] [instance: 4dba73a9-5ab5-4202-8254-05b641abd92d] Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute-node-2/system
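That error usually means libvirtd on the target host is not listening on TCP. A sketch of the usual changes for Ubuntu-era packages (verify the exact keys against your libvirt version):

```shell
# /etc/libvirt/libvirtd.conf on every compute node
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/default/libvirt-bin (Ubuntu): make the daemon listen at startup
libvirtd_opts="-d -l"
```

After restarting libvirt-bin, test connectivity with `virsh -c qemu+tcp://compute-node-2/system list` from the source host.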


answered 2013-07-10 02:32:04 -0600

xianghui

If you are using NFS, live migration with shared storage is active. Make sure you have prepared the environment as documented:

  1. check that NOVA-INST-DIR/instances is the same on all your hosts
  2. run 'chown nova:nova NOVA-INST-DIR/instances' on all your hosts to solve the user/group problem.

If you still get errors, post them here.
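The two checks above can be verified quickly from any host; a sketch, assuming the default NOVA-INST-DIR of /var/lib/nova:

```shell
# the instances directory should be the shared NFS mount, owned by nova:nova
df -h /var/lib/nova/instances
ls -ld /var/lib/nova/instances
# a file created on one host should appear on all the others
touch /var/lib/nova/instances/shared-check
```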



Seen: 640 times

Last updated: Jul 10 '13