live migration of instance between compute nodes

asked 2014-02-11 00:20:27 -0500

Anand TS

updated 2014-02-11 00:21:11 -0500

I set up OpenStack (Havana) using RDO Packstack on two nodes: node1 (controller & compute) and node2 (compute).

I have a GlusterFS setup for nova-volumes, cinder-volumes and Glance, for which I referred to the following links:

1. Shared storage based live migration

2. bderzhavets.wordpress.com

3. Getting started with Havana + RDO + Neutron + GlusterFS

The only difference from those docs is that I'm using nova-network.

After launching a couple of instances on the compute nodes, I tried to live-migrate an instance from one node to the other.

First I checked which hypervisor host the VM is running on:

[root@cloudcluster1 ~(keystone_admin)]# nova-manage vm list
instance node type state launched image kernel ramdisk project user zone index
2014-02-10 15:59:25.180 7386 DEBUG nova.openstack.common.lockutils [req-c0195fa1-0578-4ae7-aa36-82c37378b4d6 None None] Got semaphore "dbapi_backend" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:166

2014-02-10 15:59:25.181 7386 DEBUG nova.openstack.common.lockutils [req-c0195fa1-0578-4ae7-aa36-82c37378b4d6 None None] Got semaphore / lock "__get_backend" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:245

test_storage cloudcluster1 m1.tiny active 2014-02-07 13:25:58 5a2574b6-4e5a-4a06-899e-cf790f1f9ac4 db20c66aea1b41079d01f10924ac051d 65edffa942b640f1af96ebfffde5b923 nova 0

test_network cloudcluster1 m1.tiny active 2014-02-10 06:25:25 5a2574b6-4e5a-4a06-899e-cf790f1f9ac4 db20c66aea1b41079d01f10924ac051d 65edffa942b640f1af96ebfffde5b923 nova 0
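(A cleaner way to read the host, without the DEBUG noise that `nova-manage` mixes into its output, is `nova show`; a sketch, assuming admin credentials are sourced as in the prompts above, and that your novaclient supports `hypervisor-servers`:)

```shell
# Scheduling host and hypervisor hostname for a single instance
nova show test_storage | grep -E 'OS-EXT-SRV-ATTR:(host|hypervisor_hostname)'

# Or list every instance currently on a given hypervisor (admin-only)
nova hypervisor-servers cloudcluster1
```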

So I tried to migrate the VM test_storage from node1 to node2 using the command:

[root@cloudcluster1 ~(keystone_admin)]# nova live-migration 81baec00-acd2-4ab4-82e0-e17cabef8632 cloudcluster2
[root@cloudcluster1 ~(keystone_admin)]# nova list
+--------------------------------------+--------------+-----------+------------+-------------+--------------------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                             |
+--------------------------------------+--------------+-----------+------------+-------------+--------------------------------------+
| b2e006c7-9bc1-4ea8-a726-ceddfe62f39b | test_network | ACTIVE    | None       | Running     | novanetwork=192.168.32.6             |
| 81baec00-acd2-4ab4-82e0-e17cabef8632 | test_storage | MIGRATING | migrating  | Running     | novanetwork=192.168.32.2, 10.54.6.90 |
+--------------------------------------+--------------+-----------+------------+-------------+--------------------------------------+

I could see the state as MIGRATING when I ran `nova list`.

After a few minutes it completed without any error, but the VM is still on node1. The instance was not migrated to the other compute node; it remains on the same host:
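(When a live migration "completes" but the instance stays on the source host, nova has usually hit an error and rolled back; the real cause is normally in the nova-compute and libvirt logs on both nodes. A sketch of where to look, with log paths from a standard RDO install; adjust if yours differ:)

```shell
# On cloudcluster1 and cloudcluster2, around the time of the attempt:
grep -iE 'migrat|ERROR' /var/log/nova/compute.log | tail -n 30

# libvirt's side of a failed migration lands here
grep -i error /var/log/libvirt/libvirtd.log | tail -n 30
```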

[root@cloudcluster1 ~(keystone_admin)]# nova show 81baec00-acd2-4ab4-82e0-e17cabef8632
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-10T12:04:07Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | cloudcluster1                                            |
| key_name                             | mykey                                                    |
| image                                | Cirros (5a2574b6-4e5a-4a06-899e-cf790f1f9ac4)            |
| hostId                               | 8ef589d6ac40acde8f7fa484731ed1af1f537bd75cfbf1f6b0d9cfea |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000f                                        |
| OS-SRV-USG:launched_at               | 2014-02-07T13:25:58.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | cloudcluster1.accelfrontline.com                         |
| flavor                               | m1.tiny (1)                                              |
| id                                   | 81baec00-acd2-4ab4-82e0-e17cabef8632                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 65edffa942b640f1af96ebfffde5b923                         |
| name                                 | test_storage                                             |
| created                              | 2014-02-07T13:19:10Z                                     |
| tenant_id                            | db20c66aea1b41079d01f10924ac051d                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| novanetwork network                  | 192.168.32.2, 10.54.6.90                                 |
| os-extended-volumes:volumes_attached | []                                                       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

Steps I have done:

->Ensured that the UID and GID of the nova and libvirt users are identical across all of my servers.

->Edit /etc/libvirt/libvirtd.conf

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

->Edit /etc/sysconfig/libvirtd

LIBVIRTD_ARGS="--listen"

->Restart libvirtd

service libvirtd restart
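(After the restart it is worth confirming that libvirtd is really listening on TCP and reachable from the peer node, since live migration fails quietly otherwise; a sketch, where 16509 is libvirt's default unencrypted TCP port:)

```shell
# On each node: is libvirtd bound to the TCP port?
netstat -lntp | grep 16509

# From cloudcluster2: can we reach the source hypervisor without auth?
virsh -c qemu+tcp://cloudcluster1/system list --all
```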

->Edit /etc/nova/nova.conf and add the following line:

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

Any idea how to proceed on this?

Thanks & Regards, Anand


1 answer


answered 2014-02-27 22:06:31 -0500

ajeffrey

You can try the --block_migrate option; it might just work with GlusterFS. It works with the NetApp C-Mode NFS driver without a shared mount (i.e. booting a VM from a bootable Cinder volume, though this may not be what you are after).
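(For reference, that would look like the following against the instance from the question; flag spelling as in this answer, though some novaclient versions spell it --block-migrate:)

```shell
nova live-migration --block_migrate 81baec00-acd2-4ab4-82e0-e17cabef8632 cloudcluster2
```

Block migration copies the disk over the network during the migration, so it does not require the instances directory to be on shared storage.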


