
Live-migration only works for instances launched by 'Boot from image (create new volume)'

asked 2015-07-02 09:14:46 -0500

schegi

updated 2015-07-08 07:03:55 -0500

Hi,

I have a running OpenStack installation with Ceph as the only configured storage backend. Today I needed live migration to move a couple of instances around, so I tried:

nova live-migration <instance-id> <new-host-name>

While doing so I noticed that instances launched with the option 'Boot from image (create new volume)' could be live-migrated without problems. But when I try to live-migrate an instance launched with the plain 'Boot from image' option, I get the following error:

ERROR (BadRequest): node03 is not on shared storage: Live migration can not be used without shared storage. (HTTP 400) (Request-ID: req-d9d3193b-8c0d-4325-89e4-1df42182adc6)

Here 'node03' is the name of the compute node the instance I am trying to migrate is currently running on. I suspect I have some misconfiguration; can someone point me to what and where?

---------- Edit ----------

Ok, I checked the whole thing twice. All compute nodes have the appropriate storage.client.cinder.keyring file in /etc/ceph (the cluster is named 'storage'), and the key is injected into libvirt. At least 'virsh secret-list' returns a UUID for the ceph client.cinder secret, and 'virsh secret-get-value UUID' returns the key, which is the same as in the keyring and the same as 'ceph --cluster storage auth list' returns for client.cinder.
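For reference, those verification steps look roughly like this (UUID redacted; the '--cluster storage' flag is an assumption based on the cluster name mentioned above):

```shell
# List the secrets known to libvirt; should show a UUID for the
# client.cinder secret
virsh secret-list

# Print the key libvirt has stored for that secret
virsh secret-get-value <UUID>

# Compare against the key ceph itself holds for client.cinder
ceph --cluster storage auth get-key client.cinder
```

If the two keys differ, the secret was injected from a stale keyring and needs to be set again with 'virsh secret-set-value'.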

But if I launch an instance with ephemeral storage, the storage is still created locally, not in ceph. The only thing that makes me wonder is that 'virsh secret-dumpxml UUID' returns:

<secret ephemeral="no" private="no">
  <uuid>XXX</uuid>
  <usage type="ceph">
    <name>client.cinder secret</name>
  </usage>
</secret>

XXX is the correct value, but what makes me wonder is that in the secret tag 'ephemeral' is set to "no". Could this cause the problems?

------- Edit2 -------

Ok, forget the last question; that 'ephemeral' attribute only relates to the secret itself, not to instance storage.

------- Edit3 -------

nova.conf

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata

vif_plugging_is_fatal = false
vif_plugging_timeout = 0

rabbit_hosts = rabbit1:5672,rabbit2:5672,rabbit3:5672
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = XXX
rabbit_password = XXX
rabbit_virtual_host = /
rabbit_durable_queues = False
rabbit_ha_queues = true

auth_strategy = keystone

my_ip = 192.168.25.12

vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = vncserver
novncproxy_base_url = http://vncserver:6080/vnc_auto.html

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

service_neutron_metadata_proxy = true

[database]
connection = mysql://nova:XXX@database/nova

[keystone_authtoken]
auth_uri = http://keystone:5000/v2.0
identity_uri = http://keystone:35357
admin_tenant_name = service
admin_user = nova
admin_password = XXX
revocation_cache_time = 10

[glance]
host = glance.wcloud.uni-koblenz.de

[neutron]
url = http://neutron:9696
auth_strategy = keystone
admin_auth_url = http://keystone:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = XXX
service_metadata_proxy = True
metadata_proxy_shared_secret = XXX

[libvirt]
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/storage.conf
libvirt_rbd_user = cinder
libvirt_rbd_secret_uuid = XXX

libvirt_inject_password = false
libvirt_inject_key = false
libvirt_inject_partition = -2

libvirt_migration_uri = qemu+tcp://%s/system
libvirt_live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST

rdb_user = cinder
...


Comments

nova.conf please

capsali ( 2015-07-03 14:58:18 -0500 )

Under the [libvirt] section, remove the 'libvirt' prefix from all options. So libvirt_images_type = rbd should be images_type = rbd .

Do this for all of them! You don't need the 'libvirt' prefix on an option if it's under the [libvirt] section.

Restart the services and see what happens. Modify all nova.conf files!

capsali ( 2015-07-09 11:52:33 -0500 )

Did you solve the problem? I have the same problem.

Carrot ( 2016-05-18 01:01:54 -0500 )

1 answer


answered 2015-07-03 04:39:33 -0500

capsali

updated 2015-07-09 11:54:05 -0500

Hi,

Have you configured ephemeral storage to use ceph? When you launch an instance from an image, does the disk end up in the ceph pool or on local compute storage under /var/lib/nova/instances?

If you have set up ephemeral storage to use the ceph backend, then you must configure libvirt to listen on tcp on every compute node for live migration to work.
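As a sketch of that libvirt change (file paths assume an Ubuntu-style install; exact option names can vary with the libvirt version):

```ini
# /etc/libvirt/libvirtd.conf -- let libvirtd accept plain tcp
# connections for migration traffic
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/default/libvirt-bin -- the -l flag makes the daemon
# actually listen on the network
libvirtd_opts="-d -l"
```

After changing these, restart libvirtd on each compute node. Note that auth_tcp = "none" disables authentication on that socket, so it should only be used on a trusted management network.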

But as the error states, you are not using ceph as the backend for ephemeral storage: instances you create from an image are stored locally on the compute node that hosts the instance.

For ephemeral storage to work with ceph, besides what the docs on the ceph site say, you need to have a client.cinder key on every compute node.
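Distributing that key can be sketched as follows (run from a ceph admin node; 'node03' stands in for each compute node, and '--cluster storage' assumes the cluster name used in the question):

```shell
# Export the client.cinder keyring and copy it to a compute node,
# matching the storage.client.cinder.keyring naming the asker uses
ceph --cluster storage auth get-or-create client.cinder | \
    ssh node03 sudo tee /etc/ceph/storage.client.cinder.keyring
```

Repeat for every compute node, then make sure the file is readable by the user the nova/qemu processes run as.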

UPDATE:

Under the [libvirt] section, remove the 'libvirt' prefix from all options. So libvirt_images_type = rbd should be images_type = rbd . Do this for all of them! You don't need the 'libvirt' prefix on an option if it's under the [libvirt] section. Restart the services and see what happens. Modify all nova.conf files!
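Applied to the nova.conf posted in the question, the [libvirt] section would then look something like this (secret UUID redacted as in the original):

```ini
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/storage.conf
rbd_user = cinder
rbd_secret_uuid = XXX
inject_password = false
inject_key = false
inject_partition = -2
```

With images_type = rbd in effect, ephemeral disks for new instances should be created in the 'vms' pool instead of under /var/lib/nova/instances, which is what the shared-storage check during live migration expects.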


Comments

Hey, thanks. I expected something was wrong with the ceph configuration for ephemeral storage. I configured OpenStack to use rbd according to this documentation: http://ceph.com/docs/master/rbd/rbd-o... . What is necessary in addition to this for proper support?

schegi ( 2015-07-03 04:59:07 -0500 )

Btw, the ceph cinder keyring is distributed and libvirt's listen setting is configured.

schegi ( 2015-07-03 05:02:47 -0500 )

Please create an instance and see where it is created: check with 'ceph df' whether the ephemeral pool grows, or look under /var/lib/nova on the compute node to see if it's stored there.

Please update the OP with your nova.conf.

So you have a client.cinder keyring under /etc/ceph/ on the compute node?

capsali ( 2015-07-03 05:56:56 -0500 )

Did you inject the secret into libvirt?

capsali ( 2015-07-03 05:57:29 -0500 )
