
Overcommitting RAM usage

asked 2019-01-24 04:11:53 -0600 by novainfinite

updated 2019-01-24 04:58:54 -0600

Hello friends. I have a question about Nova. As we all know, physical CPU and RAM can be assigned to vCPUs and vRAM beyond their actual capacity (overcommitment). My concern is that the compute node will crash, so I want to automatically migrate some instances to another zone with free capacity. How can OpenStack do that? Can Heat help? If yes, how? In other words, can OpenStack detect that some instances lack memory and migrate them?


1 answer


answered 2019-01-24 04:46:25 -0600

You can migrate and live-migrate instances. The former means shutting the instance down, then migrating it. Live-migration keeps the instance running during the process.

This is done by Nova, not Heat. See the admin guide.
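A minimal sketch of the two operations with the openstack CLI, assuming admin credentials; the server and host names are placeholders:

    # Cold migration: the instance is shut down, moved, and restarted.
    # The scheduler picks the target host.
    openstack server migrate my-instance

    # Live migration: the instance keeps running while it is moved
    # ("target-host" is a placeholder compute host).
    openstack server migrate --live target-host my-instance

    # Check where the instance ended up and its status afterwards.
    openstack server show my-instance -c status -c OS-EXT-SRV-ATTR:host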


Comments

I want to migrate automatically.

novainfinite ( 2019-01-24 04:56:59 -0600 )

Based on the load of the compute host? OpenStack doesn’t have a mechanism to measure host load. Migration must be initiated outside of OpenStack in my opinion.

Bernd Bausch ( 2019-01-24 08:09:53 -0600 )

How can I initiate migration outside of OpenStack? Is there software for it?

novainfinite ( 2019-01-24 11:17:49 -0600 )

You use Nagios, sar, free, vmstat etc. to measure the host's load, then pick an instance and run openstack server migrate (see the sketch after this comment).

Bernd Bausch ( 2019-01-24 15:32:06 -0600 )
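A rough sketch of that idea as a script run on the compute host itself, assuming admin credentials are sourced there; the host name and the 90% threshold are made-up examples, not anything OpenStack provides:

    #!/bin/bash
    # Hypothetical watchdog: if host memory use crosses a threshold,
    # move one instance away and let the Nova scheduler pick a freer host.
    HOST=compute-01
    THRESHOLD=90   # percent of RAM used that triggers a migration

    # Measure memory usage on this host (free here; sar/vmstat/Nagios work too).
    used=$(free | awk '/Mem:/ {printf "%d", $3/$2*100}')

    if [ "$used" -ge "$THRESHOLD" ]; then
        # Pick one instance running on this host ...
        server=$(openstack server list --all-projects --host "$HOST" -f value -c ID | head -n 1)
        # ... and ask Nova to cold-migrate it.
        openstack server migrate "$server"
    fi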

Thanks. Can I use Ceilometer?

novainfinite ( 2019-01-24 16:17:29 -0600 )
