Is it realistic to use the CPU meter for autoscaling?

asked 2017-07-13 02:24:45 -0500

updated 2017-07-17 09:09:18 -0500

zaneb

The autoscaling examples I find in the heat-templates git repo and elsewhere on the internet use CPU utilization to trigger scale-up or scale-down actions. While I understand the mechanism, I wonder how realistic this approach is.
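
To be concrete, here is a minimal sketch of the kind of mechanism I mean, along the lines of the autoscaling examples in the heat-templates repo: a Ceilometer cpu_util alarm driving a scaling policy on an autoscaling group. The resource names, image/flavor values, period, and threshold are just illustrative placeholders, not a recommendation.

    heat_template_version: 2016-10-14

    resources:
      asg:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 3
          resource:
            # placeholder server definition; networks, key, etc. omitted
            type: OS::Nova::Server
            properties:
              image: cirros
              flavor: m1.small

      scaleup_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: asg }
          scaling_adjustment: 1
          cooldown: 60

      cpu_alarm_high:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: cpu_util
          statistic: avg
          period: 60
          evaluation_periods: 1
          threshold: 50
          comparison_operator: gt
          # signal the scaling policy when average cpu_util exceeds 50%
          alarm_actions:
            - { get_attr: [scaleup_policy, alarm_url] }

The alarm fires when the average cpu_util reported by the telemetry service crosses the threshold, and that reported value is exactly what I worry gets distorted by contention on the hypervisor host.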

An instance competes for resources with other, unrelated instances on its hypervisor host. Under heavy contention it accumulates few CPU cycles, so its measured CPU usage stays low, the scale-up alarm never fires, and the scale-down alarm might even fire instead.

Yet in that situation the application would still benefit from running more instances. Are there better ways to gauge the need to scale up? Or is my scenario unlikely, or even based on a misunderstanding?
