Is it realistic to use the CPU meter for autoscaling?
The autoscaling examples I find in the heat-templates git repo or elsewhere on the internet use CPU usage to trigger scale-up or scale-down. While I understand the mechanism, I wonder how realistic this approach is.
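For reference, here is a minimal sketch of the pattern I mean, roughly following the heat-templates examples (image, flavor, thresholds, and resource names are illustrative, and I have left out metadata matching):

```yaml
heat_template_version: 2013-05-23

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros       # illustrative image
          flavor: m1.small    # illustrative flavor

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      cooldown: 60
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util        # average CPU utilisation as reported by Ceilometer
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50               # scale up when average cpu_util exceeds 50%
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
```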
An instance competes for resources with other, unrelated instances on its hypervisor host. If there is heavy competition, the instance gets few CPU cycles, so its reported CPU utilisation stays low; the scale-up alarm never fires, and the scale-down alarm may even be triggered.
Still, the application would benefit from running more instances. Are there better ways of gauging the need for scaling up? Or is my scenario unlikely or even incorrect?