An OpenStack controller node may run multiple OpenStack services; the API and Scheduler services are two examples. For a definition of a controller node and the services it runs, have a look here:

Controller Node

This configuration is typically adopted for smaller or proof-of-concept (PoC) deployments. In this configuration, if your controller dies, your OpenStack cloud dies with it. To achieve true scale and avoid a single point of failure (SPoF), you need to distribute these services. For instance:

  • Run your database and message queue on dedicated, clustered nodes
  • Scale out Glance servers, using a distributed storage solution such as Swift as the backend
  • Scale out Keystone servers
  • Run multiple Nova API servers and Schedulers
  • ...
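As a rough sketch of what distributing these services looks like from a compute or API node's point of view, each node points at shared, clustered backends rather than local ones. The hostnames below (db-vip, glance1, glance2) are placeholders, and exact option names vary between OpenStack releases, so treat this as an illustration rather than a drop-in config:

```ini
# /etc/nova/nova.conf (fragment) -- hypothetical hostnames
[DEFAULT]
# Point at a clustered MySQL (e.g. Galera behind a virtual IP),
# not a database running locally on the controller.
sql_connection = mysql://nova:secret@db-vip/nova

# List multiple Glance API endpoints; Nova picks among them,
# so losing one Glance server does not break image operations.
glance_api_servers = glance1:9292,glance2:9292
```

With this layout, losing any single API, Scheduler, or Glance node no longer takes the whole cloud down with it.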

And so on. For an architectural overview of the project, have a look at:

Architecture

For High Availability considerations have a look at:

OpenStack-HA

For your queue clustering options, have a look at http://www.rabbitmq.com/clustering.html and http://www.rabbitmq.com/ha.html; the right approach pretty much depends on your needs.
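Once the RabbitMQ cluster itself is in place, the OpenStack services need to be told about all of its members so they can fail over. The option names below come from the oslo-era RabbitMQ driver and the hostnames are placeholders; check them against your release before using:

```ini
# /etc/nova/nova.conf (fragment) -- hypothetical hostnames
[DEFAULT]
# List every member of the RabbitMQ cluster; the client fails
# over to the next host when a connection is lost.
rabbit_hosts = rabbit1:5672,rabbit2:5672
# Declare queues as mirrored ("HA") queues across the cluster.
rabbit_ha_queues = true
# Retry a lost connection indefinitely instead of giving up.
rabbit_max_retries = 0
```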

As for Keystone, what auth backend are you using? If you are using a simple datastore like SQL, you can run multiple Keystone servers and use memcached as their token backend.
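A minimal sketch of that setup, assuming a pre-Juno-style Keystone where the token driver is set by class path (the driver path and memcached hostnames below are illustrative; verify them against your release):

```ini
# /etc/keystone/keystone.conf (fragment) -- hypothetical hostnames
[token]
# Store tokens in memcached instead of SQL, so any Keystone server
# can validate a token issued by any other server in the pool.
driver = keystone.token.backends.memcache.Token

[memcache]
# All Keystone servers must point at the same memcached pool.
servers = memcache1:11211,memcache2:11211
```

You can then put the Keystone servers behind a load balancer, since none of them holds token state locally.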