
The usual approach is to deploy three identical controllers to avoid the split-brain scenario, plus any number of compute nodes. This means you need more than three servers. Of course, in a proof-of-concept or study environment, you can deploy compute services on the controllers, too.

You then spread all OpenStack services over the three controllers and front them with a load balancer. One well-documented example, IMHO, is HPE Helion OpenStack; see https://docs.hpcloud.com/hos-3.x/helion/planning/high_availability.html#HP3.0HA__high_availablity_controllers.
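To make the load-balancer part concrete, here is a minimal HAProxy sketch fronting one API (Keystone on port 5000) across three controllers. The VIP and controller addresses are made-up examples, not taken from any real deployment, and a production setup would cover every API endpoint plus Galera/RabbitMQ:

```
# Hypothetical HAProxy snippet: one VIP fronting the Keystone API
# on three identical controllers (all addresses are examples only).
frontend keystone_public
    bind 192.168.1.100:5000          # cluster VIP (example)
    default_backend keystone_controllers

backend keystone_controllers
    balance roundrobin
    # "check" marks a controller down when its health check fails,
    # so requests keep flowing to the surviving controllers
    server controller1 192.168.1.11:5000 check
    server controller2 192.168.1.12:5000 check
    server controller3 192.168.1.13:5000 check
```

You would typically pair this with Keepalived (or Pacemaker) so the VIP itself fails over between controllers.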
