In my understanding, haproxy/pacemaker/corosync is an active/passive type of HA, while haproxy/keepalived is active/active HA.

It really depends on which way you want to go. I personally haven't used active/passive until now. In our deployment we have a haproxy/keepalived type of HA.

There are advantages and disadvantages to both of them.

A disadvantage of the active/passive approach is that one node/service stays in passive mode (not active) until the active one malfunctions and the passive one takes over. There is no load balancing either, since only one service is active at a given time.
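In the pacemaker/corosync variant, that usually means a virtual IP plus the haproxy service as cluster resources that pacemaker moves to the passive node on failure. A minimal sketch with pcs, where the IP and resource names are just hypothetical examples:

    # hypothetical VIP resource; pacemaker keeps it on exactly one node
    pcs resource create haproxy-vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 \
        op monitor interval=10s
    # manage haproxy itself as a systemd resource
    pcs resource create haproxy systemd:haproxy op monitor interval=10s
    # keep haproxy on the same node as the VIP
    pcs constraint colocation add haproxy with haproxy-vip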

In an active/active scenario, all nodes/services of the same type run simultaneously. We use haproxy for load balancing and keepalived for failover of haproxy itself. This method is a little more complicated to set up, but you then run at full power and load-balance the workload.
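To make the keepalived part concrete, here is a minimal sketch of keepalived.conf on the first haproxy node; the interface name, VIP and priorities are hypothetical placeholders:

    vrrp_script chk_haproxy {
        script "pidof haproxy"    # node is considered healthy only while haproxy runs
        interval 2
    }

    vrrp_instance VI_1 {
        state MASTER              # the second haproxy node would use state BACKUP
        interface eth0            # hypothetical interface name
        virtual_router_id 51
        priority 101              # lower priority (e.g. 100) on the other node
        virtual_ipaddress {
            192.168.1.100         # hypothetical VIP that fails over between the haproxy nodes
        }
        track_script {
            chk_haproxy
        }
    }

If haproxy dies on the master, the health check fails and keepalived moves the VIP to the other node, so clients keep using the same address.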

Of course this scenario has its disadvantages too, as not all services work in active/active state. For example (config sketches for two of these follow the list):

- l3_agent: although I highly recommend Neutron DVR, where all traffic, be it north-south or east-west, is handled on the compute node the instance is running on, bypassing the network node, providing load balancing and eliminating the single point of failure.
- cinder-volume: only works in active/passive mode, by setting the same host parameter in cinder.conf on every volume node.
- mysql: in a master/master replica it is wise, and suggested, that services use only one mysql host to write data to, to prevent db locking and diverging data on different cluster hosts; so in a 3-node galera cluster, only one host is active while the other 2 are in passive mode and used as backups in haproxy.
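For the cinder-volume point, the active/passive behaviour comes from giving every cinder-volume node the same host value in cinder.conf (the name here is just a hypothetical example):

    [DEFAULT]
    host = cinder-cluster-1    # hypothetical shared name, identical on all cinder-volume nodes

And for the galera point, a minimal sketch of the haproxy backend; host names and IPs are hypothetical, and the backup keyword is what keeps all writes on a single node:

    listen galera
        bind 192.168.1.100:3306
        mode tcp
        option tcpka                                     # tcp keepalive for long-lived db connections
        server galera1 192.168.1.11:3306 check
        server galera2 192.168.1.12:3306 check backup    # used only when galera1 is down
        server galera3 192.168.1.13:3306 check backup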

Also keep in mind that all requests go through the haproxy node (be it openstack service requests, client api requests or mysql db requests).
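As an illustration, every endpoint ends up as a haproxy listen block similar to this hypothetical one for the keystone public api, which is why all of that traffic passes through the haproxy node:

    listen keystone_public
        bind 192.168.1.100:5000
        mode http
        balance roundrobin
        server controller1 192.168.1.11:5000 check
        server controller2 192.168.1.12:5000 check
        server controller3 192.168.1.13:5000 check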

This being said, we have been happy with our active/active HA scenario for over a year now. There were a few hiccups along the way (the main one being that the mariadb galera cluster wouldn't play nice with haproxy because the openstack services kept connections open too long, but that has been resolved in the meantime).
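A common way to deal with that class of problem is raising the client/server timeouts on the galera listener so haproxy does not cut the long-lived connections the services keep open; the values here are just examples, not necessarily what we ended up with:

    listen galera
        ...                       # servers as defined above
        timeout client 90m        # example values; should exceed how long services hold idle connections
        timeout server 90m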