
How does HAProxy work with OpenStack Heat?

asked 2014-07-15 20:51:25 -0500

anonymous user


Hi,

I am new to the OpenStack community. I need to implement autoscaling for my Tomcat application. How do I implement HAProxy as the default load balancer? Will it be created automatically once I launch my Tomcat application instance from a Heat template, or should I manually create an HAProxy instance and later reference its details in my Tomcat app Heat template?

Any help would be much appreciated.

Best Regards, Muhammed Roshan


2 answers


answered 2014-08-04 11:56:25 -0500 by zaneb

If you want Heat to create a load balancer, you'll need to specify one in the template.

You have two options. AWS::ElasticLoadBalancing::LoadBalancer will create a Nova instance running HAProxy on a (probably outdated) version of Fedora, for which you'll have to provide an image.

The better option is OS::Neutron::LoadBalancer, which will configure load balancing through the Neutron API (this assumes that you have the required Neutron plugins available, however).

Reference the load balancer in the autoscaling group definition, and autoscaling will update the load balancer configuration when servers are added or removed.
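For illustration only, a minimal HOT sketch of the OS::Neutron::LoadBalancer approach might look like this (it assumes a Neutron LBaaS v1 setup; the parameter `subnet_id` and the `launch_config` resource are placeholders, not something from the question):

```yaml
heat_template_version: 2013-05-23

parameters:
  subnet_id:
    type: string
    description: Subnet on which the load balancer VIP will live

resources:
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 5
      max_retries: 3
      timeout: 5

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      subnet_id: { get_param: subnet_id }
      lb_method: ROUND_ROBIN
      monitors: [ { get_resource: monitor } ]
      vip:
        protocol_port: 8080   # port Tomcat listens on

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      pool_id: { get_resource: pool }
      protocol_port: 8080

  # Autoscaled Tomcat servers; new members are registered with the
  # load balancer automatically via LoadBalancerNames.
  asg:
    type: AWS::AutoScaling::AutoScalingGroup
    properties:
      AvailabilityZones: []
      LaunchConfigurationName: { get_resource: launch_config }
      MinSize: 1
      MaxSize: 3
      LoadBalancerNames: [ { get_resource: lb } ]
```

The key point is the LoadBalancerNames property: because the scaling group references the lb resource there, Heat updates the pool membership whenever instances are added or removed.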


answered 2014-07-28 16:25:20 -0500 by rooter

Just start multiple heat-api processes, as many as you want. Then configure the Keystone endpoint to point at your HAProxy host:

[root@openstack1 ~]# keystone endpoint-list | grep 8004
| e41899cd971b437182f1be06ed98a129 | DefaultRegion |  http://haproxyhost:8004/v1/$(tenant_id)s  |         http://haproxyhost:8004/v1         |  http://haproxyhost:8004/v1/$(tenant_id)s  | 7648a4b19fc64cbdb60e23aa42fa369a |

Then point HAProxy at those heat-api processes. Here's one example HAProxy config, but there are many valid ways of doing it. Tweak the timeouts to your liking.

root@haproxyhost: cat /etc/haproxy/haproxy.cfg

global
  daemon

defaults
  mode http
  log 127.0.0.1:514 local4
  maxconn 10000
  timeout connect 4s
  timeout client 180s
  timeout server 180s
  option redispatch
  retries 3
  balance roundrobin

listen heatAPI
  bind 0.0.0.0:8004
  server heatnode1 heatnode1:8004     check inter 3s rise 2 fall 3
  server heatnode2 heatnode2:8004     check inter 3s rise 2 fall 3

Or, run multiple heat-api processes on the same node, under different ports:

listen heatAPI
  bind 0.0.0.0:8004
  server heatnode1 heatnode1:8004      check inter 3s rise 2 fall 3
  server heatnode1 heatnode1:18004     check inter 3s rise 2 fall 3

Note: using "mode tcp" instead of "mode http" is also possible, and results in better performance.
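As a sketch, a TCP-mode variant of the config above (same assumed hostnames) would drop the HTTP-specific bits:

```
defaults
  mode tcp
  maxconn 10000
  timeout connect 4s
  timeout client 180s
  timeout server 180s
  option redispatch
  retries 3
  balance roundrobin

listen heatAPI
  bind 0.0.0.0:8004
  server heatnode1 heatnode1:8004 check inter 3s rise 2 fall 3
  server heatnode2 heatnode2:8004 check inter 3s rise 2 fall 3
```

Either way, you can check the file for syntax errors with "haproxy -c -f /etc/haproxy/haproxy.cfg" before reloading.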


Comments

This is a great answer, but to a different question ("How can an operator set up load balancing for heat-api?") than the one the questioner was asking ("How can an end user configure load balancing in their application deployed with Heat?").

zaneb ( 2014-08-04 11:58:14 -0500 )



Stats

Seen: 1,099 times

Last updated: Aug 04 '14