johnsom's profile - activity

2020-08-31 16:26:03 -0500 commented answer Openstack Octavia - How to Load balance Web Applications

To my knowledge Senlin does not provide network load balancing, but it does have a load-balancing policy that configures Octavia for a cluster. See for information on that policy.

2020-08-24 14:36:31 -0500 answered a question Unable to connect to an amphora instance from a controller

Hi, currently kolla-ansible doesn't set up the required networking, but a patch has been posted to improve that:

2020-08-18 10:57:13 -0500 answered a question Openstack Octavia - How to Load balance Web Applications

Hi there, glad to have you trying out Octavia.

The document you are looking for is the load balancing cookbook:

It is included in our end-user section of the Octavia documentation here:


2020-08-10 11:01:37 -0500 answered a question How to install octavia when using neutron-openvswitch ?

I assume that the brq bridge comes from some other component in the Ubuntu OpenStack setup.

The way neutron on devstack does it may help you however:

There it basically pops an interface out of OVS.

2020-07-20 11:09:58 -0500 answered a question We are checking load balancing option in openstack, Octavia is the one we have come up with. But it says it is currently under development. Can anyone provide update on this

Yeah, as the other comment mentioned, Octavia is under active development and is not an archived project.

It is the load balancing project for OpenStack.

2020-06-01 12:54:46 -0500 answered a question Octavia how to recreate deleted Amphora

Hi there,

You should be able to use the "openstack loadbalancer failover" command to rebuild those amphora. This is the intended mechanism for repairing a load balancer.

That said, there are some bugs in the failover flows in current versions of Octavia, such as Stein. We are working to get those fixed now, but in most cases you will not have a problem with the loadbalancer failover command.

This patch: is the proposed fix for the edge cases where an "openstack loadbalancer failover" may not be successful on the existing versions of Octavia. Once this patch is approved we will work towards backporting it to the stable versions.
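As a sketch, the failover command mentioned above looks like this (the load balancer ID/name is a placeholder):

```shell
# Find the load balancer whose amphorae need rebuilding
openstack loadbalancer list

# Trigger a failover, which rebuilds the amphorae behind it
openstack loadbalancer failover <lb-id-or-name>
```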


2020-05-01 09:44:37 -0500 answered a question Is there any configuration option to stop source natting at octavia ?

Hi there, With the default "amphora" driver, no. It is a full-proxy style load balancer.

However we provide a few ways you can pass the original client IP address through to the backend member servers. You can enable HTTP header insertion and enable the X-Forwarded-For option. We also support the "PROXY" protocol which will insert information about the client into the TCP stream.

Also note, Octavia does not require you to use a floating IP. You can use public (external) networks directly for your VIP unless the operator has disabled it.
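To sketch the two client-IP options mentioned above with the openstack client (listener/pool names are placeholders):

```shell
# Insert the original client IP as an X-Forwarded-For header
openstack loadbalancer listener create --name listener1 \
  --protocol HTTP --protocol-port 80 \
  --insert-headers X-Forwarded-For=true lb1

# Or use the PROXY protocol toward the members
# (the member servers must be configured to understand it)
openstack loadbalancer pool create --name pool1 \
  --listener listener1 --protocol PROXY --lb-algorithm ROUND_ROBIN
```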


2020-04-06 07:02:18 -0500 received badge  Nice Answer (source)
2020-03-24 10:31:26 -0500 answered a question What's the maximum pool member in Load Balancer?

FYI, LBaaSv2 is the API specification implemented by both the end-of-life neutron-lbaas[1] and Octavia.

Both values depend on which driver you are using. Some drivers have limitations that other drivers do not. I can't really speak to the limitations of the individual neutron-lbaas drivers.

If you are using Octavia instead of neutron-lbaas, the maximum number of members is dependent on the amount of RAM allocated to the amphora instance. We understand one user has a pool with about 400 members in production with stock settings. At some point as you approach 65,000 members, the performance of the member servers will impact the ability to service the health checks in a timely manner, however you could disable health monitoring to bypass this. Octavia also allows all 65534 ports for the VIP.


2020-03-03 13:24:32 -0500 answered a question Octavia installation on psql( Alembic DB migration for postgresql)


This is correct. OpenStack overall (not just Octavia) stopped testing with postgresql some time ago [1] (Pike I think).

If you have a need to use a postgresql database with Octavia I would recommend the following:

1. Open a story on Storyboard for Octavia:!/p... with the bug you are seeing. Maybe another to track an RFE for adding postgresql support.
2. Join one of the Octavia IRC weekly meetings and raise the discussion of adding support/testing for postgresql.

We may have some base work in getting devstack/tempest gates to support postgresql.

Also, let the team know if you can help with this effort.


2020-02-17 15:12:09 -0500 received badge  Editor (source)
2020-02-17 15:09:10 -0500 answered a question why is octavia not using keystone public endpoint to validate tokens?

Hi there,

There are two settings in Octavia that you will need to set for Octavia when using an alternate keystone endpoint:

For the keystone client code, it is:

auth_url = https://<ip address>/identity

(I just noticed this is not in the keystonemiddleware configuration documentation we import, how odd)

As well as:

interface = public

(however this is not as important for this section)

It is also a good practice to set www_authenticate_uri.

The [keystone_authtoken] section is how Octavia validates tenant tokens and comes directly from the keystone client.

You will also need to configure the [service_auth] section. This is how Octavia gets a token to use with other OpenStack services such as nova and neutron.

auth_url = https://<ip address>/identity


As well as:

interface = public
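Putting those settings together, a minimal sketch of the relevant octavia.conf sections (the endpoint address is a placeholder):

```ini
[keystone_authtoken]
www_authenticate_uri = https://<ip address>/identity
auth_url = https://<ip address>/identity
interface = public

[service_auth]
auth_url = https://<ip address>/identity
interface = public
```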


2020-01-13 07:55:42 -0500 commented answer octavia distribution Algorithm

We have not written the "guide" yet that would define these in more depth, but they are similar to HAProxy's definition here:

2019-12-06 11:17:38 -0500 answered a question octavia distribution Algorithm


This is covered in our API documentation under the "Create Pool" section:

Note that features, such as session persistence, are configured in addition to the load balancer algorithm.

Octavia currently supports the following load balancing algorithms: LEAST_CONNECTIONS, ROUND_ROBIN, SOURCE_IP, and SOURCE_IP_PORT

Much of the functionality of the load balancer is available via the API. It does not accept scripts from end users to modify its behavior. Operators can make proprietary changes to the images and configuration files if they need to. Some drivers may accept scripts as part of the "flavor" definition.
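As a sketch, the algorithm is selected at pool creation time (listener/pool names are placeholders):

```shell
openstack loadbalancer pool create --name pool1 \
  --listener listener1 --protocol HTTP \
  --lb-algorithm LEAST_CONNECTIONS
```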

2019-10-28 10:05:05 -0500 answered a question opensatck ansible octavia installation

The Octavia role for OpenStack Ansible has documentation:

2019-10-16 09:31:32 -0500 answered a question Any document providing step by step ADD Octavia installation

Adding the tripleo tag, as the Octavia team will likely not have answers for tripleo questions.

2019-09-08 14:12:48 -0500 answered a question Octavia LB flavor recommendation for Amphora VMs

For small deployments, the 1GB RAM, 1vCPU, 2GB disk (3GB with centos, etc) should work fine for you. You might even be able to drop the RAM lower if you will not be doing TLS. For example, my devstack amphora instance is allocated 1GB RAM, but is only using less than half that. (just because the flavor says 1GB it doesn't mean it uses all of that all of the time) Kernel page de-duplication will also help with actual consumption as the amphora images are mostly the same.

If you are doing really large numbers of connections, and you are logging the tenant traffic flows locally, you might want to increase the available disk. Normal workloads will be fine with a smaller disk as the amphora do include log rotation. If you do not need the flow logs, there is a configuration setting to disable them.

The main tuning you might want to do is setting the maximum amount of RAM it can consume. If you have a very large number of concurrent connections or are using TLS offloading, you might want to consider increasing the amount of RAM the amphora can consume. The HAProxy documentation states that it normally (non-TLS offload) uses around 32kB of RAM per established connection. You might start with that and see how that aligns to your application/use case.
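As a rough back-of-the-envelope sketch of that sizing (the 256MB base figure for the OS and HAProxy itself is an assumption for illustration, not an Octavia default):

```shell
# ~32 kB per established (non-TLS) connection, per the HAProxy docs,
# plus an assumed base of 256 MB for the OS and HAProxy itself.
CONNECTIONS=10000
PER_CONN_KB=32
BASE_MB=256
TOTAL_MB=$(( BASE_MB + CONNECTIONS * PER_CONN_KB / 1024 ))
echo "${TOTAL_MB} MB"
```

So roughly 568 MB for 10,000 concurrent non-TLS connections under these assumptions.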

In testing I have done, adding additional vCPUs has very little impact on the performance (a small bump with the second CPU, as the NIC interrupts can be split from the HAProxy processes). You can get pretty high throughput with a single vCPU. We expect once HAProxy 2.0 stabilizes and is available (the distros are not yet shipping it), we will look at enabling the threading support to vertically scale the amphora by adding vCPUs. Versions prior to 2.0 did not have good threading and the multi-process model breaks a bunch of features. If you really need more CPU now, you can always build a custom image with 2.0.x in it and use the "custom HAProxy template" configuration setting to add the threading settings.

Now with Octavia flavors, you can define flavors that select different nova flavors for the amphora at load balancer creation. For example, you can have a "bronze", "silver", "gold", each with different RAM allocations.
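A sketch of defining such a flavor with the client (names and the nova flavor ID are placeholders, and the flavor-data key is an assumption to verify against the amphora provider's flavor capabilities for your release):

```shell
# Create a flavor profile that maps to a specific nova flavor
openstack loadbalancer flavorprofile create --name gold-profile \
  --provider amphora --flavor-data '{"compute_flavor": "<nova flavor id>"}'

# Expose it as a user-selectable Octavia flavor
openstack loadbalancer flavor create --name gold \
  --flavorprofile gold-profile --enable

# Users then pick it at load balancer creation time
openstack loadbalancer create --name lb1 --flavor gold --vip-subnet-id <subnet>
```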

We would also love to hear what you find with your deployment and applications.

2019-08-28 14:18:21 -0500 answered a question octavia controller cant connect to amphora

Hi there. Thank you for the in-depth information! Super helpful.

The worker log indicates that it was able to connect, but the amphora agent is not responding as expected.

I suspect what has happened is you are running a version of Octavia that is older than the master branch (train) with the master branch version of the amphora-agent.

The master branch required a version bump of the amphora agent API that is not compatible with older controllers. Older images are compatible with the new controllers.

You will need to get an amphora image that matches the release of your Octavia controllers.

One option is to build one. Clone the Octavia repository, then:

export DIB_REPOLOCATION_amphora_agent=<path to the checked out octavia git repo above>
export DIB_REPOREF_amphora_agent=stable/stein

This will create an Ubuntu 18.04 image with the stable/stein amphora agent.

For more information on image building see the readme:

2019-08-28 12:25:07 -0500 answered a question Octavia amphora curl Got a 404

One thing to note, the "Train" version of the amphora agent is now at version 1.0, so the URL now includes the 1.0 version prefix. There is also now a version discovery document at /.

Aside from that, did you figure out what is causing the amphora agent service to fail? We are not aware of any issues in the amphora-agent at this time.

2019-08-28 12:20:47 -0500 answered a question Octavia: Could not retrieve certificate when create HTTPS listener using application credentials

This depends on the version of Octavia you are running.

As of the Rocky release, Octavia will set the appropriate ACLs in barbican on behalf of the user.

If you are using an older version of Octavia, you will need to add the ACLs manually. This is documented in the Queens version of the Octavia documentation here:
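For those older releases, the manual step looks roughly like this (a sketch of the barbican CLI usage; the user ID is the Octavia service account and the href is a placeholder):

```shell
# Grant the Octavia service user read access to the secret/container
openstack acl user add -u <octavia service user id> <secret or container href>
```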

2019-08-21 15:34:22 -0500 answered a question Octavia Health-check IP and Security group

Hi there, and thank you for the kind words.

You are correct that requests from the load balancer to the "member" servers comes from a dynamic IP address on the backend of the load balancer.

This allows users to add or remove members that are on both public and private subnets alike. When a user adds a member to the load balancer pool, we hot plug the network and subnet into the load balancer (If it is not already). This hot-plug process causes neutron to issue us an IP address on that subnet.

The other action that makes this tricky is the load balancer failover mechanism: should a load balancer have a failure, the Octavia controllers will replace it with a working load balancer. This applies to both standalone and active/standby load balancers. When this failover occurs, the source IP will change.

Currently we don't have a mechanism in Octavia that would allow you to set a security group on the member server ports that would restrict it down to only the load balancer source IP. I think there is an open bug for this use case, but I was unable to find it in storyboard. One proposal was to leverage FWaaS shared security groups, but this functionality has not yet landed in the FWaaS project.

There are a couple of workarounds until we come up with a solution to this:

1. Put the members on a private network/subnet and firewall this at the router, or restrict access to members of the subnet only. Since Octavia will plug a port into the network/subnet, it will be local behind this firewall.
2. Add a security group to the members that only allows the member subnet access.
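The security-group workaround above can be sketched like this (the CIDR, port, and group name are placeholders):

```shell
# Allow application traffic only from the member subnet
openstack security group create member-subnet-only
openstack security group rule create --protocol tcp --dst-port 80 \
  --remote-ip 192.0.2.0/24 member-subnet-only
```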

I hope that helps.

2019-08-21 10:07:14 -0500 answered a question ERROR neutron.db.metering.metering_rpc [req-4d900035-57dd-47ca-880c-0df380436e7a - - - - -] Unable to find agent on host

This is an RDO and/or neutron issue, so I am going to retag this.

2019-08-21 10:02:51 -0500 answered a question how to install and configure openstack octavia (LBaaS) using rdo poc?)

You will need to tag this question with neutron/OVN and/or RDO, as Octavia has no limitation on using OVN L2.

2019-08-21 10:01:32 -0500 answered a question How to Install and configure Open stack LBaaS (Octavia) using rdo project with ovn neutron L2 backgroungd agent?

You will need to tag this question with neutron/OVN and/or RDO, as Octavia has no limitation on using OVN L2.

2019-04-03 19:49:56 -0500 commented answer Octavia - instance is not reachable via the lb-mgmt-net

Yes, I have commented there. The network message is likely due to the SSL error, since it is not able to successfully connect to the instance due to the SSL issue.

2019-04-03 19:47:02 -0500 commented answer OCTAVIA SSL ERROR

No, this error is clear (as far as openssl errors go): either the client_ca.cert.pem file is bad or the client.cert-and-key.pem file is bad.

2019-04-03 19:44:51 -0500 answered a question How to disable failover fuction of amphora in Octavia?


Yes, that setting would cause the health manager to not fail over any instance for up to 604800 seconds.
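For reference, a sketch of such an octavia.conf fragment (the option name is inferred from context, so double-check it against your release's configuration reference):

```ini
[health_manager]
# With a very large timeout, amphorae are effectively never declared
# failed, so no automatic failover is triggered.
heartbeat_timeout = 604800
```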

We don't use AMQP for the health monitoring, so that should not impact this, but having the compute nodes down will certainly cause Octavia to attempt to repair the load balancer amphora on those nodes.

With active/standby load balancers, Octavia should "fail safe" if it is unable to rebuild the failed amphora. This state will leave the load balancer in provisioning status ERROR, but one of the two amphora instances will still be up and handling traffic. In the case of the node being down, it will resume operation once the compute node is brought back up (aside from TLS offloading, as the secure content will likely be gone).

I have put in an RFE to make a more clear setting for this:!/s...


2019-04-03 19:27:29 -0500 answered a question Any document providing step by step Octavia installation on ubnutu?

Hi there, sorry to hear you are having trouble.

First I want to mention that neutron-lbaas is deprecated and scheduled to be retired this year. See the FAQ for more information:

I have updated that page you referenced (I didn't even know it existed and it's very old/outdated) to reflect the status of neutron-lbaas.

Now, on to Octavia which is what you want.

All of the Octavia documentation is located here:

There is an installation overview guide here: You will also find helpful guides in this section, specifically the "Octavia Certificate Configuration Guide".

Unfortunately we have not yet been able to write a detailed install guide for doing it by hand.

A reference for the steps we execute to install for our testing gates is the devstack plugin script here:

There are also folks in the #openstack-lbaas IRC channel that can help answer questions.

I hope this helps, Michael

2019-02-19 10:08:37 -0500 answered a question lbaasv2 reference beetwen HM and Pool

Hi Gleb,

The quick answer to your question is yes, when you delete a pool it cascade deletes the health monitor as the health monitor is a child of the pool.

The current documentation is available here: From this main page, you may find the API reference: and the cookbook: helpful.

Also note, neutron-lbaas will be retired this year:

I hope this helps,


2019-02-11 11:10:12 -0500 answered a question OCTAVIA SSL ERROR

This implies that either your client_ca.cert.pem file is bad or the client.cert-and-key.pem file is bad.

Please double check those files against the instructions here:

2019-01-23 09:31:43 -0500 answered a question Octavia - instance is not reachable via the lb-mgmt-net

Hi there,

From a configuration perspective, yes, the lb-mgmt-net ID goes into the amp_boot_network_list configuration setting in the octavia.conf.

As for the TLS configuration, there is a guide that covers the only required configuration for this:

The /var/lib/octavia/certs folder is inside the amphora instances and is fully managed by the controller processes. There is no manual configuration required there.

2019-01-07 15:34:44 -0500 answered a question octavia SSLError : BAD SIGNATURE

There were a few last minute typos in that patch that was under review. It has now merged here:

2018-12-10 11:16:43 -0500 commented answer SSL BAD SIGNATURE octavia amphora

BAD_SIGNATURE is openssl saying it cannot validate the certificate that was presented to it. So either the cert being presented is bad/incorrect, or the CA certificate is not correct on the controller.

2018-12-03 18:13:29 -0500 commented answer SSL BAD SIGNATURE octavia amphora

Those are automatically filled in at amphora boot time when the configuration file is created. Those should not be set on the controllers.

2018-11-30 16:05:28 -0500 answered a question SSL BAD SIGNATURE octavia amphora

Yes, those certificates are installed into the amphora at nova boot time, so they will not be updated after a controller configuration change. A new amphora will need to be booted, either by rebuilding the load balancer, or using the amphora failover API.

2018-11-30 16:03:56 -0500 received badge  Organizer (source)
2018-11-15 18:57:44 -0500 answered a question Could not connect to instance. Retrying.: SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)

This means that the server CA is mis-configured on your controllers.

[haproxy_amphora] server_ca and/or [certificates] ca_certificate is pointing to the wrong CA certificate.

I would go through the certificates document and double check your CA certificate configuration.


2018-10-30 13:22:25 -0500 commented answer Could not connect to instance. Retrying.: ... [Errno 113] No route to host. provisioning_status stays on PENDING_CREATE

Yes, ACTIVE in nova just means it started the hypervisor, it does not mean the virtual machine has booted successfully. For the SSL issue, I have commented on the other Ask OpenStack topic for that issue. Please see

2018-10-30 13:18:30 -0500 answered a question Could not connect to instance. Retrying.: SSLError: [SSL] PEM lib (_ssl.c:2554)

Without the rest of the warning message there, it's hard to tell what the exact SSL issue is, but I recently wrote a step-by-step certificates configuration guide for Octavia that should address this issue.

There is some configuration issue with how the controller and amphora certificates are setup.


2018-10-30 10:50:05 -0500 received badge  Nice Answer (source)
2018-10-29 13:35:45 -0500 answered a question how to create an amphora image for octavia

Please see the README file available here:

It explains the usage of the script for building amphora images.

If you want a CentOS 7 amphora image, you would run the following command line:

diskimage-create -i centos -d 7 -s 3

When it finishes, you will have the image in the diskimage-create directory with the name of "amphora-x64-haproxy.qcow2"


2018-10-29 13:31:01 -0500 answered a question Could not connect to instance. Retrying.: ... [Errno 113] No route to host. provisioning_status stays on PENDING_CREATE

Here are a couple of thoughts:

  1. VirtualBox is VERY slow to boot service VMs. I would check the nova console logs to make sure your service VM has finished booting. While it is booting, Octavia will keep retrying to connect with the above message.
  2. Octavia does not have any dependency on the neutron ML2 selection. Linux bridge and OVS both work just fine.
  3. This initial network connection from the Octavia worker process to the amphora is over the "lb-mgmt-net", which is a neutron network setup by the operator to handle the command/control traffic to/from the service VMs. It uses the network configured in the "[controller_worker]amp_boot_network_list" setting. Check that this network is the network you intend the controller processes to communicate with the amphora over.

I suspect the lb-mgmt-network is not setup correctly or the configuration has the wrong network ID in it.
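For reference, a minimal sketch of the setting from point 3 (the UUID is a placeholder):

```ini
[controller_worker]
# Neutron network(s) the amphorae are booted on for command/control traffic
amp_boot_network_list = <lb-mgmt-net uuid>
```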


2018-10-04 17:24:04 -0500 answered a question octavia loadbalancer ip ( is given by noop_driver

Oh, it looks like you have an unconfigured Octavia deployment.

The noop or no-op drivers are there for testing. They accept requests, but don't actually provision any resources from the other services.

It might help to look at the configuration file from one of our gate tests:

This should give you an idea of the required settings. Note that many of the timeouts are excessively high for the gate tests and should be adjusted for your environment once you are done testing.


2018-10-04 17:20:13 -0500 commented answer openstack loadbalancer create .... give an ip address out of range of my network ( !!

Oh. The no-op drivers are there for testing. They take no action against the other services. It might help to reference one of our gate test configuration files:

2018-10-02 13:23:08 -0500 answered a question openstack loadbalancer create .... give an ip address out of range of my network ( !!

Using the command: "openstack loadbalancer create --project service --name lb1 --vip-subnet-id selfservice"

Would allocate the VIP address and ports from the "selfservice" subnet in neutron for the project "service". Since you did not ask for a specific IP address when creating the VIP, neutron allocates an IP address for us and provides it either on the port configuration or via DHCP.

If you got an IP address that is not on the neutron subnet "selfservice", please check your neutron configuration and logs to see why it issued Octavia such an address for the VIP port.

Also note, the VIP port should be visible in a port list to the project "service" you specified at load balancer creation time.


2018-10-02 13:16:54 -0500 answered a question Openstack LBaaS High Availability

LBaaS High Availability (HA) is provided by the Octavia project. Octavia can run with an HA control plane and can be configured to use an Active/Standby topology that provides highly available load balancers.

Octavia replaces the now deprecated neutron-lbaas project.


2018-10-02 13:13:42 -0500 answered a question how to configure octavia in queens

Agreed, Octavia is still in need of a detailed installation guide.

What we have at this point is an overview/quick-start guide:

2018-09-05 11:39:29 -0500 answered a question load balancer my openstack service to AWS

Hi Rania,

If you are using Octavia for load balancing, you can specify member servers that are outside your openstack cloud. You just need to specify a subnet that has a route to those resources at member creation time.


2018-09-04 10:06:00 -0500 answered a question openstack queens octavia load balancer

Hi there,

I am not an RDO expert, so I'm going to take some guesses here.

From the look of the bottom error, it appears that the database migration did not occur. In basic devstack deployments we use a tool to do this migration:

octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head

Give that a try,