
Octavia LBaaSv2 driver error

asked 2018-03-26 12:54:25 -0500


updated 2018-03-26 13:01:46 -0500

Hello everyone,

I couldn't find step-by-step documentation about Octavia. I installed and configured Octavia on an HA Ocata cluster according to the instructions in the devstack notes and the lbaas docs. After installing the Horizon plugin, I tried to create a load balancer without success. On both the dashboard and the CLI, I get the following error when I try to create or delete a load balancer.


Neutron server returns request_ids: ['req-82a63bd7-3a34-4f7d-9dd0-ab57e9546f5e']
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/neutronclient/", line 877, in run_subcommand
    return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/lib/python2.7/dist-packages/neutronclient/", line 114, in run_command
    return
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/", line 324, in run
    return super(NeutronCommand, self).run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/", line 90, in run
    return self.take_action(parsed_args) or 0
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/", line 506, in take_action
    self._bulk_delete(obj_deleter, neutron_client,
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/", line 521, in _bulk_delete
    self.delete_item(obj_deleter, neutron_client, item_id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/", line 561, in delete_item
    obj_deleter(_id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/", line 1103, in delete_loadbalancer
    (lbaas_loadbalancer))
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/", line 357, in delete
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/", line 338, in retry_request
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/", line 301, in do_request
    self._handle_fault_response(status_code, replybody, resp)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/", line 276, in _handle_fault_response
    exception_handler_v20(status_code, error_body)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/", line 92, in exception_handler_v20
    request_ids=request_ids)
InternalServerError: Driver error: The request you have made requires authentication. (HTTP 401) (Request-ID: req-3c1f97c9-82c8-4fa8-9e7e-720b730f9539)
Neutron server returns request_ids: ['req-82a63bd7-3a34-4f7d-9dd0-ab57e9546f5e']

Here is my octavia.conf:


[DEFAULT]
transport_url = rabbit://openstack:rabbit_pass@testCont01,openstack:rabbit_pass@testCont02,openstack:rabbit_pass@testCont03
api_handler = queue_producer
bind_host =
bind_port = 9876

[amphora_agent]
agent_server_ca = /etc/ssl/private/
agent_server_cert = /etc/ssl/private/

[certificates]
ca_private_key_passphrase = None
ca_private_key = /etc/octavia/certs/private/cakey.pem
ca_certificate = /etc/octavia/certs/ca_01.pem

[controller_worker]
amp_boot_network_list = 4652a6ce-e08b-4231-97b1-a70ff850452e
amp_image_tag = octavia-amphora-image
amp_secgroup_list = c6e919ee-95d5-4eed-a74e-cd1eb08a725c
amp_flavor_id = ba60c149-096b-4d42-9683-d627832306c7
amp_image_owner_id = 1fa597e23ac64a00a324094f7f65e03aa
amp_ssh_key_name = service
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
workers = 2
amp_active_retries = 100
amp_active_wait_sec = 2
loadbalancer_topology = SINGLE
client_ca = /etc/octavia/certs/ca_01.pem

[database]
connection = mysql+pymysql://octavia:KelebeK@testcont/octavia

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
base_path = /var/lib/octavia
base_cert_dir = /var/lib/octavia/certs
connection_max_retries = 1500
connection_retry_interval = 1
rest_request_conn_timeout = 10
rest_request_read_timeout = 120

[health_manager]
bind_port = 5555
bind_ip =
controller_ip_port_list = ,,
heartbeat_key = insecure

[house_keeping]
load_balancer_expiry_age = 3600
amphora_expiry_age = 3600
spare_amphora_pool_size = 0

[keepalived_vrrp ... (more)


2 answers


answered 2018-03-26 23:01:55 -0500


Hi Huseyin,

Here are a couple of thoughts:

1. If you don't need neutron-lbaas, it is best to deploy without it and just use Octavia (Pike forward, but you can run Octavia at a newer version than the rest of your cloud).

2. The error means some part of the system does not have a keystone token that allows it to access the required services. This could be because:

a. The neutron API service is not configured to load the neutron-lbaas.conf. The list of config files it loads is on its command line, which you can see via "ps -ef"; make sure the way it is started actually picks up neutron-lbaas.conf. One way to test this is to just add the neutron-lbaas.conf content into your neutron.conf file.

b. Make sure your octavia.conf has the v1 API enabled and that the URL neutron-lbaas uses to access Octavia is the v1 API endpoint (not advertised in keystone endpoints). This is a private API (it should not be accessible to users) that neutron-lbaas uses to access Octavia.

c. This appears to be a neutron client stack trace. Make sure your user credentials work with the neutron client; for example, make sure you can run "neutron port-list".
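For concreteness, here is a minimal sketch of the config fragments involved. The driver class path, endpoint URL, and file paths are assumptions for illustration; check them against the docs for your release before copying anything:

```ini
# neutron_lbaas.conf content that neutron-server must actually load
# (merge into neutron.conf if the service is not started with a
#  --config-file argument pointing at this file)
[service_providers]
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default

[octavia]
# The private Octavia v1 endpoint that neutron-lbaas calls directly;
# it is intentionally not advertised in the keystone catalog.
base_url = http://127.0.0.1:9876
```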

Also, there are usually Octavia cores around in #openstack-lbaas on freenode IRC. Feel free to ask in the channel for help.

Michael (johnsom)


answered 2018-03-27 07:35:56 -0500


updated 2018-03-27 08:47:10 -0500

Hi Michael,

Thanks a lot for your answer. As you said, neutron-server was running without --config-file neutron_lbaas.conf. So I added the contents of neutron_lbaas.conf to neutron.conf and the problem disappeared.
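The check boils down to inspecting the neutron-server command line for the extra config file. A sketch of that check follows; the CMDLINE value is hardcoded here for illustration, and on a real controller you would take it from ps instead:

```shell
#!/bin/bash
# Sketch: did neutron-server start with neutron_lbaas.conf?
# On a real controller, replace the hardcoded value with:
#   CMDLINE=$(ps -o args= -C neutron-server)
CMDLINE="/usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf"
case "$CMDLINE" in
  *neutron_lbaas.conf*) STATUS=loaded ;;
  *) STATUS=missing ;;
esac
echo "neutron_lbaas.conf is $STATUS"
```

If the file is missing from the command line, either add it to the service unit or merge its contents into neutron.conf, as above.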

Now I have another problem. When I create a load balancer, its provisioning_status changes to ERROR in the dashboard, and then I cannot delete the load balancer either. But when I check the running instances, I can see the amphora instances.

root@testCont01:~# openstack server list
+--------------------------------------+----------------------------------------------+--------+---------------------------------------------+---------------------+
| ID                                   | Name                                         | Status | Networks                                    | Image Name          |
+--------------------------------------+----------------------------------------------+--------+---------------------------------------------+---------------------+
| a808c1d4-7332-4353-b9b8-56792370bd1c | amphora-96627539-1984-4b6d-87bd-ce8e6529fccb | ACTIVE | LB_management_network=; demo_internal_net=  | amphora-x64-haproxy |
| 6f443bfc-4412-4f7d-95be-f4bf1444bdc2 | amphora-4081d1d2-eb26-45a0-81cf-4957945993a7 | ACTIVE | LB_management_network=; demo_internal_net=  | amphora-x64-haproxy |
| 978b2960-9b5b-44bd-860d-8bbb5551e56c | amphora-e3f69972-cb40-4fa5-8ec7-d42bceea8df0 | ACTIVE | LB_management_network=; demo_internal_net=  | amphora-x64-haproxy |
| ffc60669-9dbf-4d85-86c2-fcb77674d767 | amphora-1583859f-9774-4b4b-8c4d-b8217b12e5f4 | ACTIVE | LB_management_network=; demo_internal_net=  | amphora-x64-haproxy |
+--------------------------------------+----------------------------------------------+--------+---------------------------------------------+---------------------+

I checked all the octavia logs, and the only thing I can find is the below warning.

2018-03-27 16:12:53.970 29479 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
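That warning is the controller worker failing to reach the amphora agent's REST API over the management network. Reduced to a plain TCP check (the real driver also does a TLS handshake), a probe looks like the sketch below; AMP_IP is a placeholder (a TEST-NET address), to be replaced with an amphora's lb-mgmt-net address, and 9443 is the agent's default port:

```shell
#!/bin/bash
# Sketch: probe the amphora agent port the retrying REST driver is
# trying to reach. Uses bash's /dev/tcp redirection for the TCP check.
AMP_IP=192.0.2.10   # placeholder: substitute the amphora's lb-mgmt-net IP
AMP_PORT=9443       # amphora agent default port
if timeout 3 bash -c "exec 3<>/dev/tcp/$AMP_IP/$AMP_PORT" 2>/dev/null; then
  REACHABLE=yes
else
  REACHABLE=no
fi
echo "amphora agent reachable: $REACHABLE"
```

If this fails from the controller for every amphora, the problem is management-network plumbing (security groups, routing, the controller's interface on the lb-mgmt-net) rather than certificates.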

On the neutron side, all the ports belonging to the load balancer IPs are DOWN.

| 2177f7cc-857b-407c-8f6a-ac5806ede6b5 | loadbalancer-d05dd82d-3a02-400c-8a5e-bdd19de13aa7 | fa:16:3e:6d:32:57 | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 26a792fc-0b5f-476d-a7ad-8c7f67b23075 | loadbalancer-79490b1c-cf92-4d84-be6c-b301d035ed9f | fa:16:3e:e0:f2:ac | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 55733be1-b61e-47ac-8323-09ea1c9dbb52 | loadbalancer-b47ddf10-b6c2-4db7-9794-89b8660ae278 | fa:16:3e:4a:d3:35 | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 65f2b610-770f-4f8d-9e76-89189e945b07 | loadbalancer-ef76286f-ef7f-4e64-91fb-7aa187e47c05 | fa:16:3e:80:f9:33 | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 91f3fcf9-37b8-437a-bb45-2b3374f5cae8 | loadbalancer-ef719b3a-9b24-4953-825b-5c24d91789ea | fa:16:3e:d5:6a:b9 | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| fda6093f-484e-4b3f-99c8-1e7988b2beaf | loadbalancer-a8ad6712-8f3d-488b-b337-61012c74b88c | fa:16:3e:17:a8:a6 | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| fe04003c-a331-4ba1-ac1a-8b43ba354ea9 | loadbalancer-941f5304-2f15-49d2-b96a-988b476ad870 | fa:16:3e:28:67:bb | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

The LB VRRP ports on the neutron side appear ACTIVE, but I cannot ping them from the active controller's network namespace (there is an ICMP rule in the security group).

root@testCont01:~# openstack port list | grep octavia
| 10cc1b60-2b68-4c54-9cbb-2b135d3de0a0 | octavia-lb-vrrp-1583859f-9774-4b4b-8c4d-b8217b12e5f4 | fa:16:3e:18:9f:ba | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| 3dfffe73-2e93-4b08-bb6a-2c8dcec62f86 | octavia-lb-vrrp-96627539-1984-4b6d-87bd-ce8e6529fccb | fa:16:3e:31:8b:8f | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| 648daa51-7857-469b-97a3-29c8c04c6b7a | octavia-lb-vrrp-e3f69972-cb40-4fa5-8ec7-d42bceea8df0 | fa:16:3e:8f:c6:b6 | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| 9ee56cd6-3c7c-43fe-abe4-c28d8b6727c5 | octavia-lb-vrrp-3e40d7dd-d212-45e3-b36a-4343b8499f30 | fa:16:3e:e4:17:fc | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| d223cf42-baa6-4d55-afa9-1c46a0ed6c32 | octavia-lb-vrrp-4081d1d2-eb26-45a0-81cf-4957945993a7 | fa:16:3e:8d:d8:6f | ip_address='', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

root@testCont02:/var/log/octavia# ip netns exec qrouter-4dd15da5-a1ad-4a07-a6fa-f5a73952fcf2 ping
PING ( 56(84) bytes of data.
From icmp_seq=1 Destination Host Unreachable
From icmp_seq=2 Destination Host Unreachable
From icmp_seq=3 Destination Host Unreachable

Any ideas or clues on how to debug this?

My best,




As we discovered today, the service account was out of quota causing the build to fail and the LB to go into error.
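The failure mode is simple arithmetic: when the service project's instance usage reaches its quota, nova refuses to boot new amphorae and the load balancer goes to ERROR. A sketch with made-up numbers (read the real values with "openstack quota show" for the project Octavia boots amphorae under):

```shell
#!/bin/bash
# Sketch of the quota-exhaustion failure described above.
# The numbers are illustrative placeholders, not real values.
QUOTA_INSTANCES=10
INSTANCES_IN_USE=10   # stale ERROR amphorae can pin usage at the quota
if [ "$INSTANCES_IN_USE" -ge "$QUOTA_INSTANCES" ]; then
  VERDICT="quota exhausted: amphora boot will fail"
else
  VERDICT="quota ok"
fi
echo "$VERDICT"
```

Deleting the leftover ERROR amphorae (or raising the service project's quota) frees the headroom for new builds.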

johnsom ( 2018-03-29 11:59:41 -0500 )

