
Hi Michael,

Thanks a lot for your answer. As you said, neutron-server was running without --config-file neutron_lbaas.conf. Once I added the contents of neutron_lbaas.conf to neutron.conf, the problem disappeared.
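For anyone hitting the same thing, the relevant entries are roughly the following (a sketch assuming the neutron-lbaas v2 plugin with the Octavia provider; the exact plugin list and driver path depend on your neutron-lbaas version):

[DEFAULT]
# keep whatever plugins you already have in this list
service_plugins = router,lbaasv2

[service_providers]
# Octavia as the default LBaaS v2 provider driver
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default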

Now I have another problem. When I create a load balancer, its provisioning_status turns to ERROR in the dashboard, and then I cannot delete the load balancer either. There is no error message in the logs, but when I check the running instances, I can see the amphora instances:

root@testCont01:~# openstack server list
+--------------------------------------+----------------------------------------------+--------+-------------------------------------------------------------------+---------------------+
| ID                                   | Name                                         | Status | Networks                                                          | Image Name          |
+--------------------------------------+----------------------------------------------+--------+-------------------------------------------------------------------+---------------------+
| a808c1d4-7332-4353-b9b8-56792370bd1c | amphora-96627539-1984-4b6d-87bd-ce8e6529fccb | ACTIVE | LB_management_network=172.16.0.13; demo_internal_net=10.10.200.34 | amphora-x64-haproxy |
| 6f443bfc-4412-4f7d-95be-f4bf1444bdc2 | amphora-4081d1d2-eb26-45a0-81cf-4957945993a7 | ACTIVE | LB_management_network=172.16.0.6; demo_internal_net=10.10.200.31  | amphora-x64-haproxy |
| 978b2960-9b5b-44bd-860d-8bbb5551e56c | amphora-e3f69972-cb40-4fa5-8ec7-d42bceea8df0 | ACTIVE | LB_management_network=172.16.0.8; demo_internal_net=10.10.200.32  | amphora-x64-haproxy |
| ffc60669-9dbf-4d85-86c2-fcb77674d767 | amphora-1583859f-9774-4b4b-8c4d-b8217b12e5f4 | ACTIVE | LB_management_network=172.16.0.5; demo_internal_net=10.10.200.27  | amphora-x64-haproxy |
+--------------------------------------+----------------------------------------------+--------+-------------------------------------------------------------------+---------------------+
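To see how far the create got, I query the load balancer state directly, roughly like this (<LB_ID> is a placeholder; the second command assumes python-octaviaclient is installed):

# provisioning_status / operating_status as neutron-lbaas reports them
neutron lbaas-loadbalancer-show <LB_ID>
# the same from the Octavia API, if the client is available
openstack loadbalancer show <LB_ID>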

I checked all the Octavia logs, and the only thing I can find is the warning below.

2018-03-27 16:12:53.970 29479 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
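If I understand the driver correctly, this warning means the Octavia worker cannot reach the amphora agent on the management network (TLS on port 9443 by default). A quick reachability check from the controller, using one of the management IPs from the server list above, would look something like:

# basic reachability over the lb-mgmt-net
ping -c 3 172.16.0.13
# does the amphora agent answer TLS on 9443? (a handshake is enough; a clean HTTP reply is not expected)
openssl s_client -connect 172.16.0.13:9443 </dev/null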

On the Neutron side, all the ports belonging to the load balancer IPs are DOWN:

| 2177f7cc-857b-407c-8f6a-ac5806ede6b5 | loadbalancer-d05dd82d-3a02-400c-8a5e-bdd19de13aa7 | fa:16:3e:6d:32:57 | ip_address='10.10.200.20', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 26a792fc-0b5f-476d-a7ad-8c7f67b23075 | loadbalancer-79490b1c-cf92-4d84-be6c-b301d035ed9f | fa:16:3e:e0:f2:ac | ip_address='10.10.200.22', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 55733be1-b61e-47ac-8323-09ea1c9dbb52 | loadbalancer-b47ddf10-b6c2-4db7-9794-89b8660ae278 | fa:16:3e:4a:d3:35 | ip_address='10.10.200.16', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 65f2b610-770f-4f8d-9e76-89189e945b07 | loadbalancer-ef76286f-ef7f-4e64-91fb-7aa187e47c05 | fa:16:3e:80:f9:33 | ip_address='10.10.200.23', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| 91f3fcf9-37b8-437a-bb45-2b3374f5cae8 | loadbalancer-ef719b3a-9b24-4953-825b-5c24d91789ea | fa:16:3e:d5:6a:b9 | ip_address='10.10.200.25', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| fda6093f-484e-4b3f-99c8-1e7988b2beaf | loadbalancer-a8ad6712-8f3d-488b-b337-61012c74b88c | fa:16:3e:17:a8:a6 | ip_address='10.10.200.30', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
| fe04003c-a331-4ba1-ac1a-8b43ba354ea9 | loadbalancer-941f5304-2f15-49d2-b96a-988b476ad870 | fa:16:3e:28:67:bb | ip_address='10.10.200.33', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
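To dig into why a port stays DOWN, its binding details can be inspected, for example (port ID taken from the first row above; the binding_* fields only show up with admin credentials):

openstack port show 2177f7cc-857b-407c-8f6a-ac5806ede6b5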

The octavia-lb-vrrp ports on the Neutron side seem ACTIVE, but I cannot ping them from the active controller's network namespace (there is an ICMP allow rule in the security group):

root@testCont01:~# openstack port list | grep octavia
| 10cc1b60-2b68-4c54-9cbb-2b135d3de0a0 | octavia-lb-vrrp-1583859f-9774-4b4b-8c4d-b8217b12e5f4 | fa:16:3e:18:9f:ba | ip_address='10.10.200.27', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| 3dfffe73-2e93-4b08-bb6a-2c8dcec62f86 | octavia-lb-vrrp-96627539-1984-4b6d-87bd-ce8e6529fccb | fa:16:3e:31:8b:8f | ip_address='10.10.200.34', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| 648daa51-7857-469b-97a3-29c8c04c6b7a | octavia-lb-vrrp-e3f69972-cb40-4fa5-8ec7-d42bceea8df0 | fa:16:3e:8f:c6:b6 | ip_address='10.10.200.32', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| 9ee56cd6-3c7c-43fe-abe4-c28d8b6727c5 | octavia-lb-vrrp-3e40d7dd-d212-45e3-b36a-4343b8499f30 | fa:16:3e:e4:17:fc | ip_address='10.10.200.28', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |
| d223cf42-baa6-4d55-afa9-1c46a0ed6c32 | octavia-lb-vrrp-4081d1d2-eb26-45a0-81cf-4957945993a7 | fa:16:3e:8d:d8:6f | ip_address='10.10.200.31', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

root@testCont02:/var/log/octavia# ip netns exec qrouter-4dd15da5-a1ad-4a07-a6fa-f5a73952fcf2 ping 10.10.200.31
PING 10.10.200.31 (10.10.200.31) 56(84) bytes of data.
From 10.10.200.1 icmp_seq=1 Destination Host Unreachable
From 10.10.200.1 icmp_seq=2 Destination Host Unreachable
From 10.10.200.1 icmp_seq=3 Destination Host Unreachable
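To rule out the security group, the rules attached to the vrrp port can be checked, roughly like this (d223cf42-... is the vrrp port for 10.10.200.31 from the list above; <SECURITY_GROUP_ID> is a placeholder for whatever group the first command reports):

# find the security group(s) on the vrrp port
openstack port show d223cf42-baa6-4d55-afa9-1c46a0ed6c32 | grep security
# then list that group's rules
openstack security group rule list <SECURITY_GROUP_ID>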

Any idea or clue to help me debug this?

My best,

Huseyin