hcotuk's profile - activity

2018-11-27 23:45:55 -0500 received badge  Student (source)
2018-11-27 06:02:46 -0500 commented answer OpenStack NAT Logs

Thanks for your answer. As far as I can see, the linuxbridge implementation is still under development, so it seems I have to wait a bit longer.

2018-11-27 04:54:44 -0500 asked a question OpenStack NAT Logs

Hi there,

I am looking for a way to log VM inbound/outbound traffic to/from the Internet. Instances with floating IPs can be tracked by their floating IPs, but instances without floating IPs are NATted behind the project router's IP. To conform to regulations, I need to log source/destination IP addresses and port numbers with a valid timestamp. iptables can actually log this traffic with the "-j LOG" target, but neutron does not offer any flag to enable iptables logging.
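
For illustration, a rule like the one below could be added by hand inside the router namespace (the chain, log prefix, and <router-uuid> placeholder are just assumptions for the sketch; neutron may rewrite the namespace's iptables rules at any time):

ip netns exec qrouter-<router-uuid> iptables -I FORWARD 1 -m state --state NEW -j LOG --log-prefix "neutron-nat: "

The entries would then show up in the network node's kernel log (dmesg/syslog), but this is fragile since neutron manages those rules.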

Is there any way to do this?

My Best,

Huseyin

2018-06-13 12:40:36 -0500 received badge  Famous Question (source)
2018-04-12 10:08:49 -0500 received badge  Notable Question (source)
2018-03-28 06:22:54 -0500 received badge  Popular Question (source)
2018-03-27 07:35:56 -0500 answered a question Octavia LBaaSv2 driver error

Hi Michael,

Thanks a lot for your answer. As you said, neutron-server was running without --config-file neutron_lbaas.conf. So I added the contents of neutron_lbaas.conf to neutron.conf and the problem disappeared.
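
(For reference, the alternative would have been to keep the files separate and pass both to neutron-server; the paths below are the usual package defaults and only an assumption about this setup:)

neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/neutron_lbaas.conf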

Now I have another problem. When I create a load balancer, its provisioning_status turns to ERROR in the dashboard, and then I cannot delete the load balancer either. But when I check the running instances, I can see the amphora instances.

root@testCont01:~# openstack server list
+--------------------------------------+----------------------------------------------+--------+--------------------------------------------------------------------+---------------------+
| ID                                   | Name                                         | Status | Networks                                                            | Image Name          |
+--------------------------------------+----------------------------------------------+--------+--------------------------------------------------------------------+---------------------+
| a808c1d4-7332-4353-b9b8-56792370bd1c | amphora-96627539-1984-4b6d-87bd-ce8e6529fccb | ACTIVE | LB_management_network=172.16.0.13; demo_internal_net=10.10.200.34   | amphora-x64-haproxy |
| 6f443bfc-4412-4f7d-95be-f4bf1444bdc2 | amphora-4081d1d2-eb26-45a0-81cf-4957945993a7 | ACTIVE | LB_management_network=172.16.0.6; demo_internal_net=10.10.200.31    | amphora-x64-haproxy |
| 978b2960-9b5b-44bd-860d-8bbb5551e56c | amphora-e3f69972-cb40-4fa5-8ec7-d42bceea8df0 | ACTIVE | LB_management_network=172.16.0.8; demo_internal_net=10.10.200.32    | amphora-x64-haproxy |
| ffc60669-9dbf-4d85-86c2-fcb77674d767 | amphora-1583859f-9774-4b4b-8c4d-b8217b12e5f4 | ACTIVE | LB_management_network=172.16.0.5; demo_internal_net=10.10.200.27    | amphora-x64-haproxy |
+--------------------------------------+----------------------------------------------+--------+--------------------------------------------------------------------+---------------------+
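
(For reference, the load balancer provisioning status mentioned above can also be checked from the CLI; <lb-id> is just a placeholder:)

neutron lbaas-loadbalancer-list
neutron lbaas-loadbalancer-show <lb-id>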

I checked all the octavia logs, and the only thing I can find is the warning below.

2018-03-27 16:12:53.970 29479 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.

On the neutron side, all the ports belonging to the load balancer IPs are DOWN.

| 2177f7cc-857b-407c-8f6a-ac5806ede6b5 | loadbalancer-d05dd82d-3a02-400c-8a5e-bdd19de13aa7 | fa:16:3e:6d:32:57 | ip_address='10.10.200.20', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

| 26a792fc-0b5f-476d-a7ad-8c7f67b23075 | loadbalancer-79490b1c-cf92-4d84-be6c-b301d035ed9f | fa:16:3e:e0:f2:ac | ip_address='10.10.200.22', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

| 55733be1-b61e-47ac-8323-09ea1c9dbb52 | loadbalancer-b47ddf10-b6c2-4db7-9794-89b8660ae278 | fa:16:3e:4a:d3:35 | ip_address='10.10.200.16', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

| 65f2b610-770f-4f8d-9e76-89189e945b07 | loadbalancer-ef76286f-ef7f-4e64-91fb-7aa187e47c05 | fa:16:3e:80:f9:33 | ip_address='10.10.200.23', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

| 91f3fcf9-37b8-437a-bb45-2b3374f5cae8 | loadbalancer-ef719b3a-9b24-4953-825b-5c24d91789ea | fa:16:3e:d5:6a:b9 | ip_address='10.10.200.25', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

| fda6093f-484e-4b3f-99c8-1e7988b2beaf | loadbalancer-a8ad6712-8f3d-488b-b337-61012c74b88c | fa:16:3e:17:a8:a6 | ip_address='10.10.200.30', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |

| fe04003c-a331-4ba1-ac1a-8b43ba354ea9 | loadbalancer-941f5304-2f15-49d2-b96a-988b476ad870 | fa:16:3e:28:67:bb | ip_address='10.10.200.33', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | DOWN |
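
(One of the DOWN ports above can be inspected in more detail, e.g. to look at its binding information; <port-id> is just a placeholder:)

openstack port show <port-id>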

The lb-vrrp ports on the neutron side look ACTIVE, but I cannot ping them from the active controller's network namespace (there is an ICMP rule in the security group).

root@testCont01:~# openstack port list | grep octavia
| 10cc1b60-2b68-4c54-9cbb-2b135d3de0a0 | octavia-lb-vrrp-1583859f-9774-4b4b-8c4d-b8217b12e5f4 | fa:16:3e:18:9f:ba | ip_address='10.10.200.27', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

| 3dfffe73-2e93-4b08-bb6a-2c8dcec62f86 | octavia-lb-vrrp-96627539-1984-4b6d-87bd-ce8e6529fccb | fa:16:3e:31:8b:8f | ip_address='10.10.200.34', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

| 648daa51-7857-469b-97a3-29c8c04c6b7a | octavia-lb-vrrp-e3f69972-cb40-4fa5-8ec7-d42bceea8df0 | fa:16:3e:8f:c6:b6 | ip_address='10.10.200.32', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

| 9ee56cd6-3c7c-43fe-abe4-c28d8b6727c5 | octavia-lb-vrrp-3e40d7dd-d212-45e3-b36a-4343b8499f30 | fa:16:3e:e4:17:fc | ip_address='10.10.200.28', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

| d223cf42-baa6-4d55-afa9-1c46a0ed6c32 | octavia-lb-vrrp-4081d1d2-eb26-45a0-81cf-4957945993a7 | fa:16:3e:8d:d8:6f | ip_address='10.10.200.31', subnet_id='f2abcdfd-73d9-454d-91ba-f0f111878775' | ACTIVE |

root@testCont02:/var/log/octavia# ip netns exec qrouter-4dd15da5-a1ad-4a07-a6fa-f5a73952fcf2 ping 10.10.200.31

PING 10.10.200.31 (10.10.200.31) 56(84) bytes of data.

From 10.10.200.1 icmp_seq=1 Destination Host Unreachable

From 10.10.200.1 icmp_seq=2 Destination Host Unreachable

From 10.10.200.1 icmp_seq=3 Destination Host Unreachable
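
(A basic reachability check toward one of the amphora management addresses from the server list above is probably the next step; using 9443 as the amphora agent port here is only an assumption based on the octavia defaults:)

ping -c 3 172.16.0.13
nc -zv 172.16.0.13 9443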

Any ideas or clues on how to debug this?

My best,

Huseyin

2018-03-26 12:54:25 -0500 asked a question Octavia LBaaSv2 driver error

Hello everyone,

I couldn't find step-by-step documentation about Octavia. I installed and configured Octavia on an HA Ocata cluster, following the instructions in the devstack notes and the lbaas docs. After installing the horizon plugin, I tried to create a load balancer, without success. On both the dashboard and the CLI, I get the following error when I try to create or delete a load balancer.

Error

Neutron server returns request_ids: ['req-82a63bd7-3a34-4f7d-9dd0-ab57e9546f5e']
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 877, in run_subcommand
    return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 114, in run_command
    return cmd.run(known_args)
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 324, in run
    return super(NeutronCommand, self).run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/command.py", line 90, in run
    return self.take_action(parsed_args) or 0
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 506, in take_action
    self._bulk_delete(obj_deleter, neutron_client, parsed_args.id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 521, in _bulk_delete
    self.delete_item(obj_deleter, neutron_client, item_id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 561, in delete_item
    obj_deleter(_id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1103, in delete_loadbalancer
    (lbaas_loadbalancer))
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 357, in delete
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 338, in retry_request
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 301, in do_request
    self._handle_fault_response(status_code, replybody, resp)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 276, in _handle_fault_response
    exception_handler_v20(status_code, error_body)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 92, in exception_handler_v20
    request_ids=request_ids)
InternalServerError: Driver error: The request you have made requires authentication. (HTTP 401) (Request-ID: req-3c1f97c9-82c8-4fa8-9e7e-720b730f9539)
Neutron server returns request_ids: ['req-82a63bd7-3a34-4f7d-9dd0-ab57e9546f5e']

Here is my octavia.conf:

octavia.conf

[DEFAULT]
transport_url = rabbit://openstack:rabbit_pass@testCont01,openstack:rabbit_pass@testCont02,openstack:rabbit_pass@testCont03
api_handler = queue_producer
bind_host = 10.20.1.91
bind_port = 9876

[amphora_agent]
agent_server_ca = /etc/ssl/private/uyum.in.ca.crt
agent_server_cert = /etc/ssl/private/uyum.in.crt

[anchor]

[certificates]
ca_private_key_passphrase = None
ca_private_key = /etc/octavia/certs/private/cakey.pem
ca_certificate = /etc/octavia/certs/ca_01.pem

[controller_worker]
amp_boot_network_list = 4652a6ce-e08b-4231-97b1-a70ff850452e
amp_image_tag = octavia-amphora-image
amp_secgroup_list = c6e919ee-95d5-4eed-a74e-cd1eb08a725c
amp_flavor_id = ba60c149-096b-4d42-9683-d627832306c7
amp_image_owner_id = 1fa597e23ac64a00a324094f7f65e03aa
amp_ssh_key_name = service
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
workers = 2
amp_active_retries = 100
amp_active_wait_sec = 2
loadbalancer_topology = SINGLE
client_ca = /etc/octavia/certs/ca_01.pem

[database]
connection = mysql+pymysql://octavia:KelebeK@testcont/octavia

[glance]

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
base_path = /var/lib/octavia
base_cert_dir = /var/lib/octavia/certs
connection_max_retries = 1500
connection_retry_interval = 1
rest_request_conn_timeout = 10
rest_request_read_timeout = 120

[health_manager]
bind_port = 5555
bind_ip = 10.20.1.91
controller_ip_port_list = 10.20.1.91:5555,10.20.1.92:5555,10.20.1.93:5555
heartbeat_key = insecure

[house_keeping]
load_balancer_expiry_age = 3600
amphora_expiry_age = 3600
spare_amphora_pool_size = 0

[keepalived_vrrp ... (more)

2017-09-07 12:37:50 -0500 answered a question Instance getting two IP addresses

Yaguang's advice fixed the problem for us. Thanks

2017-09-06 06:51:36 -0500 received badge  Supporter (source)
2013-10-29 04:29:28 -0500 received badge  Famous Question (source)
2013-10-29 04:29:28 -0500 received badge  Notable Question (source)
2013-07-24 01:52:57 -0500 received badge  Popular Question (source)
2013-07-09 02:49:50 -0500 received badge  Editor (source)
2013-07-09 02:46:36 -0500 asked a question Block format 'qcow' used by device '' does not support feature 'live migration'

I am using OpenStack Grizzly on Ubuntu (version 1:2013.1.1-0ubuntu2~cloud0). All compute nodes use the /var/lib/nova/instances directory on shared storage. I confirmed all the steps specified on the migration configuration page.

df -kh (on computes)

Filesystem Size Used Avail Use% Mounted on

/dev/sda1 332G 6.4G 309G 3% /

udev 63G 4.0K 63G 1% /dev

tmpfs 26G 548K 26G 1% /run

none 5.0M 0 5.0M 0% /run/lock

none 63G 0 63G 0% /run/shm

none 100M 0 100M 0% /run/user

cgroup 63G 0 63G 0% /sys/fs/cgroup

10.0.20.1:/export/fatih-openstack-shared 229T 537G 229T 1% /var/lib/nova/instances

10.0.20.1:/export/fatih-openstack-volume 229T 526G 229T 1% /var/lib/nova/mnt/7340a4cfb943b72a10821ea4b9732b60

nova.conf (migration-related section)

live_migration_bandwidth=0

live_migration_flag= VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

live_migration_retry_count=30

live_migration_uri=qemu+tcp://%s/system

ps -ef |grep libvirtd

root 1962 49944 0 10:24 pts/0 00:00:00 grep --color=auto libvirtd

root 55682 1 0 Jun24 ? 01:53:42 /usr/sbin/libvirtd -d -l

/etc/libvirt/libvirtd.conf:

listen_tls = 0

listen_tcp = 1

auth_tcp = "none"
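
(For completeness, the -l flag visible in the libvirtd process above is enabled on Ubuntu via /etc/default/libvirt-bin; assuming the usual setting here:)

libvirtd_opts="-d -l"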

When I use the following command, I get a strange error.

nova live-migration 739bf0cd-9aac-4919-b9c4-1de5d092c8a9 dest_host

Error message in nova-compute.log

2013-07-09 08:55:58.054 26690 ERROR nova.virt.libvirt.driver [-] [instance: 739bf0cd-9aac-4919-b9c4-1de5d092c8a9] Live Migration failure: internal error unable to execute QEMU command 'migrate': Block format 'qcow' used by device '' does not support feature 'live migration'

Error in libvirtd.log

2013-07-09 08:55:56.669+0000: 2206: error : qemuMonitorJSONCheckError:345 : internal error unable to execute QEMU command 'migrate': Block format 'qcow' used by device '' does not support feature 'live migration'

2013-07-09 08:55:58.044+0000: 2206: error : virNetClientProgramDispatchError:174 : An error occurred, but the cause is unknown
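
(Since the error names block format 'qcow', one thing worth double-checking is the actual format of the instance disk; <instance-uuid> below is a placeholder, and converting to qcow2 with the instance stopped is only an assumption about a possible fix:)

qemu-img info /var/lib/nova/instances/<instance-uuid>/disk
qemu-img convert -f qcow -O qcow2 disk disk.qcow2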

The following packages are installed on the compute nodes:

ii nova-common 1:2013.1.1-0ubuntu2~cloud0 all OpenStack Compute - common files

ii nova-compute 1:2013.1.1-0ubuntu2~cloud0 all OpenStack Compute - compute node

ii nova-compute-kvm 1:2013.1.1-0ubuntu2~cloud0 all OpenStack Compute - compute node (KVM)

ii nova-conductor 1:2013.1.1-0ubuntu2~cloud0 all OpenStack Compute - conductor service

ii python-nova 1:2013.1.1-0ubuntu2~cloud0 all OpenStack Compute Python libraries

ii python-novaclient 1:2.13.0-0ubuntu1~cloud0 all client library for OpenStack Compute API

ii libvirt-bin 0.9.13-0ubuntu12.2 amd64 programs for the libvirt library

ii libvirt0 0.9.13-0ubuntu12.2 amd64 library for interfacing with different virtualization systems

ii python-libvirt 0.9.13-0ubuntu12.2 amd64 libvirt Python bindings

ii kvm 1:84+dfsg-0ubuntu16+1.2.0+noroms+0ubuntu2.12.10.4 amd64 dummy transitional package from kvm to qemu-kvm

ii qemu-common 1.2.0+noroms-0ubuntu2.12.10.4 all qemu common functionality (bios, documentation, etc)

ii qemu-kvm 1.2.0+noroms-0ubuntu2.12.10.4 amd64 Full virtualization on supported hardware

ii qemu-utils 1.2.0+noroms-0ubuntu2.12.10.4 amd64 qemu utilities

ii kvm-ipxe 1.0.0+git-3.55f6c88-0ubuntu5 all PXE ROM's for KVM

Any suggestions?