
Can't create instance: Neutron error

asked 2020-03-04 02:06:02 -0500 by m_yasko

updated 2020-03-05 05:12:29 -0500

Hello all!

The essence of the problem:

All operations were performed through the web interface.

I needed to move an instance from network A to network B. I detached its network interface and tried to attach a new interface on network B, but that did not succeed. I then deleted the instance in order to create a new one. Since that operation on the network interfaces, I cannot create instances at all: every new instance ends up in the Error status. All instances created earlier keep working fine.
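(For reference, a rough CLI sketch of what those interface operations correspond to; the instance and port identifiers below are placeholders, not the actual resources involved:)

    # detach the old interface from the instance (placeholder IDs)
    openstack server remove port <instance-id> <old-port-id>
    # create a port on network B and attach it to the instance
    openstack port create --network <network-B> <temporary-port-name>
    openstack server add port <instance-id> <temporary-port-name>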

I get an error:

Error: Failed to perform requested operation on instance "medsim-vinn-dev-cfg-aas-0", the instance has an error status: Please try again later [Error: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 13adf292-bc1f-48ce-9aa9-5b564210b81e. Last exception: Binding failed for port c62f6802-18c7-48e2-b685-597bf28a2b50, please check neutron logs for more information.].

Data from the instance status page:

Message: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 13adf292-bc1f-48ce-9aa9-5b564210b81e. Last exception: Binding failed for port c62f6802-18c7-48e2-b685-597bf28a2b50, please check neutron logs for more information.
Code: 500
Details:
  File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 586, in build_instances
    filter_properties, instances[0].uuid)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 675, in populate_retry
    raise exception.MaxRetriesExceeded(reason=msg)

OpenStack version: Rocky
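For reference, the port named in the error can be inspected from the CLI. A minimal sketch, assuming admin credentials and using the port ID from the error message above:

    # show how far the binding of the failed port got
    openstack port show c62f6802-18c7-48e2-b685-597bf28a2b50 \
      -c binding_host_id -c binding_vif_type -c project_id -c status
    # search the Neutron server log for the binding failure (default log path assumed)
    grep c62f6802-18c7-48e2-b685-597bf28a2b50 /var/log/neutron/neutron-server.log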

/var/log/neutron/neutron-dhcp-agent.log

2020-03-03 17:19:42.303 6024 INFO neutron.agent.dhcp.agent [req-593909ce-43bb-444b-a8f3-a2bccf829d6c a6c36ced902a45b3b4d381a77b6dc862 8029edcafd5e450fbe57c7813e434e7b - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2020-03-03T15:19:41Z, description=, device_id=13adf292-bc1f-48ce-9aa9-5b564210b81e, device_owner=, extra_dhcp_opts=[], fixed_ips=[{u'subnet_id': u'ecc7f45c-12e3-49e3-a0da-b253b1771fd5', u'ip_address': u'10.5.110.45'}], id=c62f6802-18c7-48e2-b685-597bf28a2b50, mac_address=fa:16:3e:54:56:dc, name=, network_id=c89c7ab0-cd49-4a67-af81-68ab5baeb0a2, port_security_enabled=True, project_id=8029edcafd5e450fbe57c7813e434e7b, revision_number=1, security_groups=[u'2c35109c-2731-4a2b-b7a0-907b02bb329d', u'35fbc9b2-71d4-4bb7-9e9d-28ac4d73146d', u'5ccc4866-55d9-4f28-9f62-28e50d2db957', u'6d2d393a-f234-4427-800c-0c27573874b9'], status=DOWN, tags=[], tenant_id=8029edcafd5e450fbe57c7813e434e7b, updated_at=2020-03-03T15:19:41Z
2020-03-03 17:19:44.139 6024 INFO neutron.agent.dhcp.agent [req-710ff178-0a2c-46cf-a065-a7969f24dca2 df21660acb3849fe97519d8d9c5d5b87 1b3ce35e7dd64fa0b957783f50c08191 - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=compute-7.onseocloud.internal, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2020-03-03T15:19:41Z, description=, device_id=13adf292-bc1f-48ce-9aa9-5b564210b81e, device_owner=compute:windows_only, extra_dhcp_opts=[], fixed_ips=[{u'subnet_id': u'ecc7f45c-12e3-49e3-a0da-b253b1771fd5', u'ip_address': u'10.5.110.45'}], id=c62f6802-18c7-48e2-b685-597bf28a2b50, mac_address=fa:16:3e:54:56:dc, name=, network_id=c89c7ab0-cd49-4a67-af81-68ab5baeb0a2, port_security_enabled=True, project_id=8029edcafd5e450fbe57c7813e434e7b, revision_number=2, security_groups=[u'2c35109c-2731-4a2b-b7a0-907b02bb329d', u'35fbc9b2-71d4-4bb7-9e9d-28ac4d73146d', u'5ccc4866-55d9-4f28-9f62-28e50d2db957', u'6d2d393a-f234-4427-800c-0c27573874b9'], status=DOWN, tags=[], tenant_id=8029edcafd5e450fbe57c7813e434e7b, updated_at=2020-03-03T15:19:43Z
2020-03-03 17:19:45.433 6024 INFO neutron.agent.dhcp.agent [req-710ff178-0a2c-46cf-a065-a7969f24dca2 df21660acb3849fe97519d8d9c5d5b87 1b3ce35e7dd64fa0b957783f50c08191 - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=compute-7.onseocloud.internal, binding:profile=, binding:vif_details=, binding:vif_type=binding_failed, binding:vnic_type=normal, created_at=2020-03-03T15:19:41Z, description=, device_id=13adf292-bc1f-48ce-9aa9-5b564210b81e, device_owner=compute:windows_only, extra_dhcp_opts=[], fixed_ips=[{u'subnet_id': u'ecc7f45c-12e3-49e3-a0da-b253b1771fd5', u'ip_address': u'10.5.110.45'}], id=c62f6802-18c7-48e2-b685-597bf28a2b50, mac_address=fa:16:3e:54:56:dc, name=, network_id=c89c7ab0-cd49-4a67-af81-68ab5baeb0a2, port_security_enabled=True, project_id=8029edcafd5e450fbe57c7813e434e7b, revision_number=3, security_groups=[u'2c35109c-2731-4a2b-b7a0-907b02bb329d', u ...

Comments

If your Neutron uses the ML2 plugin with either the Linuxbridge or the Openvswitch mechanism driver (which is likely), it is the agent on the compute node that binds the port, i.e., sets up the plumbing that implements the virtual network connection. Check the agent's log file on the compute node.

Bernd Bausch ( 2020-03-04 04:08:52 -0500 )
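A minimal sketch of that check, assuming the default log locations of the ML2 L2 agents (run on the compute node and use whichever file exists in your deployment):

    # on compute-7.onseocloud.internal
    grep c62f6802-18c7-48e2-b685-597bf28a2b50 /var/log/neutron/neutron-openvswitch-agent.log
    grep c62f6802-18c7-48e2-b685-597bf28a2b50 /var/log/neutron/neutron-linuxbridge-agent.log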

By the way, please format the log file content so that it becomes a bit more readable. Use the "010101" button.

Bernd Bausch ( 2020-03-04 04:11:08 -0500 )

Added /var/log/nova/nova-compute.log to the question.

m_yasko ( 2020-03-05 05:13:17 -0500 )

Usually this error means that you created a port within one project and are trying to use it in a different project. Change or re-create the port's project assignment to the project you need and let us know if it works.

eblock ( 2020-03-05 05:56:46 -0500 )
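A quick sketch of that check, using the port ID from the error message; compare the reported project_id with the project used to launch the instance:

    openstack port show c62f6802-18c7-48e2-b685-597bf28a2b50 -c project_id -c device_owner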

From the neutron-server log it seems to me that you probably don't have "lan_eph" in your bridge_mappings in the L2 agent's config file on compute-7.onseocloud.internal. Can you check whether this is configured properly in the bridge mappings for that agent?

slaweq ( 2020-03-12 02:56:35 -0500 )
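For illustration, the kind of setting being asked about. A sketch only: the bridge and interface names below are placeholders, only "lan_eph" comes from the preceding comment.

    # Open vSwitch agent: /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [ovs]
    bridge_mappings = lan_eph:br-eph

    # Linuxbridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [linux_bridge]
    physical_interface_mappings = lan_eph:eth1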

1 answer


answered 2020-03-05 05:36:49 -0500

Hi,

A port binding failure has nothing to do with the DHCP agent, for sure. The neutron-server log should contain much more information about the binding process. Among the things you should check:

1. Is the L2 agent up and running on the compute-7.onseocloud.internal node?
2. Is the physical network "lan_eph" configured in the bridge mappings of the L2 agent on this compute node?
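A quick sketch of check 1, assuming admin credentials on the controller; the L2 agent for that host should be listed as alive and up:

    openstack network agent list --host compute-7.onseocloud.internal
    # the neutron-server log should also show the reason for the failure, e.g.:
    grep "Failed to bind port" /var/log/neutron/neutron-server.log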


Comments

Thanks, I will check it.

m_yasko ( 2020-03-12 03:59:20 -0500 )
