Ask Your Question

Issue while creating an instance in nova (Error out with port binding failure )

asked 2019-12-01 00:38:37 -0600

bipul

updated 2019-12-01 12:01:13 -0600

Dear users,

I am having a problem while creating an instance. It is stuck at the scheduling phase.

I have noticed that when I restart the nova-compute service, it restarts successfully BUT it logs an error in nova-compute.log:

2019-11-30 06:29:30.324 145061 ERROR nova.compute.resource_tracker [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843: ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843:

2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] Failed to retrieve resource provider tree from placement API for UUID 27a39914-a509-4261-90f5-8135ad471843. Got 500:

The UUID is correct:

MariaDB [(none)]> select uuid from nova.compute_nodes where host=' openstack.bipul.com';
+--------------------------------------+
| uuid                                 |
+--------------------------------------+
| 27a39914-a509-4261-90f5-8135ad471843 |
+--------------------------------------+
1 row in set (0.000 sec)

MariaDB [(none)]>

1) nova.conf is unchanged; it is the same file that comes with the distribution

2) Overall OpenStack health seems OK; all services are in the running state

3) Problem: the placement URL on port 8778 ( http://&lt;ip address&gt;:8778/placement ) shows Internal Server Error (500) when accessed via web browser or curl

4) nova-status upgrade check also shows an error: InternalServerError: Internal Server Error (HTTP 500)

5) I followed the standard installation method described at https://www.rdoproject.org/install/packstack/

6) I have pasted the log output from a nova-compute service restart and the nova service status below
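To narrow down point 3, the placement endpoint can be queried directly with a token. This is a sketch, not the exact commands from the question: it assumes admin credentials are sourced (e.g. via the keystonerc_admin file Packstack generates), and the IP is a placeholder.

```shell
# Get a Keystone token (assumes admin credentials are already sourced)
TOKEN=$(openstack token issue -f value -c id)

# Query the placement root; a healthy service returns 200 with version info,
# while this broken one returns 500
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "X-Auth-Token: $TOKEN" \
  "http://<ip address>:8778/placement"

# On a 500, the traceback usually lands in the placement WSGI log
tail -n 50 /var/log/nova/nova-placement-api.log
```

Checking the WSGI log right after the curl usually points at the actual cause of the 500 (as it turned out to do later in this thread).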

I appreciate any help.

Thanks,
Bipul

<< Nova log >>

2019-11-30 06:29:27.391 145061 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge, noop
2019-11-30 06:29:28.560 145061 INFO nova.virt.driver [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
2019-11-30 06:29:29.228 145061 WARNING os_brick.initiator.connectors.remotefs [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Connection details not present. RemoteFsClient may not initialize properly.
2019-11-30 06:29:29.244 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "dhcpbridge" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.244 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "dhcpbridge_flagfile" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.246 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "force_dhcp_release" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.250 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "force_snat_range" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.272 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "url" from group "neutron" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the ...

2 answers


answered 2019-12-02 00:37:19 -0600

Please check the network details with neutron net-show <id>.

Check that network_type, physical_network, and segmentation_id are all correctly mapped, and verify that the same values are used in your Neutron configuration. Are you using OVS only for port creation?
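For example, the provider attributes from net-show can be compared against the ML2 plugin configuration. This is a sketch; the file paths are the RDO/Packstack defaults and may differ on your install.

```shell
# Show only the network's provider attributes
neutron net-show <id> -f value \
  -c provider:network_type -c provider:physical_network -c provider:segmentation_id

# Compare against the ML2 config (RDO/Packstack default path)
grep -E 'type_drivers|tenant_network_types|mechanism_drivers' \
  /etc/neutron/plugins/ml2/ml2_conf.ini

# If OVS is the mechanism driver, check the bridge mappings too
grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini
```

A mismatch between the network's type/segmentation and the drivers or mappings configured in ML2 is a common cause of port binding failures.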


Comments

Thanks for the response. I have updated the question with the output of net-show. I am using a very basic network for instance creation: I just created a network and a subnet, and changed nothing in neutron.conf.

bipul ( 2019-12-02 04:36:35 -0600 )

How many physical servers do you have? Are you using the admin tenant to create the new instance? Verify that "geneve" is mapped to the correct interface on the server.

piyushsrivastava ( 2019-12-02 09:35:11 -0600 )

answered 2019-12-01 00:57:38 -0600

bipul

updated 2019-12-02 04:46:11 -0600

UPDATE:

While investigating the logs further, I found a permission issue on a file. Running

chmod 777 /var/log/nova/nova-placement-api.log

and restarting the httpd service resolved the URL issue, and nova-status upgrade check now reports everything successful.
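As a side note, chmod 777 works but is wider than necessary for a log file. A tighter sketch, assuming the log should be owned by the nova service user as on a typical RDO install:

```shell
# Give the file to the service user instead of opening it to everyone
chown nova:nova /var/log/nova/nova-placement-api.log
chmod 640 /var/log/nova/nova-placement-api.log
systemctl restart httpd
```

The exact user depends on which process (the placement WSGI app under httpd) writes the file, so check the ownership of the other files in /var/log/nova/ first.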

However, instance creation is still stuck at scheduling, with the log below:

2019-12-01 01:16:13.410 31934 INFO nova.api.openstack.placement.requestlog [req-c9a2982a-5397-4bfb-bab4-39ae1c6493c5 a826d1b4b465467eaee8b13d9832ab5c 60533b189d5647d2abc4cbdd3ee79c69 - default default] 10.5.87.31 "GET /placement/resource_providers/27a39914-a509-4261-90f5-8135ad471843/allocations" status: 200 len: 54 microversion: 1.0

2019-12-01 01:17:13.891 31933 INFO nova.api.openstack.placement.requestlog [req-fe62fc5d-adb3-400c-8277-1bb54aa083ef a826d1b4b465467eaee8b13d9832ab5c 60533b189d5647d2abc4cbdd3ee79c69 - default default] 10.5.87.31 "GET /placement/resource_providers/27a39914-a509-4261-90f5-8135ad471843/allocations" status: 200 len: 54 microversion: 1.0

Requests are just being logged with different request IDs and status 200 (so the responses look successful), but the instance stays stuck in the scheduling phase of creation and eventually errors out with a timeout message.

Kindly help

New update:

After a restart of the server, instance creation now errors out with the log below. It shows a port binding problem.

2019-12-01 12:21:54.516 32378 ERROR nova.scheduler.utils [req-397c180f-da6e-4c91-8a8d-0b8b47741253 f50fc4d35a8b4c4ebf36852294ef9783 6849af726d06454f80391d4aab230ae1 - default default] [instance: 2c4fb98d-2dcf-42c5-a26b-2afa70edfcdc] Error from last host: openstack.bipul.com (node openstack.bipul.com): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1984, in _do_build_and_run_instance\n    filter_properties, request_spec)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2354, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 2c4fb98d-2dcf-42c5-a26b-2afa70edfcdc was re-scheduled: Binding failed for port 2aa29056-fe69-405c-8f5e-d7bd0276b4d7, please check neutron logs for more information.\n']
2019-12-01 12:21:54.518 32378 WARNING nova.scheduler.utils [req-397c180f-da6e-4c91-8a8d-0b8b47741253 f50fc4d35a8b4c4ebf36852294ef9783 6849af726d06454f80391d4aab230ae1 - default default] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance 2c4fb98d-2dcf-42c5-a26b-2afa70edfcdc.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance 2c4fb98d-2dcf-42c5-a26b-2afa70edfcdc.
2019-12-01 12:21:54.519 32378 WARNING nova.scheduler.utils [req-397c180f-da6e-4c91-8a8d-0b8b47741253 f50fc4d35a8b4c4ebf36852294ef9783 6849af726d06454f80391d4aab230ae1 - default default] [instance: 2c4fb98d-2dcf-42c5-a26b-2afa70edfcdc] Setting instance to ERROR state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance 2c4fb98d-2dcf-42c5-a26b-2afa70edfcdc

The Neutron log does not have much info, just HTTP status 200 messages.
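When neutron-server only logs 200s, the binding details can often be seen from the port and agent state instead. A sketch, using the port UUID from the Nova error above:

```shell
# Inspect the failed port; binding_vif_type of "binding_failed" confirms the issue
openstack port show 2aa29056-fe69-405c-8f5e-d7bd0276b4d7 \
  -c binding_vif_type -c binding_host_id -c status

# Check that an L2 agent is alive on the compute host named in the Nova error
openstack network agent list --host openstack.bipul.com

# Search the neutron-server log for the binding attempt itself
grep -i 'failed to bind port' /var/log/neutron/server.log
```

If no L2 agent on the compute host supports the network's type (geneve here), ML2 has nothing to bind the port to and fails exactly this way.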

# neutron net-show 0ea1e5c0-09ac-4ec0-9205-4450067a2c2f

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2019-11-29T04:27:44Z                 |
| description               |                                      |
| id                        | 0ea1e5c0-09ac-4ec0-9205-4450067a2c2f |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1442                                 |
| name                      | public                               |
| port_security_enabled     | True                                 |
| project_id                | 6849af726d06454f80391d4aab230ae1     |
| provider:network_type     | geneve                               |
| provider:physical_network |                                      |
| provider:segmentation_id  | 57                                   |
| qos_policy_id             |                                      |
| revision_number           | 3                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | ce1a13eb-a424-4bf1-8c4f-9713d9c2cd46 |
| tags                      |                                      |
| tenant_id                 | 6849af726d06454f80391d4aab230ae1     |
| updated_at                | 2019-11-29T04:27:47Z                 |
+---------------------------+--------------------------------------+
edit flag offensive delete link more

Comments

Port binding errors usually occur when you created a port as admin user, maybe even in the admin project, and you try to attach it to an instance in a different project. Try to re-create the port with openstack port create ... --project <PROJECT_ID>.

eblock ( 2019-12-02 05:42:21 -0600 )

If it's not too late, try checking openstack port list to see whether your port is active. Also check whether your NIC interface is connected to the Neutron linuxbridge interface.

shrapneel ( 2019-12-11 03:19:36 -0600 )
