Launch Instance fails when a large image must first be copied to _base (single all-in-one host)

asked 2016-05-26 06:38:07 -0600

guiltybeta

updated 2016-05-26 16:30:41 -0600

Running OpenStack on CentOS 7, deployed with Packstack all-in-one.

When launching an instance from an image larger than a few GB for the first time (i.e., when the image has to be copied to _base), the instance fails to launch. Nova logs the error as No Valid Host, with the reason being Request to {HOSTIP}:9696/v2.0/ports.json timed out.

However, if I then launch an instance from the same image again, now that it already exists in _base, it launches without issue.

The error seems to happen within 90 seconds of launching the instance. The default for vif_plugging_timeout in nova.conf is 300, which is well above 90. I went ahead and raised it to 900 anyway; still no change. Even setting vif_plugging_is_fatal=false, it still fails with the same error.
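For reference, these are the VIF plugging settings I experimented with; they live in the [DEFAULT] section of /etc/nova/nova.conf (the values shown are the ones I tried, not the defaults of 300 and True):

```ini
[DEFAULT]
# Default is 300 seconds; raising it made no difference here.
vif_plugging_timeout = 900
# Default is true; even false did not avoid the No Valid Host failure.
vif_plugging_is_fatal = false
```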

MORE INFO

Enabled debugging on nova and neutron. I can see the request hit neutron, and neutron does allocate an IP:

    2016-05-26 15:15:06.223 38151 DEBUG neutron.api.v2.base [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Request body: {u'port': {u'binding:host_id': u'openstack1.jpg.com', u'admin_state_up': True, u'network_id': u'f2e4b564-4594-4374-99e4-e38f33516634', u'dns_name': u'test', u'device_owner': u'compute:nova', u'tenant_id': u'070783f785154e739958989748a18942', u'security_groups': [u'a13c8d4b-bd69-4338-9467-fb38078e795e'], u'device_id': u'403d8898-8a62-4b06-8f8d-a00fa7aae3d3'}} prepare_request_body /usr/lib/python2.7/site-packages/neutron/api/v2/base.py:656
2016-05-26 15:15:06.223 38151 INFO neutron.quota [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Loaded quota_driver: <neutron.db.quota.driver.DbQuotaDriver object at 0x6b35990>.
2016-05-26 15:15:06.227 38151 DEBUG neutron.db.quota.driver [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Resources subnetpool have unlimited quota limit. It is not required to calculated headroom  make_reservation /usr/lib/python2.7/site-packages/neutron/db/quota/driver.py:170
2016-05-26 15:15:06.232 38151 DEBUG neutron.quota.resource [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Usage tracker for resource:port and tenant:070783f785154e739958989748a18942 is out of sync, need to count used quota count /usr/lib/python2.7/site-packages/neutron/quota/resource.py:277
2016-05-26 15:15:06.237 38151 DEBUG neutron.quota.resource [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Quota usage for port was recalculated. Used quota:4. count /usr/lib/python2.7/site-packages/neutron/quota/resource.py:296
2016-05-26 15:15:06.239 38151 DEBUG neutron.db.quota.driver [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Attempting to reserve 1 items for resource port. Total usage: 4; quota limit: 20; headroom:16 make_reservation /usr/lib/python2.7/site-packages/neutron/db/quota/driver.py:202
2016-05-26 15:15:06.254 38167 DEBUG oslo_policy._cache_handler [req-c2a643a5-2fc4-4b12-9031-d54eda659c6e 05d28790655c216e3e0909698da5c6f435f1a8dde7f32578253f05843b2f9213 070783f785154e739958989748a18942 - - -] Reloading cached file /etc/neutron/policy.json read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/_cache_handler.py:38
2016-05-26 15:15:06.259 38167 DEBUG oslo_policy.policy [req-c2a643a5-2fc4-4b12-9031-d54eda659c6e 05d28790655c216e3e0909698da5c6f435f1a8dde7f32578253f05843b2f9213 070783f785154e739958989748a18942 - - -] Reloaded policy file: /etc/neutron/policy.json _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:493
2016-05-26 15:15:06.287 38151 DEBUG neutron.notifiers.nova [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Ignoring state change previous_port_status: symbol('NO_VALUE') current_port_status: DOWN port_id 36fa3b83-77fc-43d5-924f-bc804cf4db95 record_port_status_changed /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:194
2016-05-26 15:15:06.287 38151 DEBUG neutron.notifiers.nova [req-87a7fcd5-0eca-4dbc-83f8-901830e8e126 5b351a6a852b498180f3b94d3a535ffa e41278bfb0a149229a416dce14b5e98b - - -] Ignoring state ...

1 answer


answered 2016-05-26 21:53:00 -0600

guiltybeta

I found the solution. Even though I could see neutron create the port and return the POST response, the operation apparently doesn't actually complete until the image has finished copying to _base. I raised the URL timeout in nova from the default 30 to 120, and it's all good now.
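For anyone hitting the same thing: the setting I changed is (as far as I can tell) url_timeout in the [neutron] section of /etc/nova/nova.conf, which controls how long nova waits on calls to the neutron API:

```ini
[neutron]
# Default is 30 seconds; nova's port request to neutron was timing out
# while the large image was still being copied to _base.
url_timeout = 120
```

Restart the nova services (at least openstack-nova-compute) after editing so the new value takes effect.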

