
timss's profile - activity

2018-03-26 04:30:07 -0600 commented answer No handlers could be found for logger "swift-proxy"

I was able to get rid of this error by not specifying a default log route, i.e. log_name etc., in the [DEFAULT] section.
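
Roughly what that change looked like in proxy-server.conf (a sketch from memory; log_name was the one that mattered for me, the other logging options are shown only for illustration):

    [DEFAULT]
    bind_port = 8080
    # Commenting out the default log route made the logger error go away:
    # log_name = swift-proxy
    # log_facility = LOG_LOCAL0
    # log_level = INFO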

2017-08-10 13:17:31 -0600 commented answer DNS Resolution fails

All right, thanks for the heads up anyway! :)

2017-07-13 05:57:33 -0600 commented answer DNS Resolution fails

Would you happen to have any details to share on this issue, bug reports or whatnot? I'm facing similar issues on a Mitaka installation that can't easily be upgraded.

2017-07-11 02:39:21 -0600 received badge  Enthusiast
2017-07-06 10:07:59 -0600 commented answer Search domain in resolv.conf not updating

Same thing happened for me, S. Boran. Tried changing dns_domain in neutron.conf and restarting neutron-dhcp-agent, but that didn't change the --domain=openstacklocal argument on the dnsmasq process. Only after I changed dhcp_domain in dhcp_agent.ini and restarted again did it work. Also running Mitaka.
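
For reference, the change that worked for me (the domain value here is just an example):

    # /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    dhcp_domain = example.local

    # restart the agent and confirm dnsmasq picked it up:
    service neutron-dhcp-agent restart
    ps aux | grep dnsmasq   # should now show --domain=example.local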

2017-07-06 10:05:32 -0600 received badge  Supporter (source)
2016-03-21 07:28:34 -0600 received badge  Famous Question (source)
2016-03-19 13:02:38 -0600 received badge  Notable Question (source)
2016-03-18 23:27:52 -0600 received badge  Popular Question (source)
2016-03-18 12:12:22 -0600 commented answer Instances with large images sometimes fail to start/spawn

Thanks for the steps on how to replace the Swift endpoint though!

2016-03-18 12:10:41 -0600 commented answer Instances with large images sometimes fail to start/spawn

CentOS 6.6 is one of the images registered in Glance, not the distro that Kilo is running on.

Having a complete and separate Swift install is definitely a better solution and is in the works, but I'm still curious why this one is now broken. For now, the 6.6 image is disabled (not public).

2016-03-17 11:38:53 -0600 asked a question Instances with large images sometimes fail to start/spawn

OpenStack Kilo 2015.1.1 (Mirantis)
Glance with Swift as backend (running on controllers)

When creating instances from a CentOS 6.6 image sized at 1.1 GB, it fails consistently on all compute nodes but one. However, when using an Ubuntu 14.04 image sized at ~250 MB, it works every time. This setup used to work, so the breakage may have been triggered by a config change or something else.
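
Since only the larger image fails, one check I have in mind is whether the image object in Swift is intact and can be streamed end to end. A sketch, assuming the default container name glance (configurable via swift_store_container) and the image UUID as the object name:

    swift stat glance <IMAGE_UUID>                   # does the object exist, with the right size?
    swift download glance <IMAGE_UUID> -o /dev/null  # does it stream all the way through?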

Looking at the traceback, Glance seems to be having some issues. Restarting both the glance-api and glance-registry services didn't help. With Horizon or novaclient the instance initially reports as 'building', and the output of nova --debug boot [..] looks OK (I checked the API calls using curl).
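
Another test that might narrow it down, bypassing Nova entirely (the image ID and output path are placeholders):

    glance image-download <IMAGE_ID> --file /tmp/centos66-test.img

If this also fails or stalls for the 1.1 GB image, the problem would be on the Glance/Swift side rather than in nova-compute.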

Not sure what could be wrong, or where to look next.

Logs filtered for readability.

controller1:/var/log/nova/nova-conductor.log:

Error from last host: compute1 (compute1): [...] u'RescheduledException: Build of instance <hash> was rescheduled: HTTPInternalServerError (HTTP 500)\n'
Failed to compute_task_build_instances: No valid host was found. Exceeding max scheduling attempts 3 for instance <hash>. [...]
Setting instance to ERROR state.
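
The limit of 3 attempts comes from nova's scheduler retry setting in nova.conf; raising it would presumably just retry the same underlying HTTP 500:

    [DEFAULT]
    scheduler_max_attempts = 3   # default; the reschedules above exhausted it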

compute1:/var/log/nova-compute.log:

INFO nova.compute.manager [...] Starting instance...
WARNING nova.compute.resource_tracker [...] Host field should not be set on the instance until resources have been claimed.
WARNING nova.compute.resource_tracker [...] Host field should not be set on the instance until resources have been claimed.
INFO nova.compute.claims [...] Attempting claim: memory 4096 MB, disk 10 GB
INFO nova.compute.claims [...] Total memory: ...
INFO nova.compute.claims [...] memory limit: ...
INFO nova.compute.claims [...] Total disk: ...
INFO nova.compute.claims [...] disk limit: ...
INFO nova.compute.claims [...] Claim successful
INFO nova.virt.libvirt.driver [...] Creating image
ERROR nova.compute.manager [...] Instance failed to spawn
TRACE nova.compute.manager [...] Traceback (most recent call last):

Traceback:

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2471, in _build_resources
    yield resources
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2343, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2405, in spawn
    admin_pass=admin_password)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2803, in _create_image
    instance, size, fallback_from_host)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5930, in _try_fetch_image_cache
    size=size)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 231, in cache
    *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 480, in create_image
    prepare_template(target=base, max_size=size, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 445, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 221, in fetch_func_sync
    fetch_func(target=target, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 507, in fetch_image
    max_size=max_size
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 87, in fetch_to_raw
    max_size=max_size
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 77, in fetch
    IMAGE_API.download ...
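
Since the traceback dies in IMAGE_API.download, the next place I plan to look is the Glance logs on the controller around the failure timestamp (paths assume a typical layout and may differ on Mirantis):

    grep -i error /var/log/glance/api.log
    grep -i error /var/log/glance/registry.log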