We have run nova service-list several times, and every time the services are reported as up. The nova-compute service is running on the controller node as well as on the compute node (we are not sure whether it is supposed to run on only one of them), and the nova-compute.log files on the two nodes differ, as shown below. Should the nova-compute service be running on the controller node at all?
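
For context, this is the status check we keep repeating (assuming the Juno-era python-novaclient with admin credentials sourced; output omitted):

# All nova services and their state/status as seen by the controller
nova service-list
# Narrow to the compute agents only (the --binary filter is an assumption about this client version)
nova service-list --binary nova-compute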

---------------------nova-compute.log on compute node------------------------------------------------------------

2016-11-02 10:29:30.676 17047 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2016-11-02 10:29:31.000 17047 WARNING nova.compute.resource_tracker [-] No service record for host compute


2016-11-02 10:31:58.311 13672 AUDIT nova.service [-] Starting compute node (version 2014.2.4)
2016-11-02 10:31:58.312 13672 DEBUG nova.virt.libvirt.driver [-] Connecting to libvirt: qemu:///system _get_new_connection /us
r/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:766
2016-11-02 10:31:58.421 13672 DEBUG nova.virt.libvirt.driver [-] Registering for lifecycle events <nova.virt.libvirt.driver.Li
bvirtDriver object at 0x7f3aeca36d10> _get_new_connection /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:783
2016-11-02 10:31:58.434 13672 DEBUG nova.virt.libvirt.driver [-] Registering for connection events: <nova.virt.libvirt.driver.
LibvirtDriver object at 0x7f3aeca36d10> _get_new_connection /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:794
2016-11-02 10:31:58.446 13672 DEBUG nova.virt.libvirt.config [-] Generated XML ('<cpu>\n  <arch>x86_64</arch>\n  <model>core2d
uo</model>\n  <vendor>Intel</vendor>\n  <topology sockets="1" cores="1" threads="1"/>\n  <feature name="lahf_lm"/>\n  <feature
 name="rdtscp"/>\n</cpu>\n',)  to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py:82
2016-11-02 10:31:58.477 13672 DEBUG nova.virt.libvirt.driver [-] Starting native event thread _init_events /usr/lib/python2.7/
dist-packages/nova/virt/libvirt/driver.py:705
2016-11-02 10:31:58.484 13672 DEBUG nova.virt.libvirt.driver [-] Starting green dispatch thread _init_events /usr/lib/python2.
7/dist-packages/nova/virt/libvirt/driver.py:710
2016-11-02 10:31:58.644 13672 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /usr/lib/python2.7/dist-pac
kages/nova/virt/libvirt/driver.py:6496
2016-11-02 10:31:58.796 13672 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2016-11-02 10:31:58.806 13672 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /usr/lib/python2.7/dist-pac
kages/nova/virt/libvirt/driver.py:6496
2016-11-02 10:31:58.959 13672 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 1282 _report_hypervisor_resou
rce_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:488
2016-11-02 10:31:58.959 13672 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 42 _report_hypervisor_resour
ce_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:489
2016-11-02 10:31:58.960 13672 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 1 _report_hypervisor_resource_vi
ew /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:494
2016-11-02 10:31:58.960 13672 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [] _report_hyperviso
r_resource_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:501
2016-11-02 10:31:58.978 13672 DEBUG nova.openstack.common.lockutils [-] Created new semaphore "compute_resources" internal_loc
k /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
2016-11-02 10:31:58.980 13672 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "compute_resources" lock /usr/lib/p
ython2.7/dist-packages/nova/openstack/common/lockutils.py:229
2016-11-02 10:31:58.984 13672 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "_update_available_resource" inne
r /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:271
2016-11-02 10:31:59.077 13672 WARNING nova.compute.resource_tracker [-] No service record for host compute
2016-11-02 10:31:59.078 13672 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "compute_resources" lock /usr/lib/
python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2016-11-02 10:31:59.084 13672 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "_update_available_resource"
 inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275

**************************************************************************************************************************
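
The "No service record for host compute" warning suggests that nova-compute on the compute node cannot find a matching row in the services table. A minimal way to verify this, assuming the default MySQL backend with a database named nova and the Juno (2014.2) schema:

# On the controller: list the registered service records
mysql -u root -p -e "SELECT host, binary, disabled, deleted FROM services;" nova
# On the compute node: the reported hostname should match a 'host' value above
hostname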


------------------------------nova-compute.log on controller node----------------------------------------------------------

2016-11-02 10:29:57.345 20631 DEBUG nova.openstack.common.periodic_task [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Running p
eriodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/dist-packages/nova/openstack/common/period
ic_task.py:219
2016-11-02 10:29:57.348 20631 INFO nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Auditing locally
 available compute resources for node controller
2016-11-02 10:29:57.381 20631 DEBUG nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Hypervisor: fre
e VCPUs: 4 _report_hypervisor_resource_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:519
2016-11-02 10:29:57.381 20631 DEBUG nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Hypervisor: ass
ignable PCI devices: [{"dev_id": "pci_0000_00_00_0", "product_id": "0c00", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "80
86", "label": "label_8086_0c00", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "041e", "dev_type": "type-PC
I", "numa_node": null, "vendor_id": "8086", "label": "label_8086_041e", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_03_0", "pr
oduct_id": "0c0c", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_0c0c", "address": "0000:00:03.
0"}, {"dev_id": "pci_0000_00_14_0", "product_id": "8c31", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "la
bel_8086_8c31", "address": "0000:00:14.0"}, {"dev_id": "pci_0000_00_16_0", "product_id": "8c3a", "dev_type": "type-PCI", "numa_node": 
null, "vendor_id": "8086", "label": "label_8086_8c3a", "address": "0000:00:16.0"}, {"dev_id": "pci_0000_00_1a_0", "product_id": "8c2d"
, "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c2d", "address": "0000:00:1a.0"}, {"dev_id": "
pci_0000_00_1b_0", "product_id": "8c20", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c20", "
address": "0000:00:1b.0"}, {"dev_id": "pci_0000_00_1c_0", "product_id": "8c10", "dev_type": "type-PCI", "numa_node": null, "vendor_id"
: "8086", "label": "label_8086_8c10", "address": "0000:00:1c.0"}, {"dev_id": "pci_0000_01_00_0", "product_id": "8168", "dev_type": "ty
pe-PCI", "numa_node": null, "vendor_id": "10ec", "label": "label_10ec_8168", "address": "0000:01:00.0"}, {"dev_id": "pci_0000_00_1c_1"
, "product_id": "8c12", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c12", "address": "0000:0
0:1c.1"}, {"dev_id": "pci_0000_02_00_0", "product_id": "8168", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "10ec", "label"
: "label_10ec_8168", "address": "0000:02:00.0"}, {"dev_id": "pci_0000_00_1c_2", "product_id": "8c14", "dev_type": "type-PCI", "numa_no
de": null, "vendor_id": "8086", "label": "label_8086_8c14", "address": "0000:00:1c.2"}, {"dev_id": "pci_0000_03_00_0", "product_id": "
8168", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "10ec", "label": "label_10ec_8168", "address": "0000:03:00.0"}, {"dev_i
d": "pci_0000_00_1c_3", "product_id": "8c16", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c1
6", "address": "0000:00:1c.3"}, {"dev_id": "pci_0000_04_00_0", "product_id": "244e", "dev_type": "type-PCI", "numa_node": null, "vendo
r_id": "8086", "label": "label_8086_244e", "address": "0000:04:00.0"}, {"dev_id": "pci_0000_00_1d_0", "product_id": "8c26", "dev_type"
: "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c26", "address": "0000:00:1d.0"}, {"dev_id": "pci_0000_00_
1f_0", "product_id": "8c5c", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c5c", "address": "0
000:00:1f.0"}, {"dev_id": "pci_0000_00_1f_2", "product_id": "8c02", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "l
abel": "label_8086_8c02", "address": "0000:00:1f.2"}, {"dev_id": "pci_0000_00_1f_3", "product_id": "8c22", "dev_type": "type-PCI", "nu
ma_node": null, "vendor_id": "8086", "label": "label_8086_8c22", "address": "0000:00:1f.3"}] _report_hypervisor_resource_view /usr/lib
/python2.7/dist-packages/nova/compute/resource_tracker.py:527
2016-11-02 10:29:57.382 20631 DEBUG nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Hypervisor/Node
 resource view: name=controller free_ram=898MB free_disk=376GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "product_id": 
"0c00", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_0c00", "address": "0000:00:00.0"}, {"dev_
id": "pci_0000_00_02_0", "product_id": "041e", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_04
1e", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "0c0c", "dev_type": "type-PCI", "numa_node": null, "vend
or_id": "8086", "label": "label_8086_0c0c", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_14_0", "product_id": "8c31", "dev_type
": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c31", "address": "0000:00:14.0"}, {"dev_id": "pci_0000_00
_16_0", "product_id": "8c3a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c3a", "address": "
0000:00:16.0"}, {"dev_id": "pci_0000_00_1a_0", "product_id": "8c2d", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "
label": "label_8086_8c2d", "address": "0000:00:1a.0"}, {"dev_id": "pci_0000_00_1b_0", "product_id": "8c20", "dev_type": "type-PCI", "n
uma_node": null, "vendor_id": "8086", "label": "label_8086_8c20", "address": "0000:00:1b.0"}, {"dev_id": "pci_0000_00_1c_0", "product_
id": "8c10", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c10", "address": "0000:00:1c.0"}, {
"dev_id": "pci_0000_01_00_0", "product_id": "8168", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "10ec", "label": "label_10
ec_8168", "address": "0000:01:00.0"}, {"dev_id": "pci_0000_00_1c_1", "product_id": "8c12", "dev_type": "type-PCI", "numa_node": null, 
"vendor_id": "8086", "label": "label_8086_8c12", "address": "0000:00:1c.1"}, {"dev_id": "pci_0000_02_00_0", "product_id": "8168", "dev
_type": "type-PCI", "numa_node": null, "vendor_id": "10ec", "label": "label_10ec_8168", "address": "0000:02:00.0"}, {"dev_id": "pci_00
00_00_1c_2", "product_id": "8c14", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c14", "addres
s": "0000:00:1c.2"}, {"dev_id": "pci_0000_03_00_0", "product_id": "8168", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "10e
c", "label": "label_10ec_8168", "address": "0000:03:00.0"}, {"dev_id": "pci_0000_00_1c_3", "product_id": "8c16", "dev_type": "type-PCI
", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c16", "address": "0000:00:1c.3"}, {"dev_id": "pci_0000_04_00_0", "pro
duct_id": "244e", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_244e", "address": "0000:04:00.0
"}, {"dev_id": "pci_0000_00_1d_0", "product_id": "8c26", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "lab
el_8086_8c26", "address": "0000:00:1d.0"}, {"dev_id": "pci_0000_00_1f_0", "product_id": "8c5c", "dev_type": "type-PCI", "numa_node": n
ull, "vendor_id": "8086", "label": "label_8086_8c5c", "address": "0000:00:1f.0"}, {"dev_id": "pci_0000_00_1f_2", "product_id": "8c02",
 "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c02", "address": "0000:00:1f.2"}, {"dev_id": "p
ci_0000_00_1f_3", "product_id": "8c22", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_8c22", "a
ddress": "0000:00:1f.3"}] _report_hypervisor_resource_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:541
2016-11-02 10:29:57.383 20631 DEBUG oslo_concurrency.lockutils [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Lock "compute_reso
urces" acquired by "_update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:
444
2016-11-02 10:29:57.438 20631 INFO nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Total usable vcp
us: 4, total allocated vcpus: 0
2016-11-02 10:29:57.439 20631 INFO nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Final resource v
iew: name=controller phys_ram=7872MB used_ram=512MB phys_disk=454GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=<nova.pci.stats
.PciDeviceStats object at 0x7f7144493910>
2016-11-02 10:29:57.511 20631 INFO nova.scheduler.client.report [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Compute_service r
ecord updated for ('controller', 'controller')
2016-11-02 10:29:57.512 20631 INFO nova.compute.resource_tracker [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Compute_service 
record updated for controller:controller
2016-11-02 10:29:57.512 20631 DEBUG oslo_concurrency.lockutils [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Lock "compute_reso
urces" released by "_update_available_resource" :: held 0.130s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:45
6
2016-11-02 10:29:57.537 20631 DEBUG nova.openstack.common.loopingcall [req-4eb1cd58-0ea4-4c00-838c-c2d76d72963e - - - - -] Dynamic loo
ping call <bound method Service.periodic_tasks of <nova.service.Service object at 0x7f714e67e950>> sleeping for 0.81 seconds _inner /u
sr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
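
Note the contrast: the controller's resource tracker logs "Compute_service record updated for controller:controller", while the compute node only ever logs the missing-service warning. If this is a naming mismatch, one place to look (an assumption on our part; the host option may simply be unset and default to the system hostname):

# On the compute node: check whether nova.conf overrides the host name nova-compute registers with
grep -i '^host' /etc/nova/nova.conf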

-------------------------------------nova-api.log----------------------------------------------

2016-11-02 10:44:26.151 20403 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://controller:35357/v3/auth/tokens -H "X-Subject-Token: {SHA1}bb556b02080d675af1c81adf201d171ada7e8702" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}c9b1731ffcf271cd044e1d8359277fea5240a175" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2016-11-02 10:44:26.203 20403 DEBUG keystoneclient.session [-] RESP: [200] content-length: 2810 x-subject-token: {SHA1}bb556b02080d675af1c81adf201d171ada7e8702 vary: X-Auth-Token x-distribution: Ubuntu connection: keep-alive date: Wed, 02 Nov 2016 01:44:26 GMT content-type: application/json x-openstack-request-id: req-1dfad370-201a-42df-ba59-a0431eebb387 
RESP BODY: {"token": {"methods": ["password", "token"], "roles": [{"id": "1e3c093fe18f4127869d27a10e4cf3fc", "name": "admin"}], "expires_at": "2016-11-02T02:44:26.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "03a58bc4d34c4c0e9c52b679cd659d6c", "name": "admin"}, "catalog": "<removed>", "extras": {}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "9224f04451e74a8daafb9fba6a7ac047", "name": "admin"}, "audit_ids": ["pjA_aGIqQayvZQa4xS7owg"], "issued_at": "2016-11-02T01:44:26.037479"}}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
2016-11-02 10:44:26.207 20403 DEBUG nova.api.openstack.wsgi [req-62f2db41-cca2-4d5b-a067-c6fcb52386c5 9224f04451e74a8daafb9fba6a7ac047 03a58bc4d34c4c0e9c52b679cd659d6c - - -] Calling method '<bound method HypervisorsController.index of <nova.api.openstack.compute.contrib.hypervisors.HypervisorsController object at 0x7f010803bd50>>' _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:783
2016-11-02 10:44:26.223 20403 INFO nova.osapi_compute.wsgi.server [req-62f2db41-cca2-4d5b-a067-c6fcb52386c5 9224f04451e74a8daafb9fba6a7ac047 03a58bc4d34c4c0e9c52b679cd659d6c - - -] 117.17.102.109 "GET /v2/03a58bc4d34c4c0e9c52b679cd659d6c/os-hypervisors HTTP/1.1" status: 200 len: 298 time: 0.0730362
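
The os-hypervisors response above is only 298 bytes, which makes us suspect that just the controller is registered as a hypervisor. A quick CLI cross-check (same admin environment as before; hypervisor-show may need the numeric ID rather than the host name):

# List registered hypervisors; we would expect both controller and compute here
nova hypervisor-list
nova hypervisor-show compute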

---------------------------------nova-conductor.log-----------------------------------------------------

2016-11-02 11:09:39.453 20492 DEBUG nova.openstack.common.loopingcall [req-f467a1ec-4c88-4957-a01a-2e49a2f54561 - - - - -] Dynamic looping call <bound method Service.periodic_tasks of <nova.service.Service object at 0x7fba45b40590>> sleeping for 60.00 seconds _inner /usr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
2016-11-02 11:09:47.475 20493 INFO oslo_messaging._drivers.impl_rabbit [req-3732306c-b25c-4537-9d0f-a4e2b52cdb28 9224f04451e74a8daafb9fba6a7ac047 03a58bc4d34c4c0e9c52b679cd659d6c - - -] Connecting to AMQP server on controller:5672
2016-11-02 11:09:47.487 20493 INFO oslo_messaging._drivers.impl_rabbit [req-3732306c-b25c-4537-9d0f-a4e2b52cdb28 9224f04451e74a8daafb9fba6a7ac047 03a58bc4d34c4c0e9c52b679cd659d6c - - -] Connected to AMQP server on controller:5672
2016-11-02 11:10:09.406 20494 DEBUG nova.openstack.common.loopingcall [req-72a02c34-ff0c-4928-ab54-0fd00a3f1771 - - - - -] Dynamic looping call <bound method Service.periodic_tasks of <nova.service.Service object at 0x7fba45b40590>> sleeping for 60.00 seconds _inner /usr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
2016-11-02 11:10:17.227 20493 DEBUG nova.openstack.common.loopingcall [req-1e7f32d8-4550-4b0a-a204-ecc401a3dff7 - - - - -] Dynamic looping call <bound method Service.periodic_tasks of <nova.service.Service object at 0x7fba45b40590>> sleeping for 60.00 seconds _inner /usr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
2016-11-02 11:10:26.475 20495 DEBUG nova.openstack.common.loopingcall [req-339f1f42-22a9-42d5-a2a8-8b87f8f2a0c8 - - - - -] Dynamic looping call <bound method Service.periodic_tasks of <nova.service.Service object at 0x7fba45b40590>> sleeping for 60.00 seconds _inner /usr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
2016-11-02 11:10:28.779 20493 ERROR nova.scheduler.utils [req-3732306c-b25c-4537-9d0f-a4e2b52cdb28 9224f04451e74a8daafb9fba6a7ac047 03a58bc4d34c4c0e9c52b679cd659d6c - - -] [instance: 4ee0efd4-84eb-452f-9768-ba6cc17642e8] Error from last host: controller (node controller): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2248, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2393, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 4ee0efd4-84eb-452f-9768-ba6cc17642e8 was re-scheduled: operation failed: domain is no longer running\n']
2016-11-02 11:10:28.847 20493 WARNING nova.scheduler.utils [req-3732306c-b25c-4537-9d0f-a4e2b52cdb28 9224f04451e74a8daafb9fba6a7ac047 03a58bc4d34c4c0e9c52b679cd659d6c - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 86, in select_destinations
    filter_properties)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.
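
The reschedule reason "operation failed: domain is no longer running" suggests the guest started on the controller and died almost immediately; the retry then fails with NoValidHost because no other host has a usable service record. A check we can run on the node that attempted the boot (the log path assumes the Ubuntu libvirt packaging used above):

# Does libvirt still know about the failed instance?
virsh list --all
# The per-instance qemu log usually records why the domain exited
tail -n 50 /var/log/libvirt/qemu/instance-*.log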