Error: Failed to launch instance [Error: No valid host was found. ].
The compute node is a VM with 8 GB of RAM, 3 cores, and 250 GB of disk. The image I want to use for the instance is CirrOS with the tiny flavor, so the issue isn't a lack of available resources.
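A quick way to rule out resources for sure, assuming the stock m1.tiny flavor name and the hypervisor hostname compute that shows up in the logs below:

nova flavor-show m1.tiny      # the flavor's RAM/disk/vcpu requirements
nova hypervisor-list          # confirm the compute node is registered
nova hypervisor-show compute  # free RAM/disk/vcpus as the scheduler sees them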
From /var/log/nova/nova-scheduler.log:
2016-12-06 05:10:47.149 9864 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
2016-12-06 05:10:48.103 12916 AUDIT nova.service [-] Starting scheduler node (version 2014.2.4)
2016-12-06 05:10:49.013 12916 INFO oslo.messaging._drivers.impl_rabbit [req-37ad79e9-a338-4365-a026-376ed7a5bb8c ] C...
2016-12-06 05:10:49.034 12916 INFO oslo.messaging._drivers.impl_rabbit [req-37ad79e9-a338-4365-a026-376ed7a5bb8c ] C...
2016-12-06 05:12:46.225 12916 INFO nova.openstack.common.periodic_task [req-8284d20a-3f6e-46f3-95a5-9a48814a8423 Non...
2016-12-06 05:12:46.301 12916 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5...
2016-12-06 05:12:46.331 12916 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:56...
2016-12-06 05:12:49.505 12916 INFO nova.filters [req-8284d20a-3f6e-46f3-95a5-9a48814a8423 None] Filter RetryFilter r... ...lter returned 0 hosts
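That last line is the important one: one of the scheduler filters returned 0 hosts, so every candidate was eliminated. The truncated output hides which filter it was; the full filter results can be pulled out of the log with something like:

grep 'returned 0 hosts' /var/log/nova/nova-scheduler.log | tail -5   # which filter rejected the host
grep 'Filter' /var/log/nova/nova-scheduler.log | tail -20            # the whole filter chain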
The nova-compute log on the compute node is shown below:
2016-12-09 23:33:19.829 13806 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 7983, total allocated virtual ram (MB): 1024
2016-12-09 23:33:19.829 13806 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 241
2016-12-09 23:33:19.829 13806 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 3, total allocated vcpus: 0
2016-12-09 23:33:19.829 13806 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2016-12-09 23:33:19.830 13806 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute:compute
2016-12-09 23:33:37.722 13806 INFO nova.virt.libvirt.imagecache [-] image d72b6577-224f-427a-82ab-d8231c4b2711 at (/var/lib/nova/instances/_base/a893a8504f71e403a02df5401bd0fb3e08bd0933): checking
2016-12-09 23:33:37.722 13806 INFO nova.virt.libvirt.imagecache [-] image d72b6577-224f-427a-82ab-d8231c4b2711 at (/var/lib/nova/instances/_base/a893a8504f71e403a02df5401bd0fb3e08bd0933): in use: on this node 1 local, 0 on other nodes sharing this instance storage
2016-12-09 23:33:37.774 13806 INFO nova.virt.libvirt.imagecache [-] Active base files: /var/lib/nova/instances/_base/a893a8504f71e403a02df5401bd0fb3e08bd0933
2016-12-09 23:34:20.698 13806 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2016-12-09 23:34:20.818 13806 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 7983, total allocated virtual ram (MB): 1024
2016-12-09 23:34:20.819 13806 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 241
2016-12-09 23:34:20.819 13806 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 3, total allocated vcpus: 0
2016-12-09 23:34:20.819 13806 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2016-12-09 23:34:20.820 13806 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute:compute
2016-12-09 23:34:21.397 13806 WARNING nova.compute.manager [-] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
After setting debug = True in nova.conf, I get the following in nova-scheduler.log:
2016-12-06 06:18:17.314 12916 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
2016-12-06 06:18:18.188 14015 AUDIT nova.service [-] Starting scheduler node (version 2014.2.4 ...
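For reference, debug was enabled like this (stock Juno packaging assumed):

# /etc/nova/nova.conf on the controller
[DEFAULT]
debug = True

# then restart the scheduler so it picks up the change
service nova-scheduler restart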
I am also not sure how to interpret the WARNING line from the nova-compute log on the compute node ("found 1 instances in the database and 0 instances on the hypervisor").
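To cross-check that warning, the database view can be compared with what libvirt actually reports (commands for a Juno deployment; the instance UUID is a placeholder):

nova list --all-tenants          # instances the database knows about
virsh list --all                 # domains libvirt actually has on the compute node
nova reset-state --active <instance-uuid>   # reset a stuck record if the two views disagree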
Wow, this error again. Try changing the hypervisor from KVM to QEMU in the nova.conf file under /etc/nova ([libvirt] virt_type = qemu), then restart the nova services.
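That is, something like this (assuming a packaged install; on some setups the option lives in /etc/nova/nova-compute.conf instead):

# /etc/nova/nova.conf on the compute node
[libvirt]
virt_type = qemu

# then restart the compute service
service nova-compute restart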
@yas That's already been done; I set it in /etc/nova/nova-compute.conf on the compute node when deploying OpenStack. I've restarted nova multiple times since then.
Please post your neutron logs. This can be a neutron port-binding issue: the instance isn't able to create its port on your compute node.
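A few things worth checking (standard Juno-era neutron CLI; <port-id> is whatever neutron port-list shows for the failed instance):

neutron agent-list            # every agent should be alive and admin_state_up True
neutron port-list             # find the port created for the instance
neutron port-show <port-id>   # binding:vif_type = binding_failed means the port never bound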
@vathanlal You were right, it had something to do with network ports. I still don't know why the admin ports are down, though.
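For anyone who lands here later: a port stuck in DOWN can sometimes be brought up by hand (a rough sketch; the attribute-style flag is from the old neutron CLI, and <port-id> is a placeholder):

neutron port-update <port-id> --admin_state_up=True
neutron port-show <port-id>   # status should go from DOWN to ACTIVE once binding succeeds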