While launching an instance it shows "No valid host found" [closed]

asked 2015-06-30 06:57:07 -0500 by smcas

updated 2015-07-01 00:19:11 -0500

I am new to OpenStack, so please help me sort out this issue. I am using Icehouse on CentOS 6.6 with a three-node setup (controller, network, and compute). When I launch a CirrOS instance, the console only shows "Starting up..." and nothing else happens. When I launch a CentOS instance, it sometimes shows "No valid host found", or it shuts off automatically and the console shows "Failed to connect to server (code: 1006)". This is my nova/conductor.log on the controller:

    2015-06-30 17:04:47.007 57091 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py:132

The same log entry appears in console.log, scheduler.log, and consoleauth.log.

My compute.log on the compute node:

    2015-06-30 17:08:47.016 45889 DEBUG nova.network.api [-] Updating cache with info: [VIF({'ovs_interfaceid': u'258fd02c-7a4c-4e32-877b-6b1a1d99064d', 'network': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u''})], 'version': 4, 'meta': {'dhcp_server': u''}, 'dns': [IP({'meta': {}, 'version': 4, 'type': 'dns', 'address': u''})], 'routes': [], 'cidr': u'', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u''})})], 'meta': {'injected': False, 'tenant_id': u'9edc66f8d08d451f94017c8b2a59de6f'}, 'id': u'1b11aad8-8e09-4c70-9453-7cad013f94c0', 'label': u'int-net'}), 'devname': u'tap258fd02c-7a', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': u'fa:16:3e:f5:66:42', 'active': False, 'type': u'ovs', 'id': u'258fd02c-7a4c-4e32-877b-6b1a1d99064d', 'qbg_params': None})] update_instance_cache_with_nw_info /usr/lib/python2.6/site-packages/nova/network/api.py:81
    2015-06-30 17:08:47.038 45889 DEBUG nova.compute.manager [-] [instance: f13cc874-e0b7-44fa-a412-6af59dcf185f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.6/site-packages/nova/compute/manager.py:4897
    2015-06-30 17:08:47.038 45889 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py:132

Output of openstack-status:

[root@newcontroller nova]# openstack-status

== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 dead      (disabled on boot)
openstack-nova-network:                 dead      (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    dead      (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Support services ==
libvirtd:                               active
openvswitch:                            active
messagebus:                             active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active

My /neutron/lbaas.log on the controller:

2015-06-30 14:39:24.309 46154 TRACE neutron
2015-06-30 14:39:26.373 46177 INFO neutron.common.config [-] Logging enabled!
2015-06-30 14:39:26.381 46177 CRITICAL neutron [req-1fa6eb98-a6d5-42ef-bf3c-6ced54d78b78 None] 'NoneType' object has no attribute 'rpartition'
2015-06-30 14:39:26.381 46177 TRACE neutron Traceback (most recent call last):
2015-06-30 14:39:26.381 46177 TRACE neutron   File "/usr/bin/neutron-lbaas-agent", line 10, in <module>
2015-06-30 14:39:26.381 46177 TRACE neutron     sys.exit(main())
2015-06-30 14:39:26.381 46177 TRACE neutron   File "/usr ...

Closed for the following reason: the question is answered, right answer was accepted by smcas

Close date: 2015-07-09 05:29:23.601941


This is unreadable; could you mark the log entries as code? Highlight them and click the "101010" icon.

Bernd Bausch ( 2015-06-30 07:15:04 -0500 )

Finally got the solution: in the nova.conf file I was using virt_type=qemu, but after changing it to virt_type=kvm it worked.

smcas ( 2015-07-09 05:28:38 -0500 )
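For readers hitting the same problem, the accepted fix can be sketched as follows. This is not verbatim from the thread: it assumes the default config path /etc/nova/nova.conf on the compute node, and that on Icehouse the virt_type option sits in the [libvirt] section.

```shell
# 1. Confirm the CPU exposes hardware virtualization (a count > 0 means yes;
#    without vmx/svm, virt_type=kvm will not work and qemu is the fallback):
egrep -c '(vmx|svm)' /proc/cpuinfo

# 2. Flip the libvirt driver setting from qemu to kvm:
sed -i 's/^virt_type *= *qemu/virt_type = kvm/' /etc/nova/nova.conf

# 3. Restart the compute service so the change takes effect:
service openstack-nova-compute restart
```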

2 answers


answered 2015-06-30 20:12:36 -0500 by nethawk

There are many reasons for this error. Check Nova's scheduler.log to see which filter failed.
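A quick way to do that is shown below. This is a sketch: the path assumes the default /var/log/nova/nova-scheduler.log on the controller, and the grep pattern is only a guess at the usual filter messages.

```shell
# Pull the most recent filter-related lines out of the scheduler log:
grep -iE 'filter|no valid host|returned 0 hosts' /var/log/nova/nova-scheduler.log | tail -n 20
```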


answered 2015-07-01 13:46:15 -0500 by jdiaz9

It may be that you have run out of resources on your compute node. I would check your nova-scheduler.log and see what it says; it is located at /var/log/nova/nova-scheduler.log on your controller node. Also, nova hypervisor-list will tell you what resources you have available, if any.
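For example (a sketch, assuming admin credentials have already been sourced from an openrc file; the awk step merely pulls the free-RAM figure out of nova's table output):

```shell
# List hypervisors, then extract remaining free RAM (MB) from the stats table:
nova hypervisor-list
nova hypervisor-stats | awk -F'|' '/free_ram_mb/ {gsub(/ /,"",$3); print $3}'
```

If free_ram_mb or free_disk_gb is near zero, the RamFilter or DiskFilter will reject every host and the scheduler reports "No valid host was found".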

