
error connecting to the cluster nova.compute.manager

asked 2016-04-08 12:30:58 -0600


updated 2016-04-11 12:35:02 -0600

Hi All,

I am currently seeing this error on a dedicated compute node in my cluster. My controller and first compute node are working fine, but I continue to see this exception on a second node I attempted to add to the cluster. I can see that the hypervisor, the nova-compute service, and the Linux bridge agent are all running and connected to the controller, but when I try to spawn an instance it fails with a similar message.

[UPDATE] I see the node in the compute service list but not in the hypervisor list. I checked the libvirtd process and it's running okay.
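
A hypothetical way to reproduce that check from the controller (assuming the Mitaka-era nova CLI and an `admin-openrc.sh` credentials file, both my assumptions, not taken from the thread):

```shell
# Load admin credentials (assumed filename; adjust to your environment).
source admin-openrc.sh

# The service list shows every registered nova-compute agent.
nova service-list | grep nova-compute

# The hypervisor list only shows nodes whose resource tracker has
# successfully created a compute node record.
nova hypervisor-list
```

If a node appears in the first list but not the second, the agent is checking in over RPC but the resource tracker's audit is failing before it can register the node, which matches the `Error updating resources` line in the log below.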

I tried renaming the node from 01 to 02 to see if the record was the problem, but I get the same exception:

2016-04-11 13:34:00.130 27957 ERROR nova.compute.manager [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] No compute node record for host compute-node02
2016-04-11 13:34:00.152 27957 WARNING nova.compute.monitors [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] Excluding nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled monitors (CONF.compute_monitors).
2016-04-11 13:34:00.155 27957 INFO nova.compute.resource_tracker [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] Auditing locally available compute resources for node compute-node01
2016-04-11 13:34:00.258 27957 ERROR nova.compute.manager [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] Error updating resources for node compute-node01: error connecting to the cluster

Included my nova.conf below:

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip =
network_api_class =
security_group_api = neutron
linuxnet_interface_driver =
firewall_driver = nova.virt.firewall.NoopFirewallDriver
host = controller-node01

[keystone_authtoken]
auth_uri = http://controller-node01:5000
auth_url = http://controller-node01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = OpenstackMain123

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = admin
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes = network=writeback

[neutron]
url = http://controller-node01:9696
auth_url = http://controller-node01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = OpenstackMain123
service_metadata_proxy = True
metadata_proxy_shared_secret = OpenstackMain123

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[oslo_messaging_rabbit]
rabbit_host =
rabbit_userid = openstack
rabbit_password = OpenstackMain123

[vnc]
enabled = True
vncserver_listen =
vncserver_proxyclient_address =
novncproxy_base_url =
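
One observation the thread itself does not confirm, so treat it as a hypothesis: with `images_type = rbd`, "error connecting to the cluster" is most likely the message librados raises when the compute node cannot reach the Ceph monitors. A quick sanity check from the failing compute node, using the same `ceph.conf` and user as the config above:

```shell
# Try to reach the Ceph cluster with exactly the settings nova-compute
# will use (paths/user taken from the nova.conf in the question).
ceph -s --conf /etc/ceph/ceph.conf --id admin

# Also confirm the libvirt secret matching rbd_secret_uuid is defined,
# since nova-compute needs it to attach RBD-backed disks.
virsh secret-list
```

If `ceph -s` hangs or errors here, the resource tracker will fail its audit with the same "error connecting to the cluster" message, regardless of how the node is named.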


2 answers


answered 2018-05-23 00:12:03 -0600

I resolved it by rebooting the compute nodes; after that, the node came online on the controller.


answered 2016-04-11 12:55:57 -0600


I resolved it by uninstalling the compute service and manually retyping the config. There must have been a mistake in my nova config file, but it's working now.


