# Revision history

### error connecting to the cluster nova.compute.manager

Hi All,

I am currently seeing this error on a dedicated compute node in my cluster. My controller and first compute node are working fine, but I continue to see this exception on a second node I attempted to add to the cluster. I can see that the hypervisor, nova-compute service, and Linux bridge are all running and connected to the controller, but when I try to spawn an instance it fails with a similar message.

[UPDATE] I see the node in the compute service list but not in the hypervisor list. I checked the libvirtd process and it's running okay.
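
For anyone hitting the same mismatch, a quick way to spot hosts that registered a nova-compute service but never got a hypervisor record is to diff the two host lists. This is only an illustrative sketch: the `missing_hypervisors` helper is made up, and its input files are assumed to hold one hostname per line (e.g. copied from the output of `openstack compute service list` and `openstack hypervisor list`):

```shell
# Hypothetical helper, not part of any OpenStack tooling: print hosts that
# appear in the service-host list but not in the hypervisor-host list.
missing_hypervisors() {
    svc=$(mktemp)
    hyp=$(mktemp)
    sort "$1" > "$svc"
    sort "$2" > "$hyp"
    # comm -23 keeps lines unique to the first (sorted) file
    comm -23 "$svc" "$hyp"
    rm -f "$svc" "$hyp"
}
```

In my case that diff would be expected to show just the new node.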

I tried renaming the node from 01 to 02 to see if the record was the problem, but I get the same exception:

    2016-04-11 13:34:00.130 27957 ERROR nova.compute.manager [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] No compute node record for host compute-node02
    2016-04-11 13:34:00.152 27957 WARNING nova.compute.monitors [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] Excluding nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled monitors (CONF.compute_monitors).
    2016-04-11 13:34:00.155 27957 INFO nova.compute.resource_tracker [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] Auditing locally available compute resources for node compute-node01
    2016-04-11 13:34:00.258 27957 ERROR nova.compute.manager [req-6f16a31f-ba3a-432c-a5f5-d19562e4a472 - - - - -] Error updating resources for node compute-node01: error connecting to the cluster
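
As far as I can tell, the `error connecting to the cluster` text at the end of that trace is what librados reports when the RBD image backend cannot reach the Ceph monitors, so before anything else I would want to verify that the `ceph.conf` named in `[libvirt]` exists on this node and actually lists the monitors. A rough, hand-rolled check (the function and its messages are my own sketch, not Nova output):

```shell
# Rough sanity check for the ceph.conf a compute node will use; assumes the
# path from nova.conf ([libvirt]/images_rbd_ceph_conf). Not an official tool.
check_ceph_conf() {
    conf="$1"
    if [ ! -r "$conf" ]; then
        echo "missing or unreadable: $conf"
        return 1
    fi
    # librados needs the monitor addresses; warn if none are listed.
    if ! grep -Eq '^[[:space:]]*mon[_ ]host' "$conf"; then
        echo "warning: no mon_host entry in $conf"
    fi
    return 0
}
check_ceph_conf /etc/ceph/ceph.conf || true
```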

I've included my nova.conf below:

    [DEFAULT]
    rpc_backend = rabbit
    auth_strategy = keystone
    my_ip = 10.16.121.248
    network_api_class = nova.network.neutronv2.api.API
    security_group_api = neutron
    linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    debug = true
    verbose = true
    [api_database]
    [barbican]
    [cells]
    [cinder]
    [conductor]
    [cors]
    [cors.subdomain]
    [database]
    [ephemeral_storage_encryption]
    [glance]
    host = controller-node01
    [guestfs]
    [hyperv]
    [image_file_url]
    [ironic]
    [keymgr]
    [keystone_authtoken]
    auth_uri = http://controller-node01:5000
    auth_url = http://controller-node01:35357
    project_domain_id = default
    user_domain_id = default
    project_name = service
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
    disk_cachemodes = network=writeback
    [matchmaker_redis]
    [matchmaker_ring]
    [metrics]
    [neutron]
    url = http://controller-node01:9696
    auth_url = http://controller-node01:35357
    project_domain_id = default
    user_domain_id = default
    region_name = RegionOne
    project_name = service
    [osapi_v21]
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_qpid]
    [oslo_messaging_rabbit]
    rabbit_host = 10.16.121.248
    rabbit_userid = openstack
    [oslo_middleware]
    [rdp]
    [serial_console]
    [spice]
    [ssl]
    [trusted_computing]
    [vmware]
    [vnc]
    enabled = True
    vncserver_listen = 0.0.0.0
    novncproxy_base_url = http://10.16.121.229:6080/vnc_auto.html
    [workarounds]
    [xenserver]
    [zookeeper]
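
One thing I notice comparing against RBD-backed examples in the Ceph/OpenStack integration docs: my `[libvirt]` section sets `rbd_secret_uuid` but no `rbd_user`. For illustration only, the shape of that section usually shown there looks like the following (the `cinder` client name is an assumption from typical examples, not something taken from my deployment):

```ini
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
; rbd_user names the cephx client (assumed here) whose key was imported
; into libvirt under the uuid below
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes = network=writeback
```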