
Is there a reason why cinder throws an error on one compute node, but not others?

asked 2013-09-17 08:44:31 -0600

samishah

I recently added a new system to my pool and set it up and configured it exactly the same way as the previous 3 systems. When I set up those systems, everything seemed to run smoothly and I was able to create instances instantly.

With this latest addition, using the same install files (so as not to change anything), I installed OpenStack, but when I try to create an instance, the status goes to "Error" and "No State".

Looking at the console, I see the following error:

2013-09-17 08:57:08.505 TRACE cinder.service ProgrammingError: (ProgrammingError) (1146, "Table 'cinder.volume_admin_metadata' doesn't exist") 'SELECT volumes.created_at AS volumes_created_at, volumes.updated_at AS volumes_updated_at, volumes.deleted_at AS volumes_deleted_at, volumes.id AS volumes_id, volumes._name_id AS volumes__name_id, volumes.ec2_id AS volumes_ec2_id, volumes.user_id AS volumes_user_id, volumes.project_id AS volumes_project_id, volumes.snapshot_id AS volumes_snapshot_id, volumes.host AS volumes_host, volumes.size AS volumes_size, volumes.availability_zone AS volumes_availability_zone, volumes.instance_uuid AS volumes_instance_uuid, volumes.attached_host AS volumes_attached_host, volumes.mountpoint AS volumes_mountpoint, volumes.attach_time AS volumes_attach_time, volumes.status AS volumes_status, volumes.attach_status AS volumes_attach_status, volumes.migration_status AS volumes_migration_status, volumes.scheduled_at AS volumes_scheduled_at, volumes.launched_at AS volumes_launched_at, volumes.terminated_at AS volumes_terminated_at, volumes.display_name AS volumes_display_name, volumes.display_description AS volumes_display_description, volumes.provider_location AS volumes_provider_location, volumes.provider_auth AS volumes_provider_auth, volumes.provider_geometry AS volumes_provider_geometry, volumes.volume_type_id AS volumes_volume_type_id, volumes.source_volid AS volumes_source_volid, volumes.encryption_key_id AS volumes_encryption_key_id, volumes.deleted AS volumes_deleted, volumes.bootable AS volumes_bootable, volume_types_1.created_at AS volume_types_1_created_at, volume_types_1.updated_at AS volume_types_1_updated_at, volume_types_1.deleted_at AS volume_types_1_deleted_at, volume_types_1.deleted AS volume_types_1_deleted, volume_types_1.id AS volume_types_1_id, volume_types_1.name AS volume_types_1_name, volume_types_1.qos_specs_id AS volume_types_1_qos_specs_id, volume_admin_metadata_1.created_at AS volume_admin_metadata_1_created_at, volume_admin_metadata_1.updated_at AS volume_admin_metadata_1_updated_at, volume_admin_metadata_1.deleted_at AS volume_admin_metadata_1_deleted_at, volume_admin_metadata_1.deleted AS volume_admin_metadata_1_deleted, volume_admin_metadata_1.id AS volume_admin_metadata_1_id, volume_admin_metadata_1.key AS volume_admin_metadata_1_key, volume_admin_metadata_1.value AS volume_admin_metadata_1_value, volume_admin_metadata_1.volume_id AS volume_admin_metadata_1_volume_id, volume_metadata_1.created_at AS volume_metadata_1_created_at, volume_metadata_1.updated_at AS volume_metadata_1_updated_at, volume_metadata_1.deleted_at AS volume_metadata_1_deleted_at, volume_metadata_1.deleted AS volume_metadata_1_deleted, volume_metadata_1.id AS volume_metadata_1_id, volume_metadata_1.key AS volume_metadata_1_key, volume_metadata_1.value AS volume_metadata_1_value, volume_metadata_1.volume_id AS volume_metadata_1_volume_id \nFROM volumes LEFT OUTER JOIN volume_types AS volume_types_1 ON volumes.volume_type_id = volume_types_1.id AND volume_types_1.deleted = %s LEFT OUTER JOIN volume_admin_metadata AS volume_admin_metadata_1 ON volume_admin_metadata_1.volume_id = volumes.id AND volume_admin_metadata_1.deleted = %s LEFT OUTER JOIN volume_metadata AS volume_metadata_1 ON volume_metadata_1.volume_id = volumes.id AND volume_metadata_1.deleted = %s \nWHERE volumes.deleted = %s AND volumes.host = %s' (0, 0, 0, 0, '<hostname>') 2013-09-17 08:57:08.505 TRACE cinder.service



The traceback above indicates that the Cinder db sync probably did not happen correctly on the new system. Can you be more specific about (1) what service(s) you installed on the new system and (2) what configuration settings you applied on this new server?

unmesh-gurjar ( 2013-09-18 03:12:43 -0600 )
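A minimal sketch of how that db sync would be re-run, assuming the Grizzly/Havana-era cinder-manage CLI and that the node's cinder.conf already points at the shared database (adjust for your deployment):

```shell
# Re-run the Cinder database migrations from a node running c-api/c-vol.
# Guarded with command -v so the sketch exits cleanly on a machine
# without Cinder installed.
if command -v cinder-manage >/dev/null 2>&1; then
    cinder-manage db sync       # creates missing tables such as volume_admin_metadata
    cinder-manage db version    # print the resulting migration level
else
    echo "cinder-manage not found on this host"
fi
```

If the sync succeeds, the "Table 'cinder.volume_admin_metadata' doesn't exist" trace should stop appearing when the cinder services are restarted.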

I was setting this system up as compute - the services enabled were: n-cpu, n-net, n-api, c-sch, c-api, c-vol. I think there is a version mismatch between the setup of the new and old compute nodes. I tried to correct it by using the same install files as my other nodes, but I think the db is not in sync.

samishah ( 2013-09-18 08:45:38 -0600 )

If you want the system to be a compute node, you can remove "c-api, c-sch, c-vol" from the enabled services list on this node.

unmesh-gurjar ( 2013-09-18 09:45:47 -0600 )
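Trimming the list could look roughly like this in a devstack-style localrc (hypothetical fragment; the service short names are the ones from the comments above, and the exact variable depends on the devstack version in use):

```
# localrc fragment for a compute-only node: keep nova compute/network/api,
# drop the Cinder services (c-api, c-sch, c-vol) so this node does not
# try to talk to the cinder database at all
ENABLED_SERVICES=n-cpu,n-net,n-api
```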

Hi, could you edit your post and be more specific about the configuration files, the method of installation and the versions you've used? But at first sight, it seems like your services aren't pointing to the right database.

Sirushti Murugesan ( 2013-09-20 12:17:07 -0600 )
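One concrete thing to compare across nodes is the database URL in /etc/cinder/cinder.conf. On releases of that era it looks roughly like this (the "controller" hostname and password are placeholders, not values from this question):

```ini
[DEFAULT]
# Every node running a Cinder service must point at the same database.
# A node pointing at a fresh, never-synced DB raises exactly the
# "Table 'cinder.volume_admin_metadata' doesn't exist" error above.
sql_connection = mysql://cinder:CINDER_DBPASS@controller/cinder
```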

1 answer


answered 2016-04-15 02:02:10 -0600

DarkKnight

I have faced a similar issue on one of my compute nodes: volumes were not being gracefully detached from one of the compute nodes (in a setup with 3 compute nodes and 1 controller). What you can check is whether these packages are installed on each compute node: sysfsutils, sg3_utils, multipath-tools.

On Red Hat based systems, use "yum install -y sysfsutils sg3_utils device-mapper-multipath" to install them (the multipath package is named device-mapper-multipath there rather than multipath-tools). All the best!
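A quick way to check for those packages on either distro family; this is a sketch that tries dpkg first and then rpm, covering both naming schemes (Debian/Ubuntu: sg3-utils, multipath-tools; Red Hat: sg3_utils, device-mapper-multipath):

```shell
# Report which of the volume-attach helper packages are present.
check_pkg() {
    dpkg -s "$1" >/dev/null 2>&1 || rpm -q "$1" >/dev/null 2>&1
}

for pkg in sysfsutils sg3-utils sg3_utils multipath-tools device-mapper-multipath; do
    if check_pkg "$pkg"; then
        echo "$pkg: installed"
    else
        echo "$pkg: MISSING"
    fi
done
```

Only one of each Debian/Red Hat pair needs to show up as installed, so expect two "MISSING" lines even on a healthy node.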


Seen: 438 times

Last updated: Apr 15 '16