
What are the prerequisites for multi-node Cinder?

asked 2013-04-05 18:49:00 -0600

shantha-kumar

I have installed the OpenStack controller + compute services on a single node. Now I want to move the Cinder component alone to another node.

Kindly let me know the prerequisites for configuring the Cinder node on a separate server.


23 answers


answered 2013-04-16 09:55:16 -0600

shantha-kumar

Thanks for your response.

Does the install on the new node include cinder-api and cinder-scheduler as well, or only cinder-volume?


answered 2013-04-17 15:28:46 -0600

shantha-kumar

Thanks for your response.

So, as per your setup, you have two cinder-scheduler services (one on the controller node and the other on the new node). When I submit a request from Horizon, how is it routed between the two schedulers, and where are the volumes created?


answered 2013-04-18 09:07:52 -0600

shantha-kumar

Thanks for your response.

Now I have removed the cinder-volume service from node1, and the setup looks like the one below, but I am still getting the "No valid host was found" error.

MySQL and RabbitMQ point to the controller node.

##################################

Controller node1: cinder-api, cinder-scheduler

Cluster node2: cinder-volume, tgtd

cinder.conf:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.1.0.29/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
rabbit_host = 10.1.0.29
volumes_dir = /etc/cinder/volumes
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
verbose = True
auth_strategy = keystone
debug = true
#osapi_volume_listen_port=5900
iscsi_ip_address = 10.1.0.7


answered 2013-04-18 09:09:57 -0600

shantha-kumar

Thanks for your response on multi-backend drivers.

We have multi-backend enabled and it is working! Now we want to spread the backends across individual systems.

That is, each cinder-volume service should point to only one backend.
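For reference, this is roughly what Grizzly-era multi-backend looks like in cinder.conf; the section names, volume groups, and backend names below are made-up examples for illustration, not values from this thread:

```ini
[DEFAULT]
# Each name here refers to a config section below; one cinder-volume
# process on this node serves all of the enabled backends.
enabled_backends = lvm-a,lvm-b

[lvm-a]
volume_group = cinder-volumes-a
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_A

[lvm-b]
volume_group = cinder-volumes-b
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_B
```

To pin one backend per node instead, as asked above, run a separate cinder-volume service on each node without `enabled_backends`, giving each node only its own `volume_group`/`volume_driver` settings; the scheduler then selects a host via volume types mapped to `volume_backend_name`.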


answered 2013-04-22 13:26:55 -0600

Share the log that contains the error; that will make it easier to pinpoint your problem.


answered 2013-04-23 05:25:02 -0600

shantha-kumar

Thanks for your response.

I have attached the scheduler log from when I was trying to create the volume.

FYI: the Cinder host list command lists the host, and the host is present in the MySQL DB.

###############################

tailf /var/log/cinder/cinder-scheduler.log

2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', u'_context_quota_class': None, u'_unique_id': u'367cc5b4acf74f2ba4d379f9db804b81', u'_context_read_deleted': u'no', u'args': {u'service_name': u'volume', u'host': u'cinder', u'capabilities': {u'QoS_support': False, u'volume_backend_name': u'LVM_iSCSI', u'free_capacity_gb': 549.75, u'driver_version': u'1.0', u'total_capacity_gb': 549.75, u'reserved_percentage': 0, u'vendor_name': u'Open Source', u'storage_protocol': u'iSCSI'}}, u'_context_tenant': None, u'_context_auth_token': '<sanitized>', u'_context_is_admin': True, u'version': u'1.0', u'_context_project_id': None, u'_context_timestamp': u'2013-04-23T05:21:00.558719', u'_context_user': None, u'_context_user_id': None, u'method': u'update_service_capabilities', u'_context_remote_address': None}

2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': None, 'roles': [u'admin'], 'timestamp': u'2013-04-23T05:21:00.558719', 'auth_token': '<sanitized>', 'remote_address': None, 'quota_class': None, 'is_admin': True, 'user': None, 'request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', 'project_id': None, 'read_deleted': u'no', 'tenant': None}

2013-04-24 00:20:59 DEBUG [cinder.scheduler.host_manager] Received volume service update from cinder.

2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], u'_context_request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', u'_context_quota_class': None, u'_unique_id': u'7d86b91fcab24a3c8b00a9a3ad18d0f1', u'_context_read_deleted': u'no', u'args': {u'request_spec': {u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': None, u'availability_zone': u'nova', u'attach_status': u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata': [], u'display_description': None, u'snapshot_id': None, u'user_id': u'4b106ff812b54472b8e2d3596524f730', u'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'id': u'73978248-b515-4c03-a339-5ca7979324a2', u'size': 2}, u'volume_type': {}, u'image_id': None, u'source_volid': None, u'snapshot_id': None}, u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'filter_properties': {}, u'topic': u'cinder-volume', u'image_id': None, u'snapshot_id': None}, u'_context_tenant': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_auth_token': '<sanitized>', u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_timestamp': u'2013-04-23T18:51:18.663053', u'_context_user': u'4b106ff812b54472b8e2d3596524f730', u'_context_user_id': u'4b106ff812b54472b8e2d3596524f730', u'method': u'create_volume', u'_context_remote_address': u'10.1.0.29'}

2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': u'4b106ff812b54472b8e2d3596524f730', 'roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], 'timestamp': u'2013-04-23T18:51:18.663053', 'auth_token': '<sanitized>', 'remote_address': u'10.1.0.29', 'quota_class': None, 'is_admin': True, 'user': u'4b106ff812b54472b8e2d3596524f730', 'request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', 'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', 'read_deleted': u'no', 'tenant': u'fedfaa89548e41188c2dbd6f96d0de4a'}

2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.

Kindly let me know if you need any more logs.


answered 2013-04-25 02:27:12 -0600

From your log, I see that the cinder-volume service is down or disabled.

Take a look at the services table in the cinder DB to make sure that cinder-volume is present and hasn't been disabled.


answered 2013-04-25 05:19:50 -0600

shantha-kumar

Thanks for your response.

The cinder-volume service is running on the node (CINDER).

Please find the cinder DB services table below, which shows the cinder-volume service:

root@grzrc3:~# mysql -uroot -piso*help cinder -e 'select * from services;'
+---------------------+---------------------+------------+---------+----+--------+------------------+------------------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host   | binary           | topic            | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+--------+------------------+------------------+--------------+----------+-------------------+
| 2013-04-08 22:48:21 | 2013-04-25 18:45:44 | NULL       |       0 |  1 | grzrc3 | cinder-scheduler | cinder-scheduler |       144940 |        0 | nova              |
| 2013-04-18 10:14:32 | 2013-04-24 09:49:08 | NULL       |       0 |  6 | cinder | cinder-volume    | cinder-volume    |        43577 |        0 | nova              |
+---------------------+---------------------+------------+---------+----+--------+------------------+------------------+--------------+----------+-------------------+
root@grzrc3:~#

root@grzrc3:~# cinder-manage host list
host     zone
cinder   nova
root@grzrc3:~#

Do any configuration changes need to be made?


answered 2013-04-26 02:45:01 -0600

| 2013-04-08 22:48:21 | 2013-04-25 18:45:44 | NULL | 0 | 1 | grzrc3 | cinder-scheduler | cinder-scheduler | 144940 | 0 | nova |
| 2013-04-18 10:14:32 | 2013-04-24 09:49:08 | NULL | 0 | 6 | cinder | cinder-volume    | cinder-volume    |  43577 | 0 | nova |

I see that the updated_at field of cinder-volume has not been refreshed recently, so the cinder-scheduler marks it as a service that is down.

Here is the check Cinder uses to decide whether a service is down:

last_heartbeat = service['updated_at'] or service['created_at']
# Timestamps in DB are UTC.
elapsed = total_seconds(timeutils.utcnow() - last_heartbeat)
return abs(elapsed) <= FLAGS.service_down_time

FLAGS.service_down_time defaults to 60 seconds.
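A self-contained sketch of that check, using plain datetime in place of Cinder's timeutils/FLAGS, with the updated_at timestamps from the services table posted earlier:

```python
from datetime import datetime

SERVICE_DOWN_TIME = 60  # seconds; the FLAGS.service_down_time default


def service_is_up(service, now):
    """Re-implementation of the heartbeat check quoted above."""
    last_heartbeat = service['updated_at'] or service['created_at']
    # Timestamps in the DB are UTC.
    elapsed = (now - last_heartbeat).total_seconds()
    return abs(elapsed) <= SERVICE_DOWN_TIME


# updated_at values taken from the services table in this thread:
now = datetime(2013, 4, 25, 18, 45, 44)
scheduler = {'updated_at': datetime(2013, 4, 25, 18, 45, 44), 'created_at': None}
volume = {'updated_at': datetime(2013, 4, 24, 9, 49, 8), 'created_at': None}

print(service_is_up(scheduler, now))  # True  -> heartbeat is fresh
print(service_is_up(volume, now))     # False -> over a day stale, so the
                                      # scheduler logs "service is down or disabled"
```

So even though the cinder-volume process is running on node2, its heartbeat (which bumps report_count and updated_at in the DB every 10 seconds) is not reaching the controller's database, and the scheduler filters the host out.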

Hope this helps you detect your problem.


answered 2013-04-26 18:39:53 -0600

shantha-kumar

Thanks for your response.

I am not sure how to apply the formula mentioned above; kindly advise me on how to proceed further.



Stats

Asked: 2013-04-05 18:49:00 -0600

Seen: 988 times

Last updated: Aug 06 '18