
What are the prerequisites for multi-node Cinder?

asked 2013-04-05 18:49:00 -0500

shantha-kumar

I have installed the OpenStack controller + compute on a single node. Now I want to break out the Cinder component alone onto another node.

Kindly let me know the prerequisites for configuring the Cinder node on a separate server.


23 answers


answered 2013-04-10 06:55:19 -0500

Go ahead and install the Cinder components on another node. ^.^ Just remember to update the cinder volume endpoint in Keystone to point at the new server.
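Assuming the Grizzly-era keystone CLI, the endpoint update might look like the sketch below. The service/endpoint IDs and the 10.1.0.7 address are placeholders for your deployment, and the commands are printed rather than executed so the sketch has no side effects against a live Keystone:

```shell
# Hypothetical new cinder node address -- substitute your own.
NEW_CINDER_HOST=10.1.0.7
VOLUME_URL="http://${NEW_CINDER_HOST}:8776/v1/%(tenant_id)s"

# The commands to run, with your real IDs in place of the placeholders:
cat <<EOF
keystone endpoint-list                     # find the old volume endpoint id
keystone endpoint-delete OLD_ENDPOINT_ID
keystone endpoint-create --region RegionOne --service-id VOLUME_SERVICE_ID \\
    --publicurl   '$VOLUME_URL' \\
    --adminurl    '$VOLUME_URL' \\
    --internalurl '$VOLUME_URL'
EOF
```

After recreating the endpoint, a `cinder list` from a client should hit the new node.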


answered 2013-04-16 09:55:16 -0500

shantha-kumar

Thanks for your response.

Does installing the Cinder component include cinder-api and cinder-scheduler too, or only cinder-volume?


answered 2013-04-16 10:07:49 -0500

In my case, I installed all of them on the storage node.


answered 2013-04-17 15:28:46 -0500

shantha-kumar

Thanks for your response.

So per your setup you have two cinder-schedulers (one on the controller node and the other on the new node). When I make a request from Horizon, how does it get routed between the two schedulers, and where does it create the volumes?


answered 2013-04-18 02:41:02 -0500

For a simple deployment, I suggest you install one cinder-api and one cinder-scheduler on the controller node, and install cinder-volume on the cluster nodes.

I wanted HA for these Cinder services, so I installed all of them on every node.
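In concrete terms, a split like that might look like the fragments below (addresses taken from this thread's setup; the password is a placeholder). The key point is that every Cinder service, wherever it runs, points at the same database and message queue on the controller:

```ini
# Controller node (runs cinder-api + cinder-scheduler): /etc/cinder/cinder.conf
[DEFAULT]
sql_connection = mysql://cinderUser:CINDER_DB_PASS@10.1.0.29/cinder
rabbit_host = 10.1.0.29
auth_strategy = keystone

# Storage node (runs cinder-volume + the iSCSI target daemon): /etc/cinder/cinder.conf
[DEFAULT]
sql_connection = mysql://cinderUser:CINDER_DB_PASS@10.1.0.29/cinder
rabbit_host = 10.1.0.29
volume_group = cinder-volumes
iscsi_ip_address = 10.1.0.7
```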


answered 2013-04-18 03:07:45 -0500

Other options for expanding the storage nodes:

  1. Use multi-backend LVM volumes; see the guide for details: http://docs.openstack.org/trunk/openstack-block-storage/admin/content/multi_backend.html

  2. Pick a stable storage back-end; that makes it easier to add storage nodes dynamically. In my deployment I'm using Ceph as the storage back-end. ^.^

Hope these are useful to you ;)
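For option 1, the multi-backend setup from that guide boils down to a cinder.conf fragment like this (the backend section names here are illustrative):

```ini
[DEFAULT]
enabled_backends = lvm-1,lvm-2

[lvm-1]
volume_group = cinder-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_1

[lvm-2]
volume_group = cinder-volumes-2
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_2
```

The scheduler then chooses a backend by its volume_backend_name, which you attach to a volume type.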


answered 2013-04-18 09:07:52 -0500

shantha-kumar

Thanks for your response.

Now I have removed the cinder-volume service from node1, and the setup looks like the one below, but I'm still getting the "No valid host was found" error.

MySQL and RabbitMQ point to the controller node.

##################################

Controller node1: cinder-api, cinder-scheduler

Cluster node2: cinder-volume, tgtd

cinder.conf:

  [DEFAULT]
  rootwrap_config = /etc/cinder/rootwrap.conf
  sql_connection = mysql://cinderUser:cinderPass@10.1.0.29/cinder
  api_paste_confg = /etc/cinder/api-paste.ini
  iscsi_helper = ietadm
  volume_name_template = volume-%s
  volume_group = cinder-volumes
  rabbit_host = 10.1.0.29
  volumes_dir = /etc/cinder/volumes
  state_path = /var/lib/cinder
  lock_path = /var/lock/cinder
  verbose = True
  auth_strategy = keystone
  debug = true
  #osapi_volume_listen_port=5900
  iscsi_ip_address = 10.1.0.7
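One mismatch stands out in that config: it sets iscsi_helper=ietadm, while node2 runs tgtd. For the tgt daemon the helper should be tgtadm; with the wrong helper the volume driver can fail to initialize, and a cinder-volume whose driver never comes up is exactly what makes the scheduler report "No valid host was found". A corrected fragment for the volume node (a suggestion based on the details above, not a guaranteed fix):

```ini
[DEFAULT]
iscsi_helper = tgtadm         # matches the tgtd daemon; ietadm is for iscsitarget (IET)
iscsi_ip_address = 10.1.0.7   # the volume node's own address, as you already have it
```

After changing it, restart cinder-volume on node2 and check the host list on the controller again.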


answered 2013-04-18 09:09:57 -0500

shantha-kumar

Thanks for your response on multi-backend drivers.

We have multi-backend enabled and it's working! Now we want to split the backends out to individual systems, meaning each cinder-volume points at only one backend.
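One way to pin each backend to its own node (names below are hypothetical): give every cinder-volume host a single enabled backend with a distinct volume_backend_name, then map a volume type to each name. For example, node2's cinder.conf:

```ini
# node2: this host serves exactly one backend
[DEFAULT]
enabled_backends = lvm-node2

[lvm-node2]
volume_group = cinder-volumes
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_node2
```

Then `cinder type-create node2-storage` and `cinder type-key node2-storage set volume_backend_name=LVM_iSCSI_node2`; volumes created with that type land only on node2.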


answered 2013-04-22 13:26:55 -0500

Please share the log that contains the error; that will make it easier to pinpoint your problem.


answered 2013-04-23 05:25:02 -0500

shantha-kumar

Thanks for your response.

I have attached the scheduler log from when I try to create a volume.

FYI: the Cinder host list command lists the host, and the host is present in the MySQL DB.

###############################

tailf /var/log/cinder/cinder-scheduler.log

2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', u'_context_quota_class': None, u'_unique_id': u'367cc5b4acf74f2ba4d379f9db804b81', u'_context_read_deleted': u'no', u'args': {u'service_name': u'volume', u'host': u'cinder', u'capabilities': {u'QoS_support': False, u'volume_backend_name': u'LVM_iSCSI', u'free_capacity_gb': 549.75, u'driver_version': u'1.0', u'total_capacity_gb': 549.75, u'reserved_percentage': 0, u'vendor_name': u'Open Source', u'storage_protocol': u'iSCSI'}}, u'_context_tenant': None, u'_context_auth_token': '<sanitized>', u'_context_is_admin': True, u'version': u'1.0', u'_context_project_id': None, u'_context_timestamp': u'2013-04-23T05:21:00.558719', u'_context_user': None, u'_context_user_id': None, u'method': u'update_service_capabilities', u'_context_remote_address': None}
2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': None, 'roles': [u'admin'], 'timestamp': u'2013-04-23T05:21:00.558719', 'auth_token': '<sanitized>', 'remote_address': None, 'quota_class': None, 'is_admin': True, 'user': None, 'request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', 'project_id': None, 'read_deleted': u'no', 'tenant': None}
2013-04-24 00:20:59 DEBUG [cinder.scheduler.host_manager] Received volume service update from cinder.

2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], u'_context_request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', u'_context_quota_class': None, u'_unique_id': u'7d86b91fcab24a3c8b00a9a3ad18d0f1', u'_context_read_deleted': u'no', u'args': {u'request_spec': {u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': None, u'availability_zone': u'nova', u'attach_status': u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata': [], u'display_description': None, u'snapshot_id': None, u'user_id': u'4b106ff812b54472b8e2d3596524f730', u'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'id': u'73978248-b515-4c03-a339-5ca7979324a2', u'size': 2}, u'volume_type': {}, u'image_id': None, u'source_volid': None, u'snapshot_id': None}, u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'filter_properties': {}, u'topic': u'cinder-volume', u'image_id': None, u'snapshot_id': None}, u'_context_tenant': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_auth_token': '<sanitized>', u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_timestamp': u'2013-04-23T18:51:18.663053', u'_context_user': u'4b106ff812b54472b8e2d3596524f730', u'_context_user_id': u'4b106ff812b54472b8e2d3596524f730', u'method': u'create_volume', u'_context_remote_address': u'10.1.0.29'}
2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': u'4b106ff812b54472b8e2d3596524f730', 'roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], 'timestamp': u'2013-04-23T18:51:18.663053', 'auth_token': '<sanitized>', 'remote_address': u'10.1.0.29', 'quota_class': None, 'is_admin': True, 'user': u'4b106ff812b54472b8e2d3596524f730', 'request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', 'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', 'read_deleted': u'no', 'tenant': u'fedfaa89548e41188c2dbd6f96d0de4a'}
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.
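Two things jump out of that log. First, a capabilities update does arrive from host 'cinder', so cinder-volume is reporting in; yet seconds later the scheduler says the service is down. The scheduler counts a service as up only when its last heartbeat (the updated_at column in the services table) is newer than service_down_time (60 seconds by default), so if the nodes' clocks disagree, every service looks dead. The create_volume context above was stamped 2013-04-23T18:51:18 while the scheduler's clock read 2013-04-24 00:21:18, a gap of five and a half hours, which suggests exactly that. A quick sketch of the check, using the timestamps from your log (GNU date assumed):

```shell
# cinder marks a service down when now - last_heartbeat > service_down_time (60s default)
last_heartbeat=$(date -d '2013-04-23 18:51:18' +%s)   # what the volume node last wrote
now=$(date -d '2013-04-24 00:21:18' +%s)              # the scheduler's clock
age=$((now - last_heartbeat))
if [ "$age" -le 60 ]; then
    echo "service up (heartbeat ${age}s old)"
else
    echo "service DOWN (heartbeat ${age}s old)"       # prints: service DOWN (heartbeat 19800s old)
fi
```

Run NTP on both nodes and confirm `date` agrees between them before retrying the volume create.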

Kindly let me know if you need any more logs.



Stats


Seen: 801 times

Last updated: Aug 06 '18