
What are the prerequisites for multinode Cinder?

asked 2013-04-05 18:49:00 -0600

shantha-kumar

I have installed an OpenStack controller + compute on a single node. Now I want to run the Cinder component alone on another node.

Kindly let me know the prerequisites for configuring the Cinder node on a separate server.


23 answers


answered 2018-08-06 11:39:15 -0600

novainfinite

You should install the OpenStack packages and NTP on your node.
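
A minimal sketch for an Ubuntu-style node (the package and service names here are assumptions; adjust for your distribution and OpenStack release):

apt-get install ntp                   # keep the clock in sync with the controller
apt-get install cinder-volume lvm2    # the volume service plus LVM tools
# Point /etc/cinder/cinder.conf at the controller's database, message queue
# and Keystone endpoints, then restart the service:
service cinder-volume restart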


answered 2013-10-21 09:21:10 -0600

Thanks Shausy!

I was having the same issue, where creating a volume on a cinder-volume host failed with a "No valid host was found" error. It turned out to be a time-sync problem between the nodes; after correcting the time difference, volume creation started working.

Also, to select a particular volume node, I used different volume backend names and created a volume type associated with the particular backend name, then created a volume with that type.

e.g:

1) cinder.conf on the volume node:

enabled_backends=lvm-host1

[lvm-host1]
volume_group=host-cinder
volume_backend_name=LVM_iSCSI_host1

2) Create a volume type: cinder type-create host-lvm

3) Associate a backend name with the type: cinder type-key host-lvm set volume_backend_name=LVM_iSCSI_host1

4) Create a volume with the specified type: cinder create --display-name sample-volume --volume-type host-lvm 10
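
To confirm the volume really landed on the intended backend, an admin can check the volume's host attribute (a sketch; with multiple backends the host is reported in host@backend form):

cinder show sample-volume | grep os-vol-host-attr:host
# expect something like: <volume-node-hostname>@lvm-host1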

Thanks, Vijesh


answered 2013-09-18 13:29:58 -0600

shantha-kumar

I believe you are able to create volumes successfully from the other node, or from the controller node where you have Cinder installed.

When you try to create one on the newly added node, you see this problem: "ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found."

"No valid host" means that although the node appears in Cinder's host list, it has not registered successfully.

Check all the settings carefully,

especially NTP.
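
A quick way to check time sync across nodes (a sketch assuming the classic ntpd tooling; adapt to whatever daemon you run):

date -u     # run on every node and compare the outputs
ntpq -p     # peer offsets should be well under a second
# If a node has drifted, resync it once against the controller
# (<controller-ip> is a placeholder):
service ntp stop && ntpdate <controller-ip> && service ntp start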


answered 2013-09-13 10:21:48 -0600

Thanks for your response. I planned to add another cinder node, "c01". On "c01" I installed the cinder-volume package and created a VG named cinder-vol.

The result of running cinder-manage host list is:

cinder-manage host list

host zone
cloud nova
c01 nova

Once the VG on "cloud" was fully used, I supposed cinder-vol on "c01" should be used instead. So I ran "cinder create --display_name multi-node-test-01 8", but:

cinder list

+--------------------------------------+--------+--------------------+------+-------------+----------+-------------+
| ID                                   | Status | Display Name       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------------+------+-------------+----------+-------------+
| 7ac33606-7030-4786-9b09-0eb90d781d3a | error  | multi-node-test-01 | 8    | None        | false    |             |
+--------------------------------------+--------+--------------------+------+-------------+----------+-------------+

cinder-scheduler.log:

2013-09-13 18:13:12 ERROR [cinder.scheduler.filters.capacity_filter] Free capacity not set: volume node info collection broken.
2013-09-13 18:13:12 WARNING [cinder.scheduler.filters.capacity_filter] Insufficient free space for volume creation (requested / avail): 8/7.0
2013-09-13 18:13:12 ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.

At the same time I can see this on "c01":

c01:~# pvscan
  PV /dev/loop2   VG cinder-vol   lvm2 [10.00 GiB / 10.00 GiB free]
  Total: 1 [10.00 GiB] / in use: 1 [10.00 GiB] / in no VG: 0 [0   ]

So why can't the VG on "c01" be used by Cinder?

Thanks again.


answered 2013-09-13 09:07:04 -0600

shantha-kumar

Can you please share the logs and tell me exactly what problem you are seeing now?


answered 2013-04-26 18:39:53 -0600

shantha-kumar

Thanks for your response.

I am not familiar with using the above-mentioned formula; kindly advise me on how to proceed further.


answered 2013-04-26 02:45:01 -0600

| created_at          | updated_at          | deleted_at | deleted | id | host   | binary           | topic            | report_count | disabled | availability_zone |
| 2013-04-08 22:48:21 | 2013-04-25 18:45:44 | NULL       | 0       | 1  | grzrc3 | cinder-scheduler | cinder-scheduler | 144940       | 0        | nova              |
| 2013-04-18 10:14:32 | 2013-04-24 09:49:08 | NULL       | 0       | 6  | cinder | cinder-volume    | cinder-volume    | 43577        | 0        | nova              |

I see that the updated_at field of cinder-volume has not been updated recently, so the Cinder scheduler marked the service as down.

Here is the formula it uses to check whether a service is down:

last_heartbeat = service['updated_at'] or service['created_at']
# Timestamps in DB are UTC.
elapsed = total_seconds(timeutils.utcnow() - last_heartbeat)
return abs(elapsed) <= FLAGS.service_down_time

FLAGS.service_down_time defaults to 60 seconds.
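
So a cinder-volume whose updated_at is more than 60 seconds old is treated as down. One way to watch the heartbeat age (a sketch; substitute your own MySQL credentials):

mysql -u root -p cinder -e 'select `binary`, host, updated_at, timestampdiff(second, updated_at, utc_timestamp()) as age_sec from services;'
# age_sec should stay well below service_down_time (60s) for a healthy service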

Hope this helps you pinpoint the problem.


answered 2013-04-25 05:19:50 -0600

shantha-kumar

Thanks for your response.

The cinder-volume service is running on the node (CINDER).

Please find below the services table from the cinder DB, which shows the cinder-volume service:

root@grzrc3:~# mysql -uroot -piso*help cinder -e 'select * from services;'
+---------------------+---------------------+------------+---------+----+--------+------------------+------------------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host   | binary           | topic            | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+--------+------------------+------------------+--------------+----------+-------------------+
| 2013-04-08 22:48:21 | 2013-04-25 18:45:44 | NULL       | 0       |  1 | grzrc3 | cinder-scheduler | cinder-scheduler |       144940 | 0        | nova              |
| 2013-04-18 10:14:32 | 2013-04-24 09:49:08 | NULL       | 0       |  6 | cinder | cinder-volume    | cinder-volume    |        43577 | 0        | nova              |
+---------------------+---------------------+------------+---------+----+--------+------------------+------------------+--------------+----------+-------------------+

root@grzrc3:~# cinder-manage host list
host     zone
cinder   nova

Do any configuration changes need to be made?


answered 2013-04-25 02:27:12 -0600

From your log, I see that the cinder-volume service is down or disabled.

Take a look at the services table in the cinder DB to make sure that cinder-volume is present and has not been disabled.
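
For example (a sketch; substitute your own DB credentials):

mysql -u root -p cinder -e 'select host, `binary`, disabled, updated_at from services;'
# the cinder-volume row should exist, disabled should be 0,
# and updated_at should be no more than about a minute old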


answered 2013-04-23 05:25:02 -0600

shantha-kumar

Thanks for your response.

I have attached the scheduler log from when I was trying to create the volume.

FYI: the Cinder host list command shows the host, and the host has been added in the MySQL DB.

###############################

tailf /var/log/cinder/cinder-scheduler.log

2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', u'_context_quota_class': None, u'_unique_id': u'367cc5b4acf74f2ba4d379f9db804b81', u'_context_read_deleted': u'no', u'args': {u'service_name': u'volume', u'host': u'cinder', u'capabilities': {u'QoS_support': False, u'volume_backend_name': u'LVM_iSCSI', u'free_capacity_gb': 549.75, u'driver_version': u'1.0', u'total_capacity_gb': 549.75, u'reserved_percentage': 0, u'vendor_name': u'Open Source', u'storage_protocol': u'iSCSI'}}, u'_context_tenant': None, u'_context_auth_token': '<sanitized>', u'_context_is_admin': True, u'version': u'1.0', u'_context_project_id': None, u'_context_timestamp': u'2013-04-23T05:21:00.558719', u'_context_user': None, u'_context_user_id': None, u'method': u'update_service_capabilities', u'_context_remote_address': None}
2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': None, 'roles': [u'admin'], 'timestamp': u'2013-04-23T05:21:00.558719', 'auth_token': '<sanitized>', 'remote_address': None, 'quota_class': None, 'is_admin': True, 'user': None, 'request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', 'project_id': None, 'read_deleted': u'no', 'tenant': None}
2013-04-24 00:20:59 DEBUG [cinder.scheduler.host_manager] Received volume service update from cinder.

2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], u'_context_request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', u'_context_quota_class': None, u'_unique_id': u'7d86b91fcab24a3c8b00a9a3ad18d0f1', u'_context_read_deleted': u'no', u'args': {u'request_spec': {u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': None, u'availability_zone': u'nova', u'attach_status': u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata': [], u'display_description': None, u'snapshot_id': None, u'user_id': u'4b106ff812b54472b8e2d3596524f730', u'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'id': u'73978248-b515-4c03-a339-5ca7979324a2', u'size': 2}, u'volume_type': {}, u'image_id': None, u'source_volid': None, u'snapshot_id': None}, u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'filter_properties': {}, u'topic': u'cinder-volume', u'image_id': None, u'snapshot_id': None}, u'_context_tenant': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_auth_token': '<sanitized>', u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_timestamp': u'2013-04-23T18:51:18.663053', u'_context_user': u'4b106ff812b54472b8e2d3596524f730', u'_context_user_id': u'4b106ff812b54472b8e2d3596524f730', u'method': u'create_volume', u'_context_remote_address': u'10.1.0.29'}
2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': u'4b106ff812b54472b8e2d3596524f730', 'roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], 'timestamp': u'2013-04-23T18:51:18.663053', 'auth_token': '<sanitized>', 'remote_address': u'10.1.0.29', 'quota_class': None, 'is_admin': True, 'user': u'4b106ff812b54472b8e2d3596524f730', 'request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', 'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', 'read_deleted': u'no', 'tenant': u'fedfaa89548e41188c2dbd6f96d0de4a'}
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.

Kindly let me know if you need any more logs.
