
cinder max volumes per vm

asked 2016-09-14 11:29:37 -0500 by schegi (updated 2016-09-15 03:33:32 -0500)

Hi,

Short question: what is the maximum number of attachable Cinder volumes per VM (I am currently testing LVM volumes)? When I attach volumes to a VM one after another, there seems to be a limit of 5. After the 5th volume, when I try to attach another one, it changes its state to 'attaching', the corresponding iSCSI target/ACL is created (on the cinder-volume node) and the target is discovered and logged into (on the initiator/nova-compute node), but after a certain time the volume state switches back to 'available' and there are no changes in the libvirt .xml for this VM (unlike for the previous 5 volumes). In the logs there seems to be no difference between a successful attachment (the first 5) and the unsuccessful 6th try. But maybe I have to take a closer look.
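To watch the state flip back I just poll the volume status; a minimal sketch (attach_test-16 is one of my test volumes, as seen in the debug output below):

watch -n 2 "openstack volume show attach_test-16 -c status -c attachments"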

Another observation: this behaviour only occurs for machines using the virtio-scsi driver (I need this driver for discard support); the image has the properties 'hw_scsi_model=virtio-scsi hw_disk_bus=scsi'.
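For reference, these properties can be set with the standard image client; a minimal sketch (the image name is a placeholder):

openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image-name>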

Here is the debug output of openstack server add volume virtio-scsi_test <volume> --debug: first the output for the last volume that was successfully attached, then the output for the next volume I try to attach, which in the end is not attached:

Successful attachment of volume #6

[root@jalanvaloss]# openstack server add volume virtio-scsi_test attach_test-16 --debug
START with options: ['server', 'add', 'volume', 'virtio-scsi_test', 'attach_test-16', '--debug']
options: Namespace(access_token_endpoint='', auth_type='', auth_url='http://eldenser.mgmt.tsc2.uf0.de:5000/v3', cacert='', client_id='', client_secret='', cloud='', debug=True, default_domain='default', deferred_help=False, domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='', log_file=None, os_compute_api_version='', os_identity_api_version='3', os_image_api_version='', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='', password='', profile=None, project_domain_id='', project_domain_name='', project_id='c4e13d325373451483c51bc1fabf9dad', project_name='admin', protocol='', region_name='Berlin', scope='', service_provider_endpoint='', timing=False, token='', trust_id='', url='', user_domain_id='', user_domain_name='Default', user_id='', username='admin', verbose_level=3, verify=None)
defaults: {u'auth_type': 'password', u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': u'2', 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', u'interface': None, u'network_api_version': u'2', u'image_format': u'qcow2', u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'verify': True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'container_api_version': u'1', u'dns_api_version': u'2', u'object_store_api_version': u'1', u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'timing': False, u'network_api_version': u'2', u'image_format': u'qcow2', u'image_api_version': u'2', 'verify': True, u'dns_api_version': u'2', u'object_store_api_version': u'1', 'verbose_level': 3, 'region_name': 'Berlin', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'username': 'admin', 'password': '', 'project_name': 'admin', 'user_domain_name': 'Default', 'auth_url': 'http://eldenser.mgmt.tsc2.uf0.de:5000/v3', 'project_id': 'c4e13d325373451483c51bc1fabf9dad'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', u'interface': None, 'cacert': None, u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent ...


Comments

This is definitely not a quota issue. I am able to create hundreds of volumes but can attach a maximum of only 5 to a single VM.

schegi ( 2016-09-15 02:03:32 -0500 )

2 answers


answered 2016-09-15 06:17:45 -0500 by schegi

Found a solution to my problem. https://bugzilla.redhat.com/show_bug....
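For anyone verifying this on their own deployment, a minimal sketch to check what libvirt actually sees for the instance (the instance name is a placeholder; it can be looked up with virsh list on the compute node):

virsh domblklist <instance-name>                   # lists every block device attached to the domain
virsh dumpxml <instance-name> | grep -c '<disk'    # counts the <disk> elements in the libvirt XML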

Thanks again, Stefan


Comments

But remember: when you are using the virtio-blk device, the maximum number of volumes is more or less the number of free PCI devices -> ~20 volumes.

amedeo-salvati ( 2016-09-15 10:44:35 -0500 )
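To see how many PCI devices a guest is already consuming, a minimal sketch (run inside the guest; lspci is part of pciutils):

lspci | grep -i virtio    # each virtio-blk volume appears as its own PCI device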

answered 2016-09-14 13:20:13 -0500 by Scott Machtmes (updated 2016-09-14 13:20:37 -0500)

I'm not aware of any per-VM limitations, but perhaps first look at any quotas:

cinder quota-show PROJECT_ID

cinder quota-usage PROJECT_ID

Both per-project volume-count limits and gigabyte limits are possible.
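If a quota did turn out to be the limit, it could be raised along these lines (the values are illustrative placeholders):

cinder quota-update --volumes 50 --gigabytes 1000 PROJECT_ID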
