Question 1

For example, I have created a cinder-volumes volume group with 500 GB. Does that mean this is the maximum size of volume a user is able to create, or is it the total space allotted for volume creation?

When using LVM as the storage backend, the size of the volume group(s) (by default cinder-volumes) determines the total space available for volume storage. In your example, where cinder-volumes is 500 GB in size:

  • If 5 volumes are created by different users (or the same user) with a size of 100 GB each then no space is left for new volumes.
  • If 1 volume is created by a user with a size of 500 GB then no space is left for new volumes.
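
If you want to verify how much of the 500 GB is still free before creating another volume, you can query the volume group itself on the node that hosts it. Below is a minimal sketch, not an official tool; it assumes the LVM2 utilities are installed and that the group really is named cinder-volumes:

    # Minimal sketch: report total and free space in the volume group backing
    # the LVM driver. Run this on the node that hosts the volume group.
    import subprocess

    VG_NAME = "cinder-volumes"  # change if your volume group has another name

    def vg_usage_gb(vg_name):
        # vgs reports volume group totals; --noheadings/--nosuffix yield bare numbers
        out = subprocess.check_output(
            ["vgs", "--noheadings", "--nosuffix", "--units", "g",
             "-o", "vg_size,vg_free", vg_name]
        ).decode()
        size, free = (float(field) for field in out.split())
        return size, free

    size, free = vg_usage_gb(VG_NAME)
    print("%s: %.1f GB total, %.1f GB free for new volumes" % (VG_NAME, size, free))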

Applying restrictions to how much volume storage each user or tenant can use is handled separately using quotas.
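
If you do want to cap how much volume storage a given tenant can consume, the quotas can be viewed and changed with python-cinderclient (or with the cinder quota-show / quota-update commands). The following is a rough sketch only; the credentials, auth URL, and tenant ID are placeholders, and the import path may differ depending on the client version you have installed:

    # Rough sketch using python-cinderclient; all credential values are placeholders.
    from cinderclient.v2 import client

    cinder = client.Client("admin", "ADMIN_PASSWORD", "admin",
                           "http://controller:5000/v2.0")

    TENANT_ID = "TENANT_ID_HERE"  # placeholder tenant/project ID

    # Show the tenant's current volume quotas
    quotas = cinder.quotas.get(TENANT_ID)
    print("gigabytes=%s volumes=%s" % (quotas.gigabytes, quotas.volumes))

    # Limit the tenant to 200 GB of volume storage in total
    cinder.quotas.update(TENANT_ID, gigabytes=200)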

Question 2

Where does a cinder volume consume disk space? Is it on the controller node or on the compute node?

The space is consumed on the node that hosts the volume group. In a setup with one controller node servicing multiple compute nodes this will typically be the controller. There is, however, nothing stopping you from running multiple instances of the cinder-volume service on separate storage nodes (or even on a single storage node), each with its own backend storage.
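
As a sketch of what that looks like, each cinder-volume instance is pointed at its volume group through the volume_group option in cinder.conf (the group name below is just an example; the default is cinder-volumes):

    [DEFAULT]
    # Volume group this cinder-volume instance uses for LVM-backed volumes.
    # "cinder-volumes-node2" is an example name; the default is "cinder-volumes".
    volume_group = cinder-volumes-node2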

Note that compute nodes must still have some storage available for the software itself, logs, and ephemeral storage. See http://docs.openstack.org/essex/openstack-compute/admin/content/storage-and-openstack-compute.html for more information on ephemeral storage and how it differs from volume storage.