Increasing cinder-volume capacity
Hey all,
So when I was setting up OpenStack I used packstack but left the default 20GB cinder-volumes size. It's coming back to bite me in the ass, since I can basically only have two 10GB VMs before it stops me from creating more.
Overall filesystem:
[root@openstack ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/centos_openstack-root   50G  9.6G   41G  20% /
devtmpfs                           126G     0  126G   0% /dev
tmpfs                              126G  4.0K  126G   1% /dev/shm
tmpfs                              126G  146M  126G   1% /run
tmpfs                              126G     0  126G   0% /sys/fs/cgroup
/dev/nvme0n1p2                     497M  165M  332M  34% /boot
/dev/nvme0n1p1                     200M  9.5M  191M   5% /boot/efi
/dev/loop0                         1.9G  6.1M  1.7G   1% /srv/node/swiftloopback
tmpfs                               26G     0   26G   0% /run/user/0
/dev/mapper/centos_openstack-home  5.5T   33M  5.5T   1% /home
Logical and physical volumes
[root@openstack ~]# pvs
  PV             VG               Fmt  Attr PSize  PFree
  /dev/loop1     cinder-volumes   lvm2 a--  20.60g 612.00m
  /dev/nvme0n1p3 centos_openstack lvm2 a--   1.82t  60.00m
  /dev/nvme1n1p1 centos_openstack lvm2 a--   1.82t       0
  /dev/nvme2n1p1 centos_openstack lvm2 a--   1.82t       0
[root@openstack ~]# lvs
  LV                                          VG               Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  home                                        centos_openstack -wi-ao----  5.40t
  root                                        centos_openstack -wi-ao---- 50.00g
  swap                                        centos_openstack -wi-ao----  4.00g
  volume-1c95b499-b084-4ea0-959e-7be4f2f559d8 cinder-volumes   -wi-ao---- 10.00g
  volume-57dab2d5-b65b-4f20-bcf3-8e5213442fb8 cinder-volumes   -wi-ao---- 10.00g
[root@openstack ~]# vgs
  VG               #PV #LV #SN Attr   VSize  VFree
  centos_openstack   3   3   0 wz--n-  5.46t  60.00m
  cinder-volumes     1   2   0 wz--n- 20.60g 612.00m
So my goal was to take some of the space from centos_openstack and give it to cinder-volumes so I can set up more machines.
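For anyone landing here with the same setup: since cinder-volumes sits on a loop device, the least invasive route is probably to grow the backing file itself rather than touch centos_openstack at all. A hedged sketch, assuming the packstack default backing file /var/lib/cinder/cinder-volumes behind /dev/loop1 (confirm both with `losetup -l` first). The file-growth step is demonstrated on a scratch file; the real commands are shown commented because they need root and the live system:

```shell
# Demonstrate the key step -- growing a sparse file in place -- on a scratch file:
demo=$(mktemp)
truncate -s 1G "$demo"     # sparse file: allocates no real disk space yet
truncate -s +1G "$demo"    # grow it in place, exactly as you would the real backing file
stat -c %s "$demo"         # -> 2147483648
rm -f "$demo"

# The real sequence (as root, paths/devices assumed as above) would then be:
#   truncate -s +100G /var/lib/cinder/cinder-volumes
#   losetup --set-capacity /dev/loop1   # kernel re-reads the loop device's size
#   pvresize /dev/loop1                 # LVM claims the new space on the PV
#   vgs cinder-volumes                  # VFree should now show the extra room
```

Because the file is sparse, the extra 100G is only consumed from / as volumes actually get written, so watch the free space on the root filesystem.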
Attempt 1
I started from this previously asked question: https://ask.openstack.org/en/question/53325/how-to-increase-volume-capacity/
However, on following that advice I was warned:
the guide told you to partition the loopback device. In this step, they overwrite the partition table and make the loopback device itself an LVM PV. I'm only bothering to point this out because it illustrates that the author of that guide is giving you dangerous advice because he doesn't understand what he's doing.
Anyway, I followed the steps despite the warning, and after a system reboot I got this:
[root@openstack ~(keystone_admin)]# pvs
  WARNING: Device for PV XwMJfT-bpgu-f2fi-S1qR-ECQN-n3VS-JJU6xt not found or rejected by a filter.
  PV             VG               Fmt  Attr PSize  PFree
  /dev/loop1     cinder-volumes   lvm2 a--  20.60g 10.60g
  /dev/nvme0n1p3 centos_openstack lvm2 a--   1.82t 60.00m
  /dev/nvme1n1p1 centos_openstack lvm2 a--   1.82t      0
  /dev/nvme2n1p1 centos_openstack lvm2 a--   1.82t      0
  [unknown]      cinder-volumes   lvm2 a-m   4.88t  4.88t
[root@openstack ~(keystone_admin)]# vgs
  WARNING: Device for PV XwMJfT-bpgu-f2fi-S1qR-ECQN-n3VS-JJU6xt not found or rejected by a filter.
  VG               #PV #LV #SN Attr   VSize VFree
  centos_openstack   3   3   0 wz--n- 5.46t 60.00m
  cinder-volumes     2   1   0 wz-pn- 4.90t  4.89t
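For anyone who ends up in this same state: the `[unknown]` PV causing the missing-device warning can usually just be dropped from the VG again. A hedged sketch (run as root); it is only safe here because that PV never held any extents (its PFree equals its PSize), so nothing is lost by forgetting it:

```shell
# Sketch: remove the dangling PV record left behind by the failed attempt.
vgreduce --removemissing cinder-volumes  # forget PVs whose device is gone
pvs                                      # the [unknown] entry should disappear
vgs cinder-volumes                       # back to a single-PV 20.60g VG
```

If the missing PV *had* carried extents, `vgreduce --removemissing` would refuse without `--force`, which is the point at which you stop and think rather than add the flag.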
Attempt 2
So I tried a different approach:
[root@openstack ~]# losetup -l
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE ...
The loop device is just a trick by Packstack to ensure a known storage configuration on any Packstack host. You can easily add other storage backends, be they loop-based or real disks or LUNs; see http://docs.openstack.org/admin-guide....
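As a concrete illustration of that answer, here is roughly what adding a second, bigger loop-backed VG would look like (run as root; the file path, size, and VG name are illustrative, not from this thread):

```shell
# Sketch: create a second loop-backed VG for a new cinder backend.
truncate -s 500G /var/lib/cinder/cinder-volumes2          # sparse backing file
loopdev=$(losetup -f --show /var/lib/cinder/cinder-volumes2)  # attach, print device
pvcreate "$loopdev"                                       # make it an LVM PV
vgcreate cinder-volumes2 "$loopdev"                       # new VG for the backend

# Caveat: a plain losetup is not persistent. Recreate the loop device at boot
# (e.g. via a systemd unit or /etc/rc.d/rc.local), or the VG -- and your
# volumes -- will be missing after a reboot, much like Attempt 1 above.
```

The non-persistence caveat is likely what bit Attempt 1 too: an LVM PV on a loop device that nobody recreates at boot shows up as `[unknown]` after a reboot.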
I was looking at my cinder.conf file and noticed I have this
and below it I have the [lvm] section ... Would I be able to just change this to
and then basically copy/paste the [lvm] part but put in the centos_openstack VG?
Yes, you just copy the existing [lvm] section to a new [lvm2] section in cinder.conf and use a different VG. The root VG would probably work; or use a free disk partition to create a new PV, then a VG; or use another loop file to create a new PV, then a VG.
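Putting that answer together, the cinder.conf change would look roughly like this. The `[lvm2]` section name and the `volume_backend_name` values follow the answer's convention; the `volume_driver` line is the stock packstack LVM driver, and everything else is an assumption to be checked against your existing `[lvm]` section:

```ini
[DEFAULT]
enabled_backends = lvm,lvm2

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = lvm

[lvm2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = centos_openstack
volume_backend_name = lvm2
```

After editing, restart the volume service (on an RDO/packstack box that's `systemctl restart openstack-cinder-volume`); the second backend should then appear in `cinder service-list`, and you can steer volumes to it with a volume type whose `volume_backend_name` matches.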