clarifications about storage and backends

Hi,

I managed to set up an OpenStack environment on 7 nodes: 2 controllers, 3 compute, 1 Cinder storage, and 1 Swift node. I'm a little confused about the storage types. I understand Swift is an object storage node and is used as the backend for Glance image storage. The Cinder node provides block storage as volumes for the VMs, and the storage on the compute nodes is used as root disks for spawning the VMs. On the dashboard, under Hypervisors, I can see the available space on each compute node, i.e. the free space under the /root directory. When I create a new instance, the VM starts consuming space on the compute node according to the flavor I choose, and after terminating the instance the space is freed up (I guess this is the ephemeral storage, although under details it says it's root storage).
The cinder-volume node sits on a software RAID 10 array with LVM on top. Recently I started reading about Ceph and learned that it can be used as a backend for Cinder, Glance, and Swift. I'm happy with Swift the way it is, but are there any advantages to using Ceph instead of LVM? Does Ceph have any kind of fault tolerance of its own, or should it sit on a RAID array as well?
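From what I've read so far, switching the Cinder backend would come down to something like the following in cinder.conf. This is only a sketch of how I understand it; the backend section names, the volume group name (cinder-volumes), the pool name (volumes), and the cephx user (cinder) are assumptions, not values from my deployment:

```ini
# /etc/cinder/cinder.conf -- sketch; section, VG, pool, and user names are assumed
[DEFAULT]
enabled_backends = lvm          # would become: ceph

# What I have now: LVM sitting on the software RAID 10 array
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes   # assumed VG name

# The Ceph alternative: replication is done by Ceph itself
# (replicated pools, typically size 3), so no RAID underneath
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes              # assumed pool name
rbd_user = cinder               # assumed cephx user
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt-secret-uuid>
```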
As for the hypervisor storage, is there any way of making it a shared filesystem so I can take advantage of live migration? I guess it wouldn't be as simple as putting the /root directory on the same LV on each compute node? Can I use Ceph for this?
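If I understand the docs correctly, pointing the hypervisors' ephemeral storage at Ceph would mean configuring the libvirt image backend on each compute node roughly like this, so the disks live in a shared RBD pool and live migration doesn't need to copy them. Again a sketch; the pool name (vms) and cephx user are assumptions:

```ini
# /etc/nova/nova.conf on each compute node -- sketch; pool and user names are assumed
[libvirt]
images_type = rbd               # store instance root/ephemeral disks in Ceph RBD
images_rbd_pool = vms           # assumed pool name
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder               # assumed cephx user
rbd_secret_uuid = <libvirt-secret-uuid>
```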

I hope I made myself clear!
