
How big your network links need to be really depends on the scale of your compute environment, the size of your Ceph cluster, and your rate of change against that cluster, which drives the amount of replication traffic between the Ceph OSD nodes. 10GbE might work for a small cluster/compute environment, but as your environment grows you may find you are saturating your backend switches. One way to prepare for scale is to put the Ceph replication traffic on a dedicated 10GbE link with a physically separate switch from the 10GbE front-side client traffic to the OpenStack compute nodes. Also make sure your switches can support up to 40GbE if you plan on growing the environment very large.
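As a sketch, this split is done in Ceph with the `public_network` and `cluster_network` options in ceph.conf; the subnets below are placeholders for your own:

```ini
[global]
# front-side/client traffic (OpenStack compute nodes reach the OSDs here)
public_network = 10.0.0.0/24    ; example subnet, substitute your own
# dedicated replication/backfill traffic between the Ceph OSD nodes
cluster_network = 10.0.1.0/24   ; example subnet, substitute your own
```

With those set, client I/O and OSD-to-OSD replication use different NICs, so each can be cabled to its own switch.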

With regards to your question about configuring Ceph as a standby in case the local disks on a compute node become unavailable, I have not seen anything that supports this configuration. First, the local disk would somehow need to be replicated at the block level over to the Ceph cluster as the instance changed it, to keep the Ceph copy in sync with the local disk. Along with that, OpenStack would need some kind of understanding/configuration that says "when the local compute disks are down, request storage from the Ceph cluster instead." Neither of these capabilities exists today.
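The closest supported alternative, if you want instance disks to survive a compute node's local disks at all, is to back Nova's ephemeral storage with Ceph RBD directly rather than treating Ceph as a standby. A minimal nova.conf sketch; the pool name, cephx user, and secret UUID are placeholders you would substitute for your own:

```ini
[libvirt]
# store instance ephemeral disks as RBD images instead of local files
images_type = rbd
images_rbd_pool = vms                        ; example pool name
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder                            ; example cephx user
rbd_secret_uuid = <your-libvirt-secret-uuid> ; placeholder
```

The trade-off is that all instance I/O then traverses the front-side network to the Ceph cluster, which is another reason to size and separate those links as described above.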