Ceph backend and compute disk usage
I have a Ceph backend for my Mitaka cinder and glance, and everything works perfectly fine. When I create a new instance, I have to select a flavor with, let's say, a 50 GB root disk. But earlier in the wizard I already selected the option to create a new disk and specified a size for it, so that disk is created in the Ceph cluster. Now I have a running VM whose disk lives in the Ceph cluster, but nova thinks I am using 50 GB of storage on the compute node (in fact I am not). Eventually, once enough VMs are created, the scheduler decides that storage space on the compute node is exhausted (it is not) and fails to create a new VM.
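For reference, the boot-from-volume workflow I'm describing looks roughly like this with the Mitaka-era CLIs (the image ID, volume name, and flavor are placeholders, not my real values):

```shell
# Create a 50 GB bootable volume from a glance image; with a Ceph (RBD)
# cinder backend this volume lives entirely in the Ceph cluster.
cinder create --image-id <glance-image-id> --name vm1-root 50

# Boot an instance from that volume; no local root disk is written on the
# compute node, yet nova still accounts the flavor's 50 GB root disk.
nova boot --flavor m1.medium --boot-volume <volume-id> vm1
```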
My question is: what is the best-practice approach to resolving this? Currently I'm creating flavors with a 0 GB root disk, but this doesn't seem right to me. Maybe there's an existing way to tell nova that local storage on a compute node is not consumed when the disks live in Ceph?
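My current workaround, for concreteness, is flavors like the following (the flavor name is just an example), relying on the 0 GB root disk to keep the scheduler's disk accounting at zero:

```shell
# nova flavor-create <name> <id> <ram-MB> <root-disk-GB> <vcpus>
# Root disk set to 0 so the scheduler does not reserve local disk space;
# the actual root disk size comes from the cinder volume in Ceph.
nova flavor-create m1.ceph.medium auto 4096 0 2
```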