CEPH backend and compute disk usage

asked 2016-08-10 06:14:08 -0500 by sherv

I have a Ceph backend for my Mitaka Cinder and Glance, and everything works perfectly fine. When I create a new instance, I have to select a flavor with, say, a 50GB root disk, even though I have already chosen the option to create a new disk and specified a size for it; that disk is created in the Ceph cluster. So now I have a running VM whose disk lives in the Ceph cluster, but Nova thinks I am using 50GB of storage on the compute node (in fact I am not). Eventually, when you create enough VMs, the scheduler decides that the storage space on the compute node is exhausted (it is not) and fails to create new VMs.

My question is: what is the best-practice approach to resolving this issue? Currently I am creating flavors with a 0GB root disk, but that does not seem right to me. Maybe there is an existing way to let Nova know that local storage on a compute node is not used when Ceph is the backend?
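
For reference, this is roughly how I create those flavors today (the flavor name and RAM/vCPU values are just examples); with a 0GB root disk the size effectively comes from the volume instead:

    openstack flavor create --ram 4096 --vcpus 2 --disk 0 m1.volume-backed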


2 answers


answered 2016-08-10 07:21:25 -0500

As you said, you are using Ceph as the Cinder and Glance backend, but what about Nova? In the default configuration Nova uses local storage on the compute node for ephemeral disks.

You can achieve what you want either by configuring Nova to use Ceph for ephemeral disks or by booting your instances from volumes.
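
For example, to make Nova's libvirt driver store ephemeral disks in Ceph, something along these lines goes into nova.conf on each compute node (the pool name, RBD user and secret UUID below are placeholders, adjust them to your cluster), followed by a restart of nova-compute:

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID>

Alternatively, boot from a volume so that the root disk is a Cinder volume in Ceph, e.g. (image ID and size are placeholders):

    nova boot --flavor m1.small \
      --block-device source=image,id=<IMAGE_ID>,dest=volume,size=50,shutdown=remove,bootindex=0 \
      myinstance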


answered 2016-08-10 08:59:33 -0500 by sherv

I have done as you suggested and configured Nova to use Ceph as its backend. It worked, in a way: the dashboard now says that each of my two compute nodes has 3.6TB of storage. I can see how this might cause issues in the future, because that number is obviously taken from Ceph. I have 4 OSDs of 1TB each, which, minus some overhead, results in Ceph reporting 3724GB of storage; I guess this is the number Nova gets from the Ceph monitors as its available capacity. But I also have a pool size of 2, which means every 1GB of data consumes 2GB plus overhead in Ceph, and Nova is not aware of that. The situation is eased a little by the fact that Ceph thin-provisions block devices, so a 100GB disk only takes as much storage as is really needed. Still, this can lead to a situation where there is no storage left in Ceph but Nova does not know it. So I am not sure this is the final solution.
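
One mitigation I am considering, assuming the pool size of 2 stays, is to lower Nova's disk over-commit so the scheduler stops allocating before the raw Ceph capacity is half used, and to keep an eye on real usage with ceph df (the 0.5 value is just my guess for a 2x replicated pool):

    # nova.conf on the compute nodes
    [DEFAULT]
    disk_allocation_ratio = 0.5

    # check actual cluster usage on a Ceph node
    ceph df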

