
Poor Ceph block device performance on VMs

asked 2017-03-14 11:02:07 -0600

I am seeing low throughput on my OpenStack VMs' block storage compared with mounting Ceph block storage directly on the same compute host that the VM runs on.

I have tested different guest OSes; all had the same results.
Results from running dd on the VM, where /mnt/scratch is the attached Ceph volume:
[root@host-10-49-16-127 ~]# dd if=/dev/zero of=/mnt/scratch/output bs=1000k count=1k; rm -f /mnt/scratch/output;
1024+0 records in
1024+0 records out
1048576000 bytes (1.0 GB) copied, 5.60444 s, 187 MB/s

Results from running dd directly on the compute host:
root@node-18:~# dd if=/dev/zero of=/mnt/ceph-block-device/output bs=1000k count=5k; rm -f /mnt/ceph-block-device/output;
5120+0 records in
5120+0 records out
5242880000 bytes (5.2 GB) copied, 3.63551 s, 1.4 GB/s
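
(A hedged aside on the test itself: dd writing /dev/zero without O_DIRECT or a final sync largely measures the page cache, so both figures, and especially the host one, may be inflated. A variant that bypasses the cache could look like the sketch below; oflag=direct is a standard GNU dd flag, and the path is just the mount point used above.)

dd if=/dev/zero of=/mnt/scratch/output bs=1M count=1024 oflag=direct; rm -f /mnt/scratch/output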

Is there any reason I would be getting such a drastic difference in performance? Any ideas to help debug this would be helpful.


Comments

I have been facing the same problem. I ran the test using bonnie++ and monitored the network. My first assumption is a network limitation, since link utilization hits 600 Mbps (on a 1 Gbps card) directly from the host and less than 100 Mbps from the VM. Did you discover anything else?

sauloaugustosilva ( 2018-03-05 08:12:25 -0600 )
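
(One way to test the network-limitation hypothesis raised in this comment, sketched under the assumption that iperf3 is available on both ends and that <ceph-node> stands for one of the storage nodes:)

# on a Ceph node
iperf3 -s
# from the VM, then from the compute host, and compare the two results
iperf3 -c <ceph-node> -t 30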

1 answer


answered 2018-03-07 06:52:22 -0600

sauloaugustosilva

I solved my problem with the following configuration in the ceph.conf file:

[client]
rbd cache = true
rbd cache size = 268435456
rbd cache max dirty = 0
rbd cache target dirty = 134217728
rbd cache max dirty age = 2
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
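
(A hedged note, not part of the original answer: with an admin socket and log file configured under [client], the referenced directories must exist and be writable by the user QEMU/libvirt runs as, and the cache setting can then be checked through the socket of a running guest. The ownership and socket file name below are examples and will differ per distribution.)

mkdir -p /var/run/ceph/guests /var/log/qemu
chown qemu:qemu /var/run/ceph/guests /var/log/qemu    # example ownership, adjust to your QEMU user
ceph --admin-daemon /var/run/ceph/guests/<asok-file> config show | grep rbd_cache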

and

public network = 10.10.18.0/24
cluster network = 10.10.19.0/24

in the [global] section.
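
(To confirm a running daemon actually picked up the split networks, something like the following should work; osd.0 is just an example daemon name, and the command has to run on the node hosting it.)

ceph daemon osd.0 config get public_network
ceph daemon osd.0 config get cluster_network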

The most valuable change, though, was adding a dedicated network interface to each Ceph node; my Ceph nodes now have three interfaces each.



Stats

Asked: 2017-03-14 11:02:07 -0600

Seen: 770 times

Last updated: Mar 07 '18