Poor Ceph block device performance on VMs

I am seeing much lower throughput on my OpenStack VMs' attached block storage than when I mount the same Ceph block storage directly on the compute host the VM is running on.

I have tested different guest OSes; all had the same results.
Results from running dd on the VM (/mnt/scratch is the attached Ceph volume):
[root@host-10-49-16-127 ~]# dd if=/dev/zero of=/mnt/scratch/output bs=1000k count=1k; rm -f /mnt/scratch/output;
1024+0 records in
1024+0 records out
1048576000 bytes (1.0 GB) copied, 5.60444 s, 187 MB/s

Results from running dd directly on the compute host (/mnt/ceph-block-device is the Ceph block device mounted on the host):
root@node-18:~# dd if=/dev/zero of=/mnt/ceph-block-device/output bs=1000k count=5k; rm -f /mnt/ceph-block-device/output;
5120+0 records in
5120+0 records out
5242880000 bytes (5.2 GB) copied, 3.63551 s, 1.4 GB/s
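
Note that neither of the dd runs above bypasses the page cache (no oflag=direct or conv=fdatasync), and the transfer sizes differ (1 GB vs 5 GB), so some of the gap may just be host RAM absorbing the writes. Here is the kind of direct-I/O variant I could rerun on both sides to make the comparison apples to apples (a minimal sketch, assuming the same mount points as above; the 1 GB size is arbitrary):

# Inside the VM: write 1 GiB with O_DIRECT, bypassing the guest page cache
dd if=/dev/zero of=/mnt/scratch/output bs=1M count=1024 oflag=direct; rm -f /mnt/scratch/output

# On the compute host: the same test against the directly mounted Ceph block device
dd if=/dev/zero of=/mnt/ceph-block-device/output bs=1M count=1024 oflag=direct; rm -f /mnt/ceph-block-device/output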

Is there any reason I would be getting such a drastic difference in performance? Any ideas to help debug this would be appreciated.
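
If it helps, these are the checks I can collect next (rough sketch only; the libvirt domain name and file paths are placeholders, and fio would need to be installed in the guest):

# Check what disk cache mode libvirt gave the attached volume (domain name is a placeholder)
virsh dumpxml instance-00000001 | grep -A 5 '<disk'

# Check whether RBD client caching is configured on the compute host
grep -i rbd /etc/ceph/ceph.conf

# A more controlled sequential-write benchmark inside the VM, using direct I/O
fio --name=vm-write-test --filename=/mnt/scratch/fio.dat --size=1G --bs=1M --rw=write --direct=1 --ioengine=libaio --iodepth=16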