Poor Ceph block device performance on VMs
I am seeing low throughput on my OpenStack VMs' block storage compared with mounting the Ceph block storage directly on the same compute host the VM is running on.
I have tested different guest OSes; all gave the same results.
Results from running dd on the VM, where /mnt/scratch is the attached Ceph volume:
[root@host-10-49-16-127 ~]# dd if=/dev/zero of=/mnt/scratch/output bs=1000k count=1k; rm -f /mnt/scratch/output;
1024+0 records in
1024+0 records out
1048576000 bytes (1.0 GB) copied, 5.60444 s, 187 MB/s
Results from running dd directly on the compute host:
root@node-18:~# dd if=/dev/zero of=/mnt/ceph-block-device/output bs=1000k count=5k; rm -f /mnt/ceph-block-device/output;
5120+0 records in
5120+0 records out
5242880000 bytes (5.2 GB) copied, 3.63551 s, 1.4 GB/s
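(As a side note, both dd runs above write zeros through the page cache; repeating them with the standard oflag=direct option, which bypasses the cache, should make the VM and host numbers more directly comparable. A sketch, using the same VM mount point as above:

dd if=/dev/zero of=/mnt/scratch/output bs=1000k count=1k oflag=direct; rm -f /mnt/scratch/output)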
Is there any reason I would be getting such a drastic difference in performance? Any ideas to help debug this would be helpful.
I have been facing the same problem. I ran the test using bonnie++ and monitored the network. My first assumption is that it is a network limitation, because link utilization hits about 600 Mbps (on a 1 Gbps card) when testing directly from the host, and less than 100 Mbps from the VM. Did you discover anything else?
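Roughly, that comparison looks like the following (a sketch, assuming bonnie++ and sysstat are installed, /mnt/scratch is the Ceph-backed mount being tested, and the file size is larger than RAM so the page cache does not hide the network bottleneck):

bonnie++ -d /mnt/scratch -s 8192 -n 0 -u root

and in a second terminal, watching per-interface throughput once per second:

sar -n DEV 1

Running the same pair of commands on the compute host against its directly mounted Ceph block device gives the host-side numbers to compare against.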