cinder block slow performance [closed]

asked 2018-05-07 04:14:15 -0600

AB239

updated 2018-05-08 02:14:53 -0600

Hello all,

Setup information:

Openstack Version: Newton

1 KVM Controller on Server 1

1 KVM Cinder VM on Server 2. Server 2 has a 120 GB SSD attached as a block device (no file system has been created on the SSD).

1 Compute node, Server 3

I have a pass-through configuration for the SSD on the Cinder VM, and I can see the drive mounted as /dev/vda. I have created an OpenStack instance on the compute node, let's call it VM-1. A Cinder volume is attached to it and mounted as /dev/vdb on VM-1.

Issue: I am seeing very low disk read/write numbers for the mounted volume on VM-1.

Troubleshooting and hdparm numbers so far:

1) SERVER 2(host for Cinder VM):

hdparm -Tt /dev/sda (the SSD device)

Timing cached reads:   14808 MB in 2.00 seconds = 7411.80 MB/sec
Timing buffered disk reads: 1208 MB in 3.00 seconds = 402.03 MB/sec

2) CINDER VM (on Server 2):

hdparm -Tt /dev/vda

Timing cached reads:   14136 MB in 1.98 seconds = 7125.59 MB/sec
Timing buffered disk reads: 1196 MB in 3.00 seconds = 398.15 MB/sec

3) VM-1 (on Server 3):

hdparm -t --direct /dev/vdb

Timing O_DIRECT disk reads: 208 MB in 3.01 seconds = 69.10 MB/sec

You can see there is a huge difference in disk reads between #2 and #3. Both servers (Server 2 and Server 3) are connected by a 1000 Mbps link. But I found something strange when I checked the network interfaces for VM-1 on the compute node (Server 3).

ethtool output for eth0 (physical interface on Server 3):

            Advertised pause frame use: Symmetric
            Advertised auto-negotiation: Yes
            Speed: 1000Mb/s
            Duplex: Full
            Port: Twisted Pair

ethtool output for the TAP interface attached to VM-1:

            Supports auto-negotiation: No
            Advertised link modes:  Not reported
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 10Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: internal
            Auto-negotiation: off

The interface speeds show a huge difference here as well. Please suggest whether there is any way I can change this speed on the TAP interface. Is there a configuration file or anything where I can specify this?
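For what it's worth, the 10Mb/s that ethtool reports on a virtio TAP device is typically a nominal placeholder rather than a real limit, so an actual transfer test is more telling than the advertised link mode. A minimal sketch, assuming iperf3 is installed on both hosts and `cinder-host` is a placeholder for the Cinder host's address:

```shell
# The reported TAP speed is cosmetic; verify with a real transfer instead.
# On the Cinder host (placeholder name), start a listener:
#   iperf3 -s
# On the compute host, run a 10-second test against it:
#   iperf3 -c cinder-host -t 10

# Even the 69.10 MB/sec observed with hdparm already far exceeds the
# reported 10 Mbit/s, which shows the reported figure is not a real cap:
echo "$((69 * 8)) Mbit/s observed vs 10 Mbit/s reported"
```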

Looking forward to expert advice.

EDIT 1: Corrected VM-1's device name for Cinder Volume.



Closed for the following reason: the question is answered, right answer was accepted by AB239 (close date 2018-05-14 06:44:21 -0600)

1 answer


answered 2018-05-07 08:32:31 -0600

updated 2018-05-07 09:01:59 -0600

Hardly expert advice, but let me try.

First, the data between VM-1 and the volume doesn’t travel through the VM’s TAP interface, but through the virtio disk driver.

You don’t mention the Cinder backend, but let’s assume an iSCSI connection between compute host and Cinder host. If so, the data continues through the host’s operating system, then via the gigabit network to the host that runs the Cinder VM, through that host’s operating system, then through the VM’s operating system and another virtio driver to the SSD. And vice-versa.

The gigabit network has a theoretical maximum throughput of 125 MB/sec. You get about half of that, which I think is not bad at all. The main bottleneck is the network, I would think.
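The back-of-the-envelope arithmetic behind that estimate can be checked directly (ignoring protocol overhead, which would lower the real ceiling further):

```shell
# 1000 Mbit/s divided by 8 bits per byte gives the theoretical maximum in MB/s.
link_mbps=1000
max_mbs=$((link_mbps / 8))
echo "theoretical max: ${max_mbs} MB/sec"        # 125 MB/sec

# Observed throughput as a percentage of that theoretical maximum.
observed_mbs=69
echo "utilisation: $((observed_mbs * 100 / max_mbs))%"   # 55%
```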

EDIT: I just noticed that you haven't provided the Cinder volume throughput. The 69 MB/sec figure is for VM-1's root disk.
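To measure the Cinder volume itself rather than the root disk, a direct sequential-read test with fio is generally more reliable than hdparm. A sketch, assuming the volume is attached as /dev/vdb inside VM-1 (adjust the device name to your setup); the job only reads and does not modify the device:

```shell
# Write a small fio job file for a direct sequential read of the volume.
cat > seqread.fio <<'EOF'
[seqread]
filename=/dev/vdb
rw=read
direct=1
bs=1M
ioengine=libaio
runtime=30
time_based
EOF

# Run it only if fio is available and the device actually exists.
if command -v fio >/dev/null 2>&1 && [ -b /dev/vdb ]; then
    fio seqread.fio
else
    echo "fio not installed or /dev/vdb not present; job file written only"
fi
```

Comparing the resulting bandwidth against the ~125 MB/sec network ceiling shows whether the network or the storage path is the bottleneck.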



Just updated the question with the correct device name on VM-1.

AB239 (2018-05-08 02:15:21 -0600)

Well, my comment doesn't change: The network allows a theoretical throughput of 125MB/sec, and you get a bit more than half that throughput. Not bad at all, and definitely not surprising. If you want more, upgrade the network.

Bernd Bausch (2018-05-08 07:30:09 -0600)

Thanks. I am planning to move the network between the Cinder and compute nodes to 10 Gbps. Can you please guide me about the configuration changes that will be needed in my environment?

AB239 (2018-05-14 02:05:42 -0600)

I am not a performance specialist. In any case, to tell you what changes are required, one would need to know the current environment. All I know is that you run Cinder in a VM and that you have a 1 Gbit/s network connection. The VM is unlikely to be a problem, but I don't know your Cinder backend.

Bernd Bausch (2018-05-14 02:30:22 -0600)

The Cinder backend is iSCSI. I have explained the environment in my original query.

AB239 (2018-05-14 04:26:20 -0600)

Seen: 991 times