
Network performance VM to VM

asked 2017-11-30 02:25:58 -0600

masber

updated 2017-12-04 22:58:38 -0600

Hi,

I am running a test OpenStack environment with 2 compute nodes, each of them with an MTU of 9000.

Compute nodes:

  • hercules-21 (10.0.32.21): 64cpus, 512GB RAM and 2x 25Gbps bond network
  • hercules-22 (10.0.32.22): 64cpus, 512GB RAM and 2x 25Gbps bond network

VMs:

  • centos (192.168.1.110): 8 vcpus, 16GB RAM
  • centos2 (192.168.1.109): 8 vcpus, 16GB RAM

Network bandwidth test, physical host to physical host, using iperf:

[root@hercules-21 ~]# iperf -c 10.0.32.22 -P 4
------------------------------------------------------------
Client connecting to 10.0.32.22, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  5] local 10.0.32.21 port 59014 connected with 10.0.32.22 port 5001
[  3] local 10.0.32.21 port 59008 connected with 10.0.32.22 port 5001
[  4] local 10.0.32.21 port 59010 connected with 10.0.32.22 port 5001
[  6] local 10.0.32.21 port 59012 connected with 10.0.32.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  6.91 GBytes  5.94 Gbits/sec
[  3]  0.0-10.0 sec  6.97 GBytes  5.98 Gbits/sec
[  4]  0.0-10.0 sec  6.96 GBytes  5.98 Gbits/sec
[  6]  0.0-10.0 sec  6.77 GBytes  5.82 Gbits/sec
[SUM]  0.0-10.0 sec  27.6 GBytes  23.7 Gbits/sec
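For completeness, the receiving side is just plain iperf running in server mode on the other host; a minimal sketch, assuming the test was run against hercules-22:

[root@hercules-22 ~]# iperf -s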

Network bandwidth test, VM to VM, using iperf (each VM is running on a different host):

[centos@centos2 ~]$ iperf -c 192.168.1.110 -P 4
------------------------------------------------------------
Client connecting to 192.168.1.110, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  6] local 192.168.1.109 port 60244 connected with 192.168.1.110 port 5001
[  3] local 192.168.1.109 port 60238 connected with 192.168.1.110 port 5001
[  4] local 192.168.1.109 port 60240 connected with 192.168.1.110 port 5001
[  5] local 192.168.1.109 port 60242 connected with 192.168.1.110 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.11 GBytes  1.81 Gbits/sec
[  4]  0.0-10.0 sec  2.12 GBytes  1.82 Gbits/sec
[  5]  0.0-10.0 sec  2.10 GBytes  1.80 Gbits/sec
[  6]  0.0-10.0 sec  2.13 GBytes  1.83 Gbits/sec
[SUM]  0.0-10.0 sec  8.45 GBytes  7.25 Gbits/sec

I am using jumbo frames on the physical machines, so I did the same on OpenStack.
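For reference, this is roughly how the larger MTU can be carried through to Neutron; a sketch assuming a standard ML2/VXLAN setup, with <tenant-network> as a placeholder name:

# /etc/neutron/neutron.conf - MTU of the underlying physical network
[DEFAULT]
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini - cap for tunnelled (VXLAN) networks
[ml2]
path_mtu = 9000

# VXLAN encapsulation adds roughly 50 bytes, so the tenant network MTU
# ends up around 8950; it can also be set per network:
# openstack network set --mtu 8950 <tenant-network>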

MTU on physical host:

[root@hercules-21 ~]# ip a
...
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP qlen 1000
    link/ether 7c:fe:90:12:23:ec brd ff:ff:ff:ff:ff:ff
    inet 10.0.32.21/16 brd 10.0.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::b1b0:74dd:8a3:705e/64 scope link
       valid_lft forever preferred_lft forever
...

MTU on VM:

[centos@centos ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc ...
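One way to double-check that the tenant network actually advertises the larger MTU to the guests; a sketch, with <tenant-network> and eth0 as placeholders:

# on the controller: MTU recorded on the Neutron network
openstack network show <tenant-network> -c mtu

# inside the guest: MTU actually configured on the interface
[centos@centos ~]$ ip link show eth0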

2 answers


answered 2017-12-05 01:36:52 -0600

Moss

To make sure iperf is not misbehaving, you could compare the results with nuttcp via VMTP.

Here are my notes on setting this up quickly: openstack_network_benchmark_vmtp

Cheers!
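For reference, a typical VMTP run looks roughly like this; the openrc file, password and results file are placeholders, not taken from the linked notes:

pip install vmtp
vmtp -r admin-openrc.sh -p <password> --json results.json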


Comments

Hi @Moss, thank you for this. I followed your instructions, but VMTP gets stuck during the process: http://paste.openstack.org/raw/628566/ Any idea?

masber (2017-12-10 18:45:28 -0600)

It looks like you can't reach tcp/22 from the box where you installed VMTP. Make sure tcp/22 is open in Access & Security for the project you are using for the benchmark, for example with a rule like the one sketched below.

Moss (2017-12-11 03:27:45 -0600)
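A minimal sketch of such a rule, assuming the benchmark instances use the default security group:

openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default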

answered 2017-12-04 18:35:29 -0600

SamYaple

Make sure you have VXLAN offloading configured on your NIC. Without offloading, those are about the maximum speeds I would expect. With offloading you should be able to get closer to 8 Gbit/s in the instance.

I use offloading on my 10 Gb NICs and get close to 8 Gbit/s.


Comments

Hi @SamYaple, I am far from an expert in this type of configuration, but it looks like VXLAN offloading is set up:

[root@hercules-21 ~]# ethtool -k bond0 | grep udp_tnl
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
masber (2017-12-10 18:48:42 -0600)
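Worth noting that the offload flags reported for the bond can differ from those of the underlying physical ports, so it may be worth checking the slave NICs as well; a sketch with a placeholder interface name:

[root@hercules-21 ~]# ethtool -k <slave-nic> | grep udp_tnl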
