Not getting full bandwidth VXLAN DVR

Hello,

In our OpenStack environment (Newton), all nodes use a 10G network. We use OVS bridging with VXLAN tunneling and DVR. We have enabled jumbo frames on the NICs and on the physical switches, and VXLAN offloading on the NICs. irqbalance is running, which is supposed to distribute the network IRQs across all CPU cores. Unfortunately, we get less than 1 Gbit/s when communicating with our VMs via their floating IPs from the compute hosts. We tested with iperf; the results are below:

Connecting to host 10.96.48.29, port 5201
[  4] local 10.96.32.16 port 48464 connected to 10.96.48.29 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   105 MBytes   878 Mbits/sec   14   1.14 MBytes       
[  4]   1.00-2.00   sec   102 MBytes   860 Mbits/sec    0   1.25 MBytes       
[  4]   2.00-3.00   sec   102 MBytes   860 Mbits/sec    0   1.34 MBytes       
[  4]   3.00-4.00   sec   104 MBytes   870 Mbits/sec    0   1.40 MBytes       
[  4]   4.00-5.00   sec   104 MBytes   870 Mbits/sec    0   1.45 MBytes       
[  4]   5.00-6.00   sec   104 MBytes   870 Mbits/sec    3   1.07 MBytes       
[  4]   6.00-7.00   sec   104 MBytes   870 Mbits/sec    0   1.14 MBytes       
[  4]   7.00-8.00   sec   104 MBytes   870 Mbits/sec    0   1.20 MBytes       
[  4]   8.00-9.00   sec   104 MBytes   870 Mbits/sec    0   1.26 MBytes       
[  4]   9.00-10.00  sec   104 MBytes   870 Mbits/sec    0   1.32 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.01 GBytes   869 Mbits/sec   17             sender
[  4]   0.00-10.00  sec  1.01 GBytes   867 Mbits/sec                  receiver

Host-to-host communication gives about 9.5 Gbit/s. I have not configured jumbo frames in Neutron, because the applications running inside the VMs use only a 1500-byte MTU. VM to VM (using internal IPs, not floating IPs) I get around 2.5 Gbit/s. The slowdown only occurs when I test through a floating IP. Any ideas to solve this issue are much appreciated. Thanks.
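For reference, this is roughly how the MTU and offload settings described above can be verified on a host. The interface name is a placeholder (`lo` here so the snippet runs anywhere); substitute the actual 10G underlay NIC, e.g. `eth0`:

```shell
# Placeholder interface; replace with the host's 10G tunnel NIC (e.g. eth0)
IFACE=lo

# MTU check via sysfs: with jumbo frames enabled, a real underlay NIC
# should report 9000 here (loopback reports its own large default)
cat /sys/class/net/$IFACE/mtu

# Offload check, if ethtool is installed: tx-udp_tnl-segmentation is the
# feature relevant to VXLAN; it must be "on" for hardware VXLAN offload
command -v ethtool >/dev/null && ethtool -k "$IFACE" | grep -E 'udp_tnl|tso|gro' || true
```

On a DVR setup it may also be worth running the same checks inside the `fip-` and `qrouter-` namespaces on the compute node (`ip netns exec <ns> ...`), since floating-IP traffic traverses them.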