Quantum (Neutron) GRE slow performance [closed]
I have a two-node Grizzly installation on CentOS 6.4:

tn0 (192.168.250.20) - all OpenStack services
tn1 (192.168.250.21) - quantum-openvswitch-agent, compute

My problem is that iperf between VMs on tn1 and VMs on tn0 (or any machine on my subnet, 192.168.250.0/23) shows only 140 Kb/s.
iperf between VMs on tn0 and hosts on 192.168.250.0/23 shows 900 Mb/s.
Can someone explain why performance is so slow? Is it an MTU problem or something else? Here are my configs (a quick MTU check is sketched after them):
[root@tn0 ~]# quantum agent-list
+--------------------------------------+--------------------+-----------------+-------+----------------+
| id                                   | agent_type         | host            | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------+-------+----------------+
| 55225703-f6d8-4a15-8632-83c0dd45ff2c | DHCP agent         | tn1.stu.neva.ru | xxx   | True           |
| 9ffbe892-42cd-409a-a6b2-199946c4def0 | DHCP agent         | tn0.stu.neva.ru | :-)   | True           |
| b261ed57-441e-4489-9fb7-a66c7e410126 | Open vSwitch agent | tn0.stu.neva.ru | :-)   | True           |
| c0d20a83-a5b9-4b39-b063-752a664c3511 | Open vSwitch agent | tn1.stu.neva.ru | :-)   | True           |
| d5dd7b83-10d1-422a-a30a-231368d26e96 | L3 agent           | tn0.stu.neva.ru | :-)   | True           |
+--------------------------------------+--------------------+-----------------+-------+----------------+
[root@tn0 ~]# ifconfig
br-ex     Link encap:Ethernet  HWaddr 00:1E:67:04:28:0B
          inet addr:192.168.250.20  Bcast:192.168.251.255  Mask:255.255.254.0
          inet6 addr: fe80::5c64:77ff:fe27:3260/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:1383023 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1177310 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23644682530 (22.0 GiB)  TX bytes:313492896 (298.9 MiB)

br-int    Link encap:Ethernet  HWaddr C2:28:DC:EE:AB:47
          inet6 addr: fe80::641b:4dff:fe87:aec3/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18442 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:804564 (785.7 KiB)  TX bytes:468 (468.0 b)

br-tun    Link encap:Ethernet  HWaddr F6:5D:52:AA:3D:41
          inet6 addr: fe80::a458:cfff:fec3:1dee/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8180 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:353164 (344.8 KiB)  TX bytes:468 (468.0 b)

eth0      Link encap:Ethernet  HWaddr 00:1E:67:04:28:0B
          inet6 addr: fe80::21e:67ff:fe04:280b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1546  Metric:1
          RX packets:1055723 errors:49 dropped:0 overruns:0 frame:49
          TX packets:880976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:329252619 (313.9 MiB)  TX bytes:294162404 (280.5 MiB)
          Interrupt:16 Memory:b1a00000-b1a20000

eth1      Link encap:Ethernet  HWaddr 00:1E:67:04:28:0A
          inet addr:192.168.250.22  Bcast:192.168.251.255  Mask:255.255.254.0
          inet6 addr: fe80::21e:67ff:fe04:280a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1546  Metric:1
          RX packets:12007 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes ...
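
A quick way to test the MTU hypothesis is to ping with the don't-fragment bit set from inside a VM on tn1 toward a VM on tn0 (the guest address 10.0.0.5 below is just a placeholder):

# 1472-byte payload + 28 bytes of IP/ICMP headers = a full 1500-byte packet;
# GRE encapsulation adds roughly 24-42 bytes, so this should fail if MTU is the issue
ping -M do -s 1472 -c 3 10.0.0.5
# the same probe sized for a 1400-byte MTU should still get through
ping -M do -s 1372 -c 3 10.0.0.5

If the full-size probe fails while the smaller one succeeds, the GRE tunnel is dropping full-size frames.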
You might be experiencing the same issue discussed in this mailing-list thread: http://lists.openstack.org/pipermail/openstack/2013-October/001778.html. It appears to be a new problem with no known solution at the time of writing.
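
For what it's worth, if GRE encapsulation overhead does turn out to be the cause, the workaround usually suggested for Grizzly is to have the DHCP agent advertise a smaller MTU to guests via DHCP option 26. A minimal sketch, assuming a stock RDO/CentOS layout (the 1400 value and the dnsmasq-quantum.conf path are illustrative, not from the thread):

# /etc/quantum/dhcp_agent.ini -- point the agent at an extra dnsmasq config file
dnsmasq_config_file = /etc/quantum/dnsmasq-quantum.conf

# /etc/quantum/dnsmasq-quantum.conf -- push a 1400-byte MTU to guests (DHCP option 26)
dhcp-option-force=26,1400

Then restart the agent (service quantum-dhcp-agent restart) and renew the DHCP leases inside the guests so they pick up the new MTU.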