
Neutron GRE with MTU 1454 causes very low Windows network speed

asked 2014-03-31 21:40:55 -0500

jiangwt100

It seems the Windows image can't get the MTU info from dnsmasq, so every time I start a machine I need to change the MTU by hand. Is there a better way to solve this problem?
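For context, the usual workaround in Icehouse-era Neutron deployments is to have dnsmasq push the MTU to guests via DHCP option 26. The file path below follows the common convention (adjust to your deployment) and the value 1454 is taken from this thread; note that Windows DHCP clients are widely reported to ignore this option, which is likely why the poster still has to set the MTU by hand.

```ini
# /etc/neutron/dnsmasq-neutron.conf
# Advertise an interface MTU of 1454 to DHCP clients (DHCP option 26).
# dhcp-option-force sends the option even to clients that did not request it.
dhcp-option-force=26,1454
```

Point the DHCP agent at this file via dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf in dhcp_agent.ini, then restart the agent.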


2 answers


answered 2014-07-14 06:26:18 -0500

Mathias Ewald

The solution proposed in the various documentation, setting the guest MTU to 1400, is complete nonsense in my opinion. Rather than being even more invasive in the guest operating system than we already are (cloud-init, etc.), we should move the solution into the networking infrastructure: increasing the MTU of the tunnel/transport VLAN should fix it without lowering the guest MTU. I am still trying to figure out which interfaces and bridges need to be reconfigured.
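As a sketch of that approach: raise the MTU on the interface carrying the tunnel traffic so the GRE overhead fits without shrinking the guest MTU. The interface name and value below are hypothetical, and the physical switches must also be configured to pass the larger frames:

```shell
# Hypothetical transport NIC eth1; 1600 leaves headroom for GRE overhead
# on top of 1500-byte guest frames.
ip link set dev eth1 mtu 1600

# Debian/Ubuntu: make it persistent in /etc/network/interfaces:
#   iface eth1 inet static
#       ...
#       mtu 1600
```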



With jumbo frames (MTU 8000+), which are recommended with higher-throughput networking, on the backend this becomes less and less of an issue. You can tweak the MTU up and down the network stack to the appropriate settings. I accommodate my guests as you suggest, not the other way around.

SamYaple (2014-07-14 08:55:01 -0500)

Hi SamYaple, first I'd like to note that anything beyond a 1500-byte MTU is considered a jumbo frame - not only 8000 and above. Secondly, and more importantly, there is no use in configuring a 10 Gbit or faster connection with an MTU higher than 1500 by default. The setting itself is useless as long as nobody on the network actually makes use of it. The one exception is overlay networking such as GRE or VXLAN, where normally-sized traffic, for example from VMs, has to be encapsulated without their knowledge. In that case additional header information has to be put in place, increasing the size of the physically transported frame.
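The encapsulation overhead can be worked out from the standard header sizes (Neutron's exact GRE options may differ; the thread's 1454 leaves a few extra bytes of margin for optional GRE fields such as checksums or sequence numbers):

```shell
# Per-packet overhead of tunneling an inner Ethernet frame over IPv4:
#   GRE:   20 (outer IPv4) + 8 (GRE with key)    + 14 (inner Ethernet) = 42
#   VXLAN: 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN) + 14 (inner Ethernet) = 50
echo $((1500 - 42))   # largest guest MTU that fits GRE in a 1500-byte frame: 1458
echo $((1500 - 50))   # same for VXLAN: 1450
echo $((9000 - 42))   # a jumbo transport leaves guests ample room: 8958
```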

The problem I am having is of a more specific nature. I am running Neutron with ML2 (OVS and GRE) and experience a complete loss of connectivity once I put some load on the line. Reestablishing the ...(more)

Mathias Ewald (2014-07-14 10:03:03 -0500)

The 8000+ was referring to getting any benefit out of jumbo frames. And we are speaking of bundling traffic in Neutron, so the underlying interfaces will not take advantage of the 1500+ MTU, but the large payloads will still benefit by being enabled up the chain when traffic from multiple interfaces gets bundled. I believe we are speaking the same language here.

Try setting the MTU with the ip command: ip link set dev br-transport mtu 9000. I have had trouble with ifconfig as of late.

I have seen similar issues with double VLANs before. If you can figure out the HIGHEST MTU you can set, this may give you an idea of which layer of encapsulation is causing this.
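One way to probe the effective path MTU from a Linux host is ping with the don't-fragment bit set; the address and sizes below are placeholders:

```shell
# -M do forbids fragmentation; -s is the ICMP payload size.
# 1472 = 1500 - 20 (IP header) - 8 (ICMP header). Shrink the size until the
# ping succeeds; the largest working payload + 28 is the path MTU.
ping -c 1 -M do -s 1472 10.0.0.1
ping -c 1 -M do -s 1430 10.0.0.1
```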

SamYaple (2014-07-14 10:33:18 -0500)

I had issues with Windows images previously, but the fix was actually turning off GRO on the physical interfaces. This greatly increased the network speed on the Windows instances as well as the Linux ones. I spent a week hunting down MTU issues, but you don't really want to use that as a fix.
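For reference, generic receive offload can be inspected and disabled with ethtool (the NIC name here is hypothetical, and the change lasts only until reboot unless made persistent):

```shell
# Show the current offload state, then turn generic receive offload off.
ethtool -k eth0 | grep generic-receive-offload
ethtool -K eth0 gro off
```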

mpetason (2014-07-14 10:34:46 -0500)

Unfortunately, ip doesn't do the job either:

root@node00:~# ip link set mtu 9000 br-transport
root@node00:~# ip lin show br-transport
7: br-transport: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 4e:41:9e:77:58:82 brd ff:ff:ff:ff:ff:ff
root@node00:~# ip lin show ovs-infra
9: ovs-infra: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 00:25:90:9a:c6:54 brd ff:ff:ff:ff:ff:ff
root@node00:~# ip link set mtu 9000 ovs-infra
root@node00:~# ip link show ovs-infra
9: ovs-infra: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 00:25:90:9a:c6:54 brd ff:ff:ff:ff:ff:ff

As br-transport is a fake bridge I thought I'd give ovs-infra (the parent) a try, but ...(more)
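For what it's worth, with Open vSwitch of that era the MTU of a bridge-internal interface was capped by the lowest-MTU non-internal port attached to the bridge, so the physical uplink usually has to be raised first. A sketch, with hypothetical interface names:

```shell
# Raise the physical uplink attached to ovs-infra first...
ip link set dev eth2 mtu 9000
# ...then the bridge-internal interface, and verify the change stuck.
ip link set dev ovs-infra mtu 9000
ip link show ovs-infra
```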

Mathias Ewald (2014-07-14 10:47:18 -0500)

answered 2014-08-25 15:46:22 -0500

Abyee

Hello All,

I am facing an issue with a Windows VM on Icehouse with the GRE ML2 config. I am able to ping and RDP to the VM over the external network just fine with MTU 1464 in the dnsmasq file, but in the browser on the Windows VMs I just get "waiting for site". The Windows OS settings for DNS and the firewall are all fine. What could the issue be - is it related to MTU only, or something else?
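Ping and RDP working while browsers stall is the classic symptom of a path-MTU blackhole: small packets get through, but full-size TCP segments with the don't-fragment bit set are silently dropped. One way to check from inside the Windows VM (the address is a placeholder):

```bat
:: -f sets don't-fragment, -l sets the payload size.
:: 1436 = 1464 - 28 bytes of IP+ICMP headers; shrink until replies come back.
ping -f -l 1436 8.8.8.8
```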



I'm having the same issue right now with Windows.

TheKid-89 (2015-01-02 01:21:26 -0500)


I was able to fix this issue by doing the following.

Open CMD as Admin:

    netsh interface ipv4 show subinterfaces
    netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent

Replace "Local Area Connection" with the interface you want to set the MTU for.

TheKid-89 (2015-01-02 03:59:55 -0500)



Seen: 1,861 times

Last updated: Jul 14 '14