
how do I configure mitaka to support jumbo frames

asked 2017-02-02 06:10:16 -0500

schmilmonderer


I would like to be able to send large packets between two VMs. I've configured the following in my environment, as described in (

  1. In the neutron.conf file: [DEFAULT] global_physnet_mtu = 9000
  2. In the ml2_conf.ini file: [ml2] path_mtu = 9000
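
For reference, those two settings as a config fragment (standard /etc/neutron paths assumed; adjust to your deployment):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
path_mtu = 9000
```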

But I'm still unable to send large packets between VMs. The MTU of the tap interfaces and the integration bridge is still 1500.




Hi, did you resolve this issue after all? I'm running into the same issue... Thanks.

Kent Wu ( 2017-02-15 13:09:15 -0500 )

1 answer


answered 2017-02-24 19:39:42 -0500

prakashrao

I updated the file in the /usr/lib/python2.7/dist-packages/neutron/agent/common directory with the one I got after downloading the patch for the bug.

Then I set global_physnet_mtu = 9000 in neutron.conf on all nodes, and path_mtu = 9000 and physical_network_mtus = 9000 under [ml2] in the ml2_conf.ini file.
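
Note that `physical_network_mtus` takes `label:mtu` pairs rather than a bare number; a sketch, where `provider` is a placeholder for your physical network label:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
path_mtu = 9000
physical_network_mtus = provider:9000
```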

I also had to recreate my virtual networks and instances for the MTU to take effect in the instances.

The instances and the tap* interfaces have an MTU of 8950, but virbr0, which seems to be the Linux integration bridge, still has an MTU of 1500. I am still unable to ping with larger packets from VM to VM.

On the compute nodes I see that virbr0 is still 1500, and I am not sure why:

tap8eca5f1d-bb Link encap:Ethernet  HWaddr fe:16:3e:d5:46:35
          inet6 addr: fe80::fc16:3eff:fed5:4635/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:8950  Metric:1
          RX packets:45 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3694 (3.6 KB)  TX bytes:4967 (4.9 KB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:d9:25:2d
          inet addr:  Bcast:  Mask:
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Also, for some strange reason I am unable to SSH to my instances using the floating IP after I updated the MTU values. I am able to ping the floating IPs, though. The instances' management interfaces have come up with 8950 as the MTU.
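
An aside on the ping tests: to actually exercise the full MTU with ping, the ICMP payload has to account for the IPv4 and ICMP headers, or the kernel will happily fragment a "small" ping and the test proves nothing. A sketch of the arithmetic (the target address in the comment is a placeholder):

```shell
# ICMP payload that exactly fills a given MTU:
# payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
mtu=8950
payload=$((mtu - 20 - 8))
echo "$payload"    # prints 8922
# With the Don't-Fragment bit set, oversized pings fail loudly instead of fragmenting:
# ping -M do -s "$payload" 10.0.0.5    # 10.0.0.5 is a placeholder instance IP
```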



virbr0 is created by libvirt and is not used by OpenStack. If you deploy ML2 with the Linuxbridge mechanism driver, the bridges are named brq...; in case of the OVS driver, it's br-int (configurable).

Bernd Bausch ( 2017-02-24 19:54:01 -0500 )
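
To see which bridges actually exist and what MTU each carries, you can list interfaces with `ip -o link` and pull out the name/MTU pairs. A sketch, run here against inlined sample output so the parsing step is reproducible (the interface names and numbers are illustrative):

```shell
# On a real node you would pipe the live output in: ip -o link | awk '...'
# Sample `ip -o link` style output, inlined for reproducibility:
sample='4: brq1a2b3c4d-5e: <BROADCAST,MULTICAST,UP> mtu 8950 qdisc noqueue state UP
5: tap8eca5f1d-bb: <BROADCAST,MULTICAST,UP> mtu 8950 qdisc pfifo_fast state UP
6: virbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN'
# Strip the trailing colon from the name field and print "name mtu":
echo "$sample" | awk '{gsub(":","",$2); for (i = 1; i <= NF; i++) if ($i == "mtu") print $2, $(i+1)}'
```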

Compute node brq* and tap* interfaces have MTU 8950. The physical NICs have MTU 9000. The controller node's brq* interface MTU is 9000. I recreated all the provider networks, self-service networks and the mgmt router. The instances themselves come up with an MTU of 8950, but I still can't ping the instances with jumbo packets.

prakashrao ( 2017-02-27 13:35:20 -0500 )

Also, after this change the instances are unable to reach the public network. I can reach the compute nodes using the provider network, but I can't SSH into the instances using the floating IP (provider). Before this change I was able to SSH into the instances using the provider network.

prakashrao ( 2017-02-27 13:35:55 -0500 )

Hi guys,

I have the same problem.

I have deployed OpenStack Ocata using Ubuntu 16.04, MAAS 2.1.3 and Juju charms.

I have set the configuration flags in the neutron-api charm to reflect the MTU size, yet when I build overlay networks I see that the MTU is only 1458. Did you guys have any success running jumbo frames?

cbspidi ( 2017-03-20 05:42:06 -0500 )
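
On the 1458 figure: Neutron derives an overlay network's MTU by subtracting the tunnel encapsulation overhead from the physical-network MTU. 1458 is consistent with a 42-byte (GRE) overhead on a 1500-byte physnet MTU, and the 8950 seen earlier in this thread matches a 50-byte (VXLAN over IPv4) overhead on a 9000-byte physnet MTU. The arithmetic, as a sketch:

```shell
# Overlay MTU = physical-network MTU - tunnel encapsulation overhead
physnet=1500
echo $((physnet - 42))   # GRE:   1458
echo $((physnet - 50))   # VXLAN: 1450
physnet=9000
echo $((physnet - 50))   # VXLAN: 8950 -- the MTU seen on the tap/brq interfaces above
```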


