Network performance VM to VM

Hi,

I am running a test OpenStack environment with 2 compute nodes, each of them with an MTU of 9000.

Compute nodes:

  • hercules-21 (10.0.32.21): 64 CPUs, 512 GB RAM and a 2x 25 Gbps bonded network
  • hercules-22 (10.0.32.22): 64 CPUs, 512 GB RAM and a 2x 25 Gbps bonded network

VMs:

  • centos (192.168.1.110): 8 vCPUs, 16 GB RAM
  • centos2 (192.168.1.109): 8 vCPUs, 16 GB RAM

Network bandwidth test, physical host to physical host, using iperf:

[root@hercules-21 ~]# iperf -c 10.0.32.22 -P 4
------------------------------------------------------------
Client connecting to 10.0.32.22, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  5] local 10.0.32.21 port 59014 connected with 10.0.32.22 port 5001
[  3] local 10.0.32.21 port 59008 connected with 10.0.32.22 port 5001
[  4] local 10.0.32.21 port 59010 connected with 10.0.32.22 port 5001
[  6] local 10.0.32.21 port 59012 connected with 10.0.32.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  6.91 GBytes  5.94 Gbits/sec
[  3]  0.0-10.0 sec  6.97 GBytes  5.98 Gbits/sec
[  4]  0.0-10.0 sec  6.96 GBytes  5.98 Gbits/sec
[  6]  0.0-10.0 sec  6.77 GBytes  5.82 Gbits/sec
[SUM]  0.0-10.0 sec  27.6 GBytes  23.7 Gbits/sec

Network bandwidth test, VM to VM, using iperf (each VM is running on a different host):

[centos@centos2 ~]$ iperf -c 192.168.1.110 -P 4
------------------------------------------------------------
Client connecting to 192.168.1.110, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  6] local 192.168.1.109 port 60244 connected with 192.168.1.110 port 5001
[  3] local 192.168.1.109 port 60238 connected with 192.168.1.110 port 5001
[  4] local 192.168.1.109 port 60240 connected with 192.168.1.110 port 5001
[  5] local 192.168.1.109 port 60242 connected with 192.168.1.110 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.11 GBytes  1.81 Gbits/sec
[  4]  0.0-10.0 sec  2.12 GBytes  1.82 Gbits/sec
[  5]  0.0-10.0 sec  2.10 GBytes  1.80 Gbits/sec
[  6]  0.0-10.0 sec  2.13 GBytes  1.83 Gbits/sec
[SUM]  0.0-10.0 sec  8.45 GBytes  7.25 Gbits/sec

I am using jumbo frames on the physical machines, so I did the same in OpenStack.
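
As far as I understand, the 8950 MTU that shows up inside the VM below is Neutron subtracting the 50-byte VXLAN overhead (20 IP + 8 UDP + 8 VXLAN + 14 inner Ethernet) from the 9000-byte physical MTU, which would correspond to settings roughly like these (illustrative values, not copied verbatim from my deployment):

# /etc/neutron/neutron.conf
[DEFAULT]
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
path_mtu = 9000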

MTU on physical host:

[root@hercules-21 ~]# ip a
...
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP qlen 1000
    link/ether 7c:fe:90:12:23:ec brd ff:ff:ff:ff:ff:ff
    inet 10.0.32.21/16 brd 10.0.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::b1b0:74dd:8a3:705e/64 scope link
       valid_lft forever preferred_lft forever
...

MTU on VM:

[centos@centos ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:39:41:08 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.110/24 brd 192.168.1.255 scope global dynamic eth0
       valid_lft 85658sec preferred_lft 85658sec
    inet6 fe80::f816:3eff:fe39:4108/64 scope link
       valid_lft forever preferred_lft forever
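
To double check that jumbo frames actually make it through the VXLAN tunnel end to end, a simple sanity check I can run from one VM to the other is a ping with the don't-fragment bit set (8922 bytes of payload = 8950 minus 20 bytes IP and 8 bytes ICMP header):

[centos@centos2 ~]$ ping -M do -s 8922 -c 3 192.168.1.110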

NOTES:

  • The system overall is not busy, as it is only running these 2 VMs, so there is plenty of spare capacity.
  • I monitored the CPUs on the VMs during the tests and they are not throttling the network test (guest side only; see the host-side check just below).
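
One thing the notes above do not cover is the host side: even if the guest vCPUs look idle, the vhost-net and OVS threads on the hypervisor could still be saturating a core. A generic way I could watch this during a test run (nothing deployment-specific, just standard tools) would be:

[root@hercules-21 ~]# top -H           # look for busy vhost-* and ovs-vswitchd threads
[root@hercules-21 ~]# mpstat -P ALL 1  # watch per-core %soft (softirq) usage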

NOTES 2: I am not sure whether this is important to mention, but according to OVS the ports are reported as 10 Gbps (a quick way to double check this is noted after the output):

[root@hercules-21 ~]# docker exec -itu 0 openvswitch_vswitchd ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000b6d41e15d246
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(patch-tun): addr:2e:f1:69:9c:6b:01
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(qvodaf83835-28): addr:96:47:72:b2:4d:12
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:b6:d4:1e:15:d2:46
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
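
As far as I know, the 10GB-FD shown on the qvo port is just the nominal link speed of the veth pair rather than a real hardware cap, but to double check I could also look at what OVS records for that interface (same container as the ovs-ofctl command above, assuming ovs-vsctl is available there):

[root@hercules-21 ~]# docker exec -itu 0 openvswitch_vswitchd ovs-vsctl list interface qvodaf83835-28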

VXLAN offload is enabled on the physical NICs (a matching guest-side check is noted after the output):

[root@hercules-21 ~]# ethtool -k p2p1
Features for p2p1:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: on
        tx-checksum-ip-generic: off [fixed]
        tx-checksum-ipv6: on
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-mpls-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
busy-poll: off [fixed]
tx-sctp-segmentation: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
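
What the output above does not show is whether the equivalent offloads are active on the virtio interface inside the guests; the same kind of check can be run there (eth0 being the guest interface from the ip a output earlier):

[centos@centos ~]$ ethtool -k eth0 | egrep 'segmentation|scatter|checksum'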

Environment details:

  • OpenStack version: Pike
  • Deployment: kolla-ansible
  • Hypervisor: KVM
  • Network setup: Neutron + OVS + VXLAN

QUESTIONS:

  • Why are my VMs not fully utilising the network, and what can I do next to give them the full bandwidth?

  • Am I going against the way OpenStack works, and should I just leave it the way it is?

Thank you very much