Irregular Network Latency to Instances
Hi, I am seeing recurring high network latency to my instances:
64 bytes from XXX.XXX.XXX.XXX: icmp_req=44 ttl=56 time=1387 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=45 ttl=56 time=380 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=46 ttl=56 time=4.79 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=47 ttl=56 time=4.64 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=48 ttl=56 time=5.07 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=49 ttl=56 time=4.61 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=50 ttl=56 time=4.77 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=51 ttl=56 time=5.21 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=52 ttl=56 time=4.73 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=53 ttl=56 time=4.54 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=54 ttl=56 time=1596 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=55 ttl=56 time=597 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=56 ttl=56 time=4.82 ms
64 bytes from XXX.XXX.XXX.XXX: icmp_req=57 ttl=56 time=4.78 ms
^C
--- XXX.XXX.XXX.XXX ping statistics ---
65 packets transmitted, 65 received, 0% packet loss, time 64109ms
rtt min/avg/max/mdev = 4.441/75.162/1596.820/277.746 ms, pipe 2
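For reference, the spikes are easy to quantify from a saved ping capture. A quick sketch with awk; the 100 ms threshold is an arbitrary cutoff, and the sample lines are copied from the output above:

```shell
#!/bin/sh
# Count RTT spikes (replies over 100 ms) in a saved ping capture.
# Sample lines taken from the capture above; /tmp/ping.log and the
# 100 ms threshold are illustrative choices, not part of any tool.
cat > /tmp/ping.log <<'EOF'
64 bytes from 192.168.10.2: icmp_req=15 ttl=64 time=0.604 ms
64 bytes from 192.168.10.2: icmp_req=16 ttl=64 time=584 ms
64 bytes from 192.168.10.2: icmp_req=23 ttl=64 time=1458 ms
EOF
awk -F'time=' '/time=/ { split($2, a, " "); if (a[1] + 0 > 100) n++ }
  END { print n+0, "spikes over 100 ms" }' /tmp/ping.log
```

Running this over a longer capture (e.g. `ping -c 1000 <instance> | tee /tmp/ping.log`) gives a rough spike rate to compare before and after any configuration change.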
This ping travels through a router VM and a Neutron network node to an instance via its floating IP. I do not see the same spikes when pinging the router VM itself (the 392 ms outlier happened only once):
49 packets transmitted, 49 received, 0% packet loss, time 48080ms
rtt min/avg/max/mdev = 3.131/11.272/392.985/55.095 ms
However, I can even see the spikes between two instances on the same GRE-based network:
64 bytes from 192.168.10.2: icmp_req=15 ttl=64 time=0.604 ms
64 bytes from 192.168.10.2: icmp_req=16 ttl=64 time=584 ms
64 bytes from 192.168.10.2: icmp_req=17 ttl=64 time=0.622 ms
64 bytes from 192.168.10.2: icmp_req=18 ttl=64 time=0.714 ms
64 bytes from 192.168.10.2: icmp_req=19 ttl=64 time=0.687 ms
64 bytes from 192.168.10.2: icmp_req=20 ttl=64 time=0.676 ms
64 bytes from 192.168.10.2: icmp_req=21 ttl=64 time=0.635 ms
64 bytes from 192.168.10.2: icmp_req=22 ttl=64 time=0.612 ms
64 bytes from 192.168.10.2: icmp_req=23 ttl=64 time=1458 ms
64 bytes from 192.168.10.2: icmp_req=24 ttl=64 time=464 ms
64 bytes from 192.168.10.2: icmp_req=25 ttl=64 time=0.631 ms
64 bytes from 192.168.10.2: icmp_req=26 ttl=64 time=1074 ms
64 bytes from 192.168.10.2: icmp_req=27 ...
I observe exactly the same behavior. Did you find a solution to your problem?
This is driving me nuts, because it also affects the VRRP keepalive messages, which leads to flapping of the VIP assignment.
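For context on why spikes of this size can flap the VIP: per RFC 3768, a VRRP backup declares the master down after 3 * advertisement interval + skew time, where skew = (256 - priority) / 256 seconds. A rough calculation, assuming keepalived's default 1 s advertisement interval and a priority of 100 (both placeholder values, not taken from any actual config):

```shell
# VRRP master-down timer (RFC 3768): 3 * advert_int + skew,
# with skew = (256 - priority) / 256 seconds.
# advert = 1.0 s is the keepalived default; priority = 100 is an
# assumed example value.
awk 'BEGIN {
  advert = 1.0; priority = 100
  skew = (256 - priority) / 256
  printf "master_down_interval = %.3f s\n", 3 * advert + skew
}'
```

With a window of roughly 3.6 s, a single 1.5 s spike is not enough on its own, but a few consecutive adverts delayed or dropped during a latency burst could already trigger a failover.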