Temporary packet drop after starting/stopping instance

asked 2014-02-04 23:12:11 -0600

DevEight

updated 2014-02-06 23:05:18 -0600

I'm currently running a multi-node Havana devstack environment in which only the controller runs nova-network. I can launch and remove instances, but every launch or removal causes roughly 90% packet loss to and from instances for 10-40 minutes. After that, everything works as expected until the next instance is launched or removed.

The environment consists of a combined controller/compute node and one additional compute node. Both physical hosts are on the same subnet, and I've configured iptables and nova-network so that all VMs and physical hosts can reach each other. One unexpected detail: even while there is severe packet loss between the physical hosts and the VMs, the console in Horizon keeps working perfectly.

Has anyone experienced anything similar or might have any pointers as to what I should look at?
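For anyone debugging similar symptoms: a sketch of how I'd narrow this down (my own suggestion, not from the thread). With FlatDHCPManager, transient post-launch connectivity loss is often ARP-related (send_arp_for_ha, dnsmasq restarts), so watching ARP/DHCP traffic and iptables counters during the drop window can show where packets disappear. Interface and IP names below are placeholders for this setup:

```shell
# Check whether the host's neighbor cache entry for the VM is stale
# (br100 is the flat network bridge from the config below):
ip neigh show dev br100

# Watch for gratuitous ARP and DHCP traffic while the drop is happening
# (requires root; run on both physical hosts and compare):
sudo tcpdump -ni br100 arp or port 67 or port 68

# Compare per-rule packet counters before and after a ping burst to see
# which nova-network iptables chain the traffic dies in:
sudo iptables -L -v -n
```

If the tcpdump shows ARP requests going unanswered, or answered with a stale MAC, that points at the ARP cache rather than the firewall rules.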

Devstack localrc:



flat_interface = eth0
flat_network_bridge = br100
vlan_interface = eth0
public_interface = br100
network_manager = nova.network.manager.FlatDHCPManager
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
compute_driver = libvirt.LibvirtDriver
use_usb_tablet = False
libvirt_cpu_mode = none
libvirt_type = kvm
glance_api_servers = XXXXXXXXXXXXXXXXXX:9292
rabbit_password = XXXXXXXXXXXXXXXXXX
rabbit_host = localhost
rpc_backend = nova.openstack.common.rpc.impl_kombu
ec2_dmz_host = 143.248.X.X
vncserver_proxyclient_address =
vncserver_listen =
vnc_enabled = true
xvpvncproxy_base_url = http://143.248.X.X:6081/console
novncproxy_base_url = http://143.248.X.X:6080/vnc_auto.html
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s
logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s
logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s%(color)s] %(instance)s%(color)s%(message)s
send_arp_for_ha = True
multi_host = True
instances_path = /opt/stack/data/nova/instances
lock_path = /opt/stack/data/nova
state_path = /opt/stack/data/nova
volume_api_class = nova.volume.cinder.API
enabled_apis = ec2,osapi_compute,metadata
instance_name_template = instance-%08x
sql_connection = mysql://root:XXXXXXXXXXXXXXXXXX@
metadata_workers = 4
ec2_workers = 4
osapi_compute_workers = 4
my_ip = 143.248.X.X
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
s3_port = 3333
s3_host = 143.248.X.X
default_floating_pool = public
fixed_range = 
force_dhcp_release = True
dhcpbridge_flagfile = /etc/nova/nova.conf
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
rootwrap_config = /etc/nova/rootwrap.conf
api_paste_config = /etc/nova/api-paste.ini
allow_resize_to_same_host = True
auth_strategy = keystone
debug = True
verbose = True

enabled = True

signing_dir = /var/cache/nova
admin_user = nova
admin_tenant_name = service
auth_host = 143.248.X.X

enabled = false
html5proxy_base_url = http://143.248.X.X:6082/spice_auto.html


Edit your question to add more details about your testing environment, as the possible causes are almost infinite :)

smaffulli ( 2014-02-06 17:35:03 -0600 )

I've added some more details. I'm not sure what more information could be useful. Is there anything specific you think would help?

DevEight ( 2014-02-06 23:06:27 -0600 )

1 answer


answered 2014-04-15 22:46:40 -0600

Can you recheck these localrc settings? FIXED_RANGE= FIXED_NETWORK_SIZE=4096
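For reference (my illustration, not part of the original answer): FIXED_NETWORK_SIZE has to agree with the prefix length of FIXED_RANGE, and 4096 addresses corresponds to a /20 network, so a hypothetical localrc might pair FIXED_NETWORK_SIZE=4096 with something like FIXED_RANGE=10.0.0.0/20. The relationship is:

```shell
# Number of addresses in a CIDR block = 2^(32 - prefix_length).
# FIXED_NETWORK_SIZE=4096 therefore implies a /20 FIXED_RANGE.
prefix=20
echo $(( 2 ** (32 - prefix) ))   # prints 4096
```

A mismatch between the two (or an empty fixed_range, as in the question's config dump) is worth ruling out before digging deeper.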





Seen: 169 times

Last updated: Apr 15 '14