VMs on Controller Node can't properly send/receive broadcasts

asked 2013-07-22 14:23:12 -0600 by jzwiep

On our private OpenStack Essex cloud (single host, FlatDHCP networking), VMs spawned on the controller can't properly send or receive broadcasts (the traffic looks like it is being NATed), while VMs spawned on the compute nodes don't experience any issues at all.

We have two independent networks:

Network 1: 10.10.32.0/21

Network 2: 10.10.20.0/23

and the VMs are all connected to 'network 1' and are given addresses in the subnet 10.10.38.0/24. When a broadcast is sent from a VM on the controller node, machines on 'network 1' see it as coming from the controller's 'network 2' interface address. Similarly, when a broadcast is sent from a machine on 'network 1', a VM on the controller node sees it as coming from the controller's 'network 2' interface address.

This does not happen at all on the compute node guests.
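
To see the rewriting happen, you can capture the broadcast traffic on the controller on both the bridge and the physical interface (a quick diagnostic sketch, using the interface names from our config below):

    # As the VMs on the controller see it (bridge side)
    sudo tcpdump -n -e -i br100 broadcast
    # As 'network 1' sees it (physical side); if NAT is involved,
    # the source address differs between the two captures
    sudo tcpdump -n -e -i eth0 broadcast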

Our config setup:

/etc/network/interfaces (eth0 is connected to 'network 1', eth1 to 'network 2')

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

auto lo
iface lo inet loopback

# eth0 is not defined here, because it is used in the bridge
auto eth1
iface eth1 inet dhcp

auto eth0
iface eth0 inet dhcp

auto br100
iface br100 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0
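
A quick sanity check that the bridge actually came up as intended (assuming the interface names above):

    sudo brctl show br100   # eth0 should be listed as a port of the bridge
    ip addr show br100      # the bridge should hold the 'network 1' address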

/etc/nova/nova.conf

--sql_connection=mysql://novadbadmin:novasecret@10.10.33.233/nova
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/run/lock/nova
--allow_admin_api=true
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
--scheduler_available_filters=nova.scheduler.filters.standard_filters
--scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter
--cpu_allocation_ratio=1.0
--s3_host=10.10.33.233
--ec2_host=10.10.33.233
--cc_host=10.10.33.233
--nova_url=http://10.10.33.233:8774/v1.1/
--glance_api_servers=10.10.33.233:9292
--image_service=nova.image.glance.GlanceImageService
--ec2_url=http://10.10.33.233:8773/services/Cloud
--keystone_ec2_url=http://10.10.33.233:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
--multi_host=true
--iscsi_helper=tgtadm
--root_helper=sudo nova-rootwrap
--verbose
--connection_type=libvirt
--libvirt_type=kvm
--start_guests_on_host_boot=true
--resume_guests_state_on_host_boot=true
--rpc_response_timeout=3600

# vnc specific configuration
--novnc_enabled=true
--novncproxy_base_url=http://10.10.33.233:6080/vnc_auto.html
--vncserver_proxyclient_address=10.10.33.233
--vncserver_listen=10.10.33.233

# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--libvirt_use_virtio_for_bridges=true
--fixed_range=10.10.32.0/21
--flat_interface=eth0
--flat_network_bridge=br100
--flat_injected=False

# RabbitMQ settings
--rabbit_host=10.10.33.233

# Metadata settings
--metadatahost=10.10.33.233
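
It is also worth comparing fixed_range above against the network nova actually created for the VMs (a hedged check; the exact output columns depend on the release):

    nova-manage network list
    # The IPv4 range listed here is 10.10.38.0/24 in our case,
    # while --fixed_range above is 10.10.32.0/21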

iptables rules for an instance on the controller:

Chain nova-compute-inst-229 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             state INVALID
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
nova-compute-provider  all  --  anywhere             anywhere
ACCEPT     udp  --  10.10.38.1           anywhere             udp spt:bootps dpt:bootpc
ACCEPT     all  --  10.10.38.0/24        anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:3389
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 50080:50110
ACCEPT     udp ...
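
The filter-table rules above look normal; for this problem the interesting part is the nat table on the controller, where nova-network adds a SNAT rule for the fixed range (chain names are from memory and may differ slightly):

    sudo iptables -t nat -L nova-network-POSTROUTING -n -v
    sudo iptables -t nat -L nova-network-snat -n -v
    # Look for a SNAT rule rewriting 10.10.38.0/24 to the controller's
    # routing_source_ip -- that rewrite is what other hosts see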

1 answer


answered 2013-07-24 20:47:41 -0600 by jzwiep

Managed to fix it.

I corrected the fixed_range in nova.conf to match nova's network, and added this config option:

--dmz_cidr=10.10.32.0/21

This stops VM traffic to that range from being SNATed (see https://answers.launchpad.net/nova/+question/185826).
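
With the corrected fixed_range and dmz_cidr in place (and nova-network restarted), the nat table should show an ACCEPT rule for that range ahead of the SNAT rule, so VM traffic to 'network 1' keeps its real source address. A sketch of what to look for, chain names may vary:

    sudo iptables -t nat -L nova-network-POSTROUTING -n
    # Expect a line roughly like:
    #   ACCEPT  all  --  10.10.38.0/24  10.10.32.0/21
    # ahead of the jump to nova-network-snat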

