
Devstack all in one + compute node. Neutron configuration

asked 2015-07-01 04:56:13 -0500 by tyyko

Hello, can someone tell me if this configuration (the networking part) is right or not? I'm experiencing problems with fragmented packets on instances on the compute node. (I can't SSH into VMs on the compute node; it sometimes works, but mostly not, without any change on my side.)

Setup: an all-in-one node (which also runs compute) plus a compute-only node.

ping -s 4000 INSTANCE_FLOAT_IP_ALLINONE -> works 

ping -s 4000 INSTANCE_FLOAT_IP_COMPUTE -> sometimes works if done from a VM on the ALLINONE node; from elsewhere it does not work at all
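
(Editor's note: one way to narrow this down is to probe the path MTU with Don't-Fragment pings; a hedged sketch, reusing the placeholder addresses above:)

# -M do sets the Don't-Fragment bit, so oversized packets fail loudly
# instead of fragmenting. 1472 = 1500 minus 28 bytes of IP+ICMP headers;
# 1372 would correspond to a 1400-byte path MTU.
ping -M do -s 1472 INSTANCE_FLOAT_IP_COMPUTE
ping -M do -s 1372 INSTANCE_FLOAT_IP_COMPUTE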

I'd prefer to change the config 'live' on Neutron and restart the processes from screen, instead of losing all the work done by re-stacking. I don't care about changing the tunneling mode (I chose this one by copying from a guide); I just need networking to work on both nodes, with a private 10.0.0.0/24 network and a public 192.168.10.0/24 network. The public network has a gateway for Internet access, which is the same machine that acts as the ALLINONE node, with NAT rules in iptables.
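
(Editor's note: for reference, the NAT setup on the ALLINONE node would look roughly like this sketch; eth1 is an assumed name for the second, public-facing card described below:)

# Masquerade the floating range out of the public-facing card
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth1 -j MASQUERADE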

I already changed the MTU on dnsmasq to 1400, and the Ubuntu cloud image shows an MTU of 1400 when I log in, but I still have the same problems.
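
(Editor's note: a quick sanity check that the 1400 actually reached dnsmasq, assuming the stock devstack/neutron paths:)

# The extra options file should appear on the dnsmasq command line,
# and the file itself should force DHCP option 26 (interface MTU)
ps -ef | grep [d]nsmasq
cat /etc/neutron/dnsmasq-neutron.conf    # expect: dhcp-option-force=26,1400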

On both nodes I have two network cards: eth0 on 192.168.10.0/24, and another one with a public IP used only for direct Internet connectivity and SSH management from the Internet.

ALLINONE NODE:

cat local.conf
[[local|localrc]]

#OFFLINE=True

SERVICE_TOKEN=x
ADMIN_PASSWORD=x
MYSQL_PASSWORD=x
RABBIT_PASSWORD=x
SERVICE_PASSWORD=$ADMIN_PASSWORD

LIBVIRT_TYPE=kvm
HOST_IP=192.168.10.250

FIXED_RANGE=10.0.0.0/24
FLOATING_RANGE=192.168.10.0/24
PUBLIC_NETWORK_GATEWAY=192.168.10.250
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.10,end=192.168.10.150
Q_L3_ENABLED=True
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service q-l3
enable_service neutron

enable_service n-novnc
enable_service n-cauth
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://192.168.10.250:6080/vnc_auto.html"
VNCSERVER_LISTEN=192.168.10.250
VNCSERVER_PROXYCLIENT_ADDRESS=192.168.10.250

Q_USE_SECGROUP=True
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=3001:4000
PHYSICAL_NETWORK=default
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

#Q_USE_PROVIDER_NETWORKING=True

MULTI_HOST=1
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=14
VOLUME_BACKING_FILE_SIZE=409600M

#enable_service s-proxy s-object s-container s-account
#SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
#SWIFT_REPLICAS=1
#SWIFT_DATA_DIR=$DEST/data

enable_service tempest

COMPUTE NODE:

cat local.conf
[[local|localrc]]
#OFFLINE=True

SERVICE_TOKEN=x
ADMIN_PASSWORD=x
MYSQL_PASSWORD=x
RABBIT_PASSWORD=x
SERVICE_PASSWORD=$ADMIN_PASSWORD

DATABASE_TYPE=mysql

SERVICE_HOST=192.168.10.250
MYSQL_HOST=192.168.10.250
RABBIT_HOST=192.168.10.250
GLANCE_HOSTPORT=192.168.10.250:9292
Q_HOST=192.168.10.250
#CINDER_SERVICE_HOST=192.168.10.250

enable_service n-cauth
enable_service n-novnc

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://192.168.10.250:6080/vnc_auto.html"
VNCSERVER_LISTEN=192.168.10.251
VNCSERVER_PROXYCLIENT_ADDRESS=192.168.10.251

#FIXED_RANGE="10.0.0.0/24"
#FLOATING_RANGE="192.168.10.0/24"
#PUBLIC_NETWORK_GATEWAY="192.168.10.250"

LIBVIRT_TYPE=kvm

ENABLED_SERVICES=n-cpu,q-agt,rabbit,neutron


HOST_IP=192.168.10.251
MULTI_HOST=1

LOGFILE=$DEST/logs/stack.sh.log

LOGDAYS=14

#enable_service s-proxy s-object s-container s-account
#SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
#SWIFT_REPLICAS=1
#SWIFT_DATA_DIR=$DEST/data

#enable_service tempest

Comments

In case it's not clear: the goal is to stop needing to change the MTU on the VMs, because in some cases that is not possible, so I just want the network to work with the default MTU of the instances (1500).

tyyko (2015-07-01 07:38:04 -0500)

I worked around it by hard-coding a script that changes the MTU value on boot in the VMs with a fixed IP; any better solution is welcome.

tyyko (2015-07-06 02:47:21 -0500)
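
(Editor's note: a minimal sketch of such a boot-time script, assuming an Ubuntu guest whose interface is eth0:)

#!/bin/sh
# /etc/rc.local inside the guest: clamp the MTU on every boot
ip link set dev eth0 mtu 1400
exit 0

(Remember to make /etc/rc.local executable with chmod +x.)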

1 answer


answered 2015-07-08 16:15:24 -0500

Try reducing the MTU of the launched instances if you can access them from the console; if that works, make the solution permanent.

From the console, type this command:

sudo ip link set eth0 mtu 1400

If SSH works, fine; if not, reduce the MTU further. Once SSH works, do the following to make it permanent.

Set dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf in /etc/neutron/dhcp_agent.ini.

Then create the file /etc/neutron/dnsmasq-neutron.conf with this content:

dhcp-option-force=26,1400
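
(Editor's note: as a shell sketch of these two steps; the ini edit is left as a manual step since it must land in the right [DEFAULT] section:)

# 1. Create the extra dnsmasq config (DHCP option 26 = interface MTU)
echo 'dhcp-option-force=26,1400' | sudo tee /etc/neutron/dnsmasq-neutron.conf

# 2. In the [DEFAULT] section of /etc/neutron/dhcp_agent.ini, add:
#    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf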

Then restart all Neutron agents and dnsmasq.
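
(Editor's note: on devstack, as in this question, the agents run inside the "stack" screen session rather than under an init system; roughly:)

# Attach to the devstack screen session, find the q-dhcp window
# (Ctrl-a " lists windows), Ctrl-C the agent, then re-run the command
# shown in that window. Restarting the agent respawns dnsmasq with
# the new options.
screen -x stack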

Hope it works.

