VM connection error between VMs on different hosts (VXLAN tunnels)

asked 2015-03-06 08:07:09 -0500 by agti

updated 2015-03-09 09:03:27 -0500

Hi, I recently added a new compute node through Packstack (RDO).

It seems that VMs started on the new host can communicate with VMs on the controller node (which is also a compute node), but not with VMs on the other compute nodes. The problem is limited to the new node (communication between VMs on the other hosts works correctly).

I'm using RDO Packstack with a VXLAN configuration (no DVR).

How can I check the iptables configuration to be sure everything is OK? Is there any rule that Packstack inserts into iptables on the controller node to allow inter-compute communication?
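For reference, this is roughly how I have been inspecting the rules so far; I'm assuming the default VXLAN UDP port 4789 and the neutron-openvswi- chain prefix used by the OVS agent's hybrid firewall driver, so please correct me if those don't apply here:

# Look for a rule opening the VXLAN UDP port (4789 by default)
iptables-save | grep 4789

# List the security-group chains managed by the OVS agent
iptables -S | grep neutron-openvswi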

Thanks in advance!

UPDATE

I still can't get it to work, and I don't really know where to look next. There are no errors in the log files!

UPDATE 2

How can I check for inter-host iptables rules that allow tunnel traffic? Do such rules exist?
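The only check I've come up with so far is the one below (again assuming VXLAN on UDP 4789); the packet counters should at least show whether tunnel traffic is hitting a rule at all:

# Show INPUT-chain rules, with hit counters and line numbers
iptables -L INPUT -n -v --line-numbers

# Dump rules with counters and look for the VXLAN port
iptables-save -c | grep 4789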

UPDATE 3

In /etc/sysctl.conf I see this on the working host

net.ipv4.tcp_keepalive_intvl=1
net.ipv4.tcp_keepalive_time=5
net.ipv4.tcp_keepalive_probes=5
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1

while I see this on the non-working one

net.ipv4.tcp_keepalive_intvl=1
net.ipv4.tcp_keepalive_time=5
net.ipv4.tcp_keepalive_probes=5
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Could this be the issue? The official installation guide seems to suggest that the latter is the correct configuration...
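To rule out a stale file, I also compared the values actually in effect at runtime (the file only takes effect after sysctl -p or a reboot), with something like this on both hosts:

# Print the runtime values, regardless of what /etc/sysctl.conf says
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
sysctl net.bridge.bridge-nf-call-iptables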

UPDATE 4

Host1 working
Host4 Not working
VM1 on Host1, VM4 on Host4

While troubleshooting, I see that traffic from a ping test from VM1 to VM4 arrives at the physical interface on Host4, but I never see a reply. In the opposite test, pinging from VM4 to VM1, I see traffic arriving at VM1, which replies correctly; I then see the reply on the Host1 interface and on the Host4 interface, but it never reaches VM4.
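For the record, this is roughly how I captured the traffic; eth0 stands in for the actual physical interface, 4789 for the VXLAN UDP port, and the tap device name is a placeholder you would read from ip link:

# On Host4: watch encapsulated tunnel traffic on the physical NIC
tcpdump -n -i eth0 udp port 4789

# On Host4: watch the decapsulated ICMP on the VM's tap device
tcpdump -n -i tapXXXXXXXX icmp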

With iptables disabled on Host4, communication between the VMs works correctly.

How can I check the iptables configuration and enforce the correct configuration through Neutron?

Here is the iptables-save output from Host4, and here is the one from the working host. They are slightly different...
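To pin down the differences I normalized both dumps and diffed them, roughly like this:

# On each host: drop the timestamp comments and sort for a stable diff
iptables-save | grep -v '^#' | sort > /tmp/rules-$(hostname).txt

# Then copy both files to one machine and compare
diff /tmp/rules-host1.txt /tmp/rules-host4.txt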


Comments

It looks like the interface used for communication between the compute nodes is not configured correctly. You should look at the Neutron configuration on the new compute node and check that it matches the controller node.

foster ( 2015-03-09 03:14:58 -0500 )

I have already compared the Neutron configuration between another compute host and the non-working one. The only difference I see is local_ip in ovs_neutron_plugin.ini, which I believe is correct. Thanks anyway for your answer.
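For completeness, I compared the tunnel-related settings with something like this (the path is the one Packstack used on my nodes, so it may differ elsewhere):

grep -E 'local_ip|tunnel_type|enable_tunneling' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini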

agti ( 2015-03-09 07:50:25 -0500 )

1 answer

answered 2015-06-19 06:27:54 -0500 by ihar-hrachyshka

Have you checked that the tunnels between Host1 and Host4 are up and point to the correct IP endpoints? When you add a new OVS agent, it should register with the controller, and the controller then propagates tunnel updates to all other compute nodes so they create tunnels for communication with the new agent.
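A quick way to verify this, in case it helps; ovs-vsctl and ovs-ofctl are the standard Open vSwitch tools, and br-tun is the tunnel bridge the OVS agent creates by default:

# On each compute node: every peer should show up as a vxlan port with the right remote_ip
ovs-vsctl show

# Flows on the tunnel bridge; look for output actions toward the vxlan ports
ovs-ofctl dump-flows br-tun

# On the controller: confirm the new agent registered and is alive
neutron agent-list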
