VM connection error between VMs on different hosts (VXLAN tunnels)

Hi, I recently added a new compute node through Packstack (RDO).

VMs started on the new host can communicate with VMs on the controller node (which is also a compute node), but not with VMs on the other compute nodes. The problem is limited to the new node: communication between VMs on any two of the pre-existing hosts works correctly.

I'm using RDO Packstack with a VXLAN configuration (no DVR).

How can I check the iptables configuration to make sure everything is OK? Is there any rule that Packstack inserts into iptables on the controller node to allow inter-compute communication?
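A way to sketch that check (assuming root access; host names are placeholders): dump the full rule set on the new node and on a working compute node, then diff the dumps. When the OVS agent manages the firewall, Neutron's security-group rules live in chains prefixed `neutron-openvswi-`:

```shell
# Dump the complete rule set on this host for offline comparison.
# (sudo -n fails fast instead of prompting; adjust to your setup.)
sudo -n iptables-save > "/tmp/iptables-$(hostname).txt" 2>/dev/null || true

# Neutron's security-group rules (OVS agent, iptables hybrid firewall)
# live in chains prefixed neutron-openvswi-:
grep 'neutron-openvswi' "/tmp/iptables-$(hostname).txt" \
  || echo "no neutron chains captured (run as root on a compute node)"

# Copy the dumps to one machine and compare, e.g.:
#   diff /tmp/iptables-host1.txt /tmp/iptables-host4.txt
```

Diffing a broken host against a working one usually pinpoints a missing chain or rule faster than reading either dump on its own.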

UPDATE

I still can't get it to work, and I don't really know where to look. There are no errors in the log files!

UPDATE 2

How could I check the inter-host iptables rules that allow tunnel traffic? Do such rules exist?
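For the tunnel traffic itself: VXLAN is carried over UDP, port 4789 by default, so every host that terminates tunnels needs an ACCEPT rule for that port in its INPUT chain. A quick check (needs root; the example rule at the bottom is an illustration of the expected shape, not copied from your hosts):

```shell
# Look for a VXLAN rule in the INPUT chain (default VXLAN port is 4789).
sudo -n iptables -S INPUT 2>/dev/null | grep -E 'udp.*4789' \
  || echo "no udp/4789 rule visible (run as root on the host)"

# The rule Packstack normally provisions looks roughly like this:
#   -A INPUT -p udp -m udp --dport 4789 -j ACCEPT
```

If the rule is missing on exactly one host, that would match the symptom of traffic reaching the NIC but never being answered.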

UPDATE 3

In /etc/sysctl.conf I see this on the working host:

net.ipv4.tcp_keepalive_intvl=1
net.ipv4.tcp_keepalive_time=5
net.ipv4.tcp_keepalive_probes=5
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1


while I see this on the non-working one:

net.ipv4.tcp_keepalive_intvl=1
net.ipv4.tcp_keepalive_time=5
net.ipv4.tcp_keepalive_probes=5
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0


Could this be the issue? According to the official installation guide, the latter seems to be the correct configuration...
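One thing worth checking here: /etc/sysctl.conf only shows the intended values; they take effect after `sysctl -p` or a reboot. To compare what the kernels are actually enforcing (the rp_filter lines are the ones that differ between the two files; the keys below are standard Linux ones):

```shell
# Print the effective reverse-path-filter settings from /proc,
# which is what the kernel actually enforces right now.
for key in all default; do
  f=/proc/sys/net/ipv4/conf/$key/rp_filter
  printf 'net.ipv4.conf.%s.rp_filter = %s\n' \
    "$key" "$(cat "$f" 2>/dev/null || echo '?')"
done
```

Note that rp_filter=0 disables reverse-path filtering, which is what the installation guide asks for on nodes carrying tenant traffic, so the non-working host's file is, if anything, the more correct of the two; the interesting question is whether the working host's *runtime* value happens to differ from its file.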

UPDATE 4

Host1: working
Host4: not working
VM1 runs on Host1, VM4 on Host4


While troubleshooting, I see that traffic (a ping test) from VM1 to VM4 arrives at the physical interface on Host4, but I never see a reply. In the opposite test, pinging from VM4 to VM1, I see the traffic arriving at VM1, which correctly replies; I then see the reply on the Host1 interface and on the Host4 interface, but it never reaches VM4.
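The hop-by-hop trace above can be reproduced with tcpdump (interface names are placeholders; the instance's tap device can be found with `virsh domiflist <instance>` or in the output of `ovs-vsctl show`):

```shell
port=4789   # default VXLAN UDP port

# On Host4: does the encapsulated traffic arrive on / leave the physical NIC?
sudo -n timeout 2 tcpdump -c 10 -ni eth0 "udp port $port" 2>/dev/null || true

# On Host4: does the decapsulated ICMP reply ever reach the VM's tap device?
# (tapXXXXXXXX is a placeholder for the instance's tap interface.)
sudo -n timeout 2 tcpdump -c 10 -ni tapXXXXXXXX icmp 2>/dev/null || true
```

Seeing the packet on the NIC but not on the tap narrows the drop to the bridge/firewall layer on Host4, which is consistent with the iptables finding below.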

With iptables disabled on Host4, communication between the VMs works correctly.

How could I check the iptables configuration and have Neutron enforce the correct one?
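Neutron doesn't read iptables back; its L2 agent rewrites the security-group chains itself. A hedged sketch of forcing that on the broken host (the service name below is the usual RDO one, and the log path the RDO default — verify both locally, e.g. with `systemctl list-units | grep neutron`):

```shell
svc=neutron-openvswitch-agent          # typical RDO unit name; verify locally
sudo -n systemctl restart "$svc" 2>/dev/null \
  || echo "restart $svc manually (needs root)"

# Then look for firewall/iptables errors in the agent log:
sudo -n grep -iE 'error|iptables' /var/log/neutron/openvswitch-agent.log 2>/dev/null \
  || echo "no log access (run as root on the compute node)"
```

Restarting the agent makes it re-sync ports and reprogram the security-group rules, which should undo any manual iptables changes made while testing.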

Here is the iptables-save output on Host4, and here is the working host's. They are slightly different...