How does an ICMP packet travel across two compute nodes without br-int and br-tun running?

Hi All,

I have a rather peculiar problem here. There is nothing wrong with my cloud as such, but with my understanding of it, and I need your help to clear that up. I have been working with OpenStack for 15 months now.

I have a 4-node setup of OpenStack RDO Kilo: one controller, one network node, and two compute nodes.
These are RHEL 7.1 VMs running on my laptop, set up just to understand the packet flow. Each VM has only one NIC (vNIC). I wanted to create private networks, so I configured br-ex and attached eth0 to it; I did this on the network node and on each compute node. I then created one instance on each compute node. The instances are on different networks: instance 1 is on 30.30.30.0/28 and instance 2 is on 40.40.40.0/28. The tunneling protocol is VXLAN. The two instances can ping each other.
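
For context, the layout above was created roughly as follows. This is a sketch, not my exact commands: the names net1/net2/vm1/vm2, the image and flavor, and the NET*_UUID placeholders are all illustrative.

```shell
# Two tenant networks, one subnet each (names are placeholders)
neutron net-create net1
neutron subnet-create net1 30.30.30.0/28 --name subnet1
neutron net-create net2
neutron subnet-create net2 40.40.40.0/28 --name subnet2

# One instance per compute node, each attached to its own network
# (substitute the real network UUIDs from `neutron net-list`)
nova boot --image cirros --flavor m1.tiny \
    --nic net-id=NET1_UUID \
    --availability-zone nova:compute1 vm1
nova boot --image cirros --flavor m1.tiny \
    --nic net-id=NET2_UUID \
    --availability-zone nova:compute2 vm2
```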

My query is this: according to the guide at http://docs.openstack.org/kilo/networking-guide/scenario_legacy_ovs.html#packet-flow, the ICMP packets should flow as:
VM1 ==> tap device ==> Linux bridge ==> br-int ==> br-tun ==> eth0 (br-ex), then to the network node's br-tun ==> br-int, where they should get routed, and on to compute node 2 in the same but reverse sequence.
But in my case, both br-tun and br-int are down on every node (compute1, compute2, and the network node), and the VMs are still pinging. When I tried to trace the ICMP traffic on the network node (tcpdump -i any), nothing appeared. Similarly, running tcpdump on the physical interfaces of the compute nodes showed nothing either.
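
For anyone reproducing the trace, these are the checks I would run on each node (a sketch, assuming the standard Kilo OVS agent layout; the interface name eth0 may differ on your hosts):

```shell
# Confirm which bridges and ports Open vSwitch actually has on this node
ovs-vsctl show

# Check the administrative state of the bridge interfaces
ip link show br-int
ip link show br-tun

# VXLAN wraps tenant traffic in UDP (destination port 4789), so on the
# physical interface you must capture the encapsulated packets rather
# than filtering for raw ICMP
tcpdump -n -i eth0 udp port 4789
```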

Does anyone have any idea about this? Am I missing something here?

I have attached all the relevant files here, but please feel free to ask for anything else if required.
[ml2_conf.png](/upfiles/14763636582999503.png) [ovs_neutron_plugin_networkNode.png](/upfiles/1476363674919064.png)
[ovs_neutron_plugin_compute1.png](/upfiles/1476363705464284.png)
[ovs_neutron_plugin_compute2.png](/upfiles/14763637202517886.png)
[config_file.png](/upfiles/14763637484449715.png)
