
Can't ping gateway of tenant router....

asked 2012-11-26 04:02:06 -0500 by dsmith4546

Just installed Folsom on Ubuntu 12.04, using this guide: http://docs.openstack.org/folsom/basic-install/content/basic-install_intro.html

I have one controller and one compute node. I used the script provided with the install guide and created a private network within a tenant. The L3 agent is on the controller.

When I try to ping the gateway address of the private subnet, the ARP request gets to eth1 on the controller and then to the bridge that eth1 is in, but the "qr-1549a07f-3a" interface never responds to it. This is probably something simple, but I am out of ideas for tonight. qr-1549a07f-3a is up. Any help would be appreciated.

root@controller:~# ifconfig qr-1549a07f-3a
qr-1549a07f-3a Link encap:Ethernet  HWaddr 32:a4:d3:c9:72:e8
          inet addr:10.5.5.1  Bcast:10.5.5.255  Mask:255.255.255.0
          inet6 addr: fe80::30a4:d3ff:fec9:72e8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:468 (468.0 B)

root@controller:~# ovs-vsctl show
195ff79e-e01a-4e93-84bc-868d8782f284
    Bridge br-ex
        Port "qg-2b9a29bb-4b"
            Interface "qg-2b9a29bb-4b"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "tap4265fa27-08"
            tag: 1
            Interface "tap4265fa27-08"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port br-int
            Interface br-int
                type: internal
        Port "qr-1549a07f-3a"
            tag: 1
            Interface "qr-1549a07f-3a"
                type: internal
    ovs_version: "1.4.0+build0"
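
A minimal check, assuming tcpdump is available on the controller, to see whether the ARP request ever reaches the router port itself:

# On the controller: watch for the ARP request on the physical NIC and on the router's qr- port
# (run each capture in a separate terminal while pinging 10.5.5.1)
tcpdump -ni eth1 arp
tcpdump -ni qr-1549a07f-3a arp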


19 answers

answered 2012-11-26 04:16:49 -0500 by danwent

Are you running the quantum OVS agent on both the compute node and the controller node?

If you are using tunnel mode, I would expect to see a tunnel port in your ovs-vsctl output.
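
A quick way to check both points, using only the paths and names that appear elsewhere in this thread:

# Is the OVS agent running on the controller and on the compute node?
ps -ef | grep quantum-openvswitch-agent

# Is tunneling actually enabled in the plugin config on each node?
grep -E 'enable_tunneling|local_ip' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

# If tunnels are up, ovs-vsctl show should list a br-tun bridge with gre-* ports
ovs-vsctl show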

answered 2012-11-26 04:50:31 -0500 by dsmith4546

Ya, I am using tunneling, and you are right: I do not see a tunnel established. I'm at a loss, and I'm not seeing anything useful in /var/log/quantum/openvswitch-agent.log on either the controller or the compute node.

quantum   2842     1  1 14:55 ?        00:05:57 python /usr/bin/quantum-openvswitch-agent --config-file=/etc/quantum/quantum.conf --config-file=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini --log-file=/var/log/quantum/openvswitch-agent.log
root      3852  6831  0 23:30 pts/2    00:00:00 grep --color=auto ovs

^That's running on the controller

root@ComputeNode1:~# service quantum-plugin-openvswitch-agent start
quantum-plugin-openvswitch-agent start/running, process 26738

^Compute Node


[OVS]
network_vlan_ranges =
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.1.1.2
enable_tunneling = True

[AGENT]
root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf

^Output from /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini on the compute node
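
One thing worth double-checking: local_ip is the GRE tunnel endpoint, so it has to be an address that is actually configured on that node and reachable from the other node. A minimal sanity check on the compute node, assuming 10.1.1.0/24 is the tunnel network here:

# Does 10.1.1.2 exist on this node, and can it reach the controller's endpoint?
ip addr | grep '10\.1\.1\.2'
ping -c 3 10.1.1.1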

answered 2012-11-26 05:08:19 -0500 by dsmith4546

[OVS]
tenant_network_type = gre
network_vlan_ranges =
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.1.1.1
enable_tunneling = True

[AGENT]
root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf

^Output from /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini on the controller

The local IP was wrong. I changed it and restarted the services on both the compute node and the controller, but I still don't see the tunnel come up. I'm at a loss, though I'm getting close.
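
After changing local_ip the agents usually need a restart before the GRE ports appear; a minimal sketch of what to run and what to look for, assuming the Ubuntu service name used elsewhere in this thread:

# On each node, restart the OVS agent after editing ovs_quantum_plugin.ini
service quantum-plugin-openvswitch-agent restart

# Each node's br-tun should then gain a gre-* port whose remote_ip is the other node's local_ip
ovs-vsctl show | grep -A 3 'Port "gre'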

answered 2012-11-26 08:50:07 -0500 by gongysh

Bridge br-int on the controller node should have a patch-tun port (or similar) in it. In addition, we should see a br-tun bridge on your controller node as well.

We need to create br-tun manually:

ovs-vsctl add-br br-tun
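
If br-tun exists but br-int still has no patch port, a minimal sketch of wiring the two bridges by hand (an assumption for this setup; normally quantum-openvswitch-agent creates these patch ports itself once tunneling is enabled):

# Patch ports between br-int and br-tun (the agent normally creates these)
ovs-vsctl add-port br-int patch-tun -- set interface patch-tun type=patch options:peer=patch-int
ovs-vsctl add-port br-tun patch-int -- set interface patch-int type=patch options:peer=patch-tun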

answered 2012-11-28 01:29:33 -0500 by dsmith4546

Alright, that didn't work, but that's OK. Before I go adding things again, I have a question: if I were to ping the gateway that lives on the L3 agent node, what is the actual path of interfaces, both physical and logical, that the traffic goes through? Should br-tun exist as a bridge on both the ComputeNode and the node running the L3 agent? I tried adding it and it didn't work; I restarted the agents and it still didn't work. If someone could amend the diagram I have below to reflect the correct flow of traffic, I would appreciate it.

ComputeNode                                       Node running L3 agent
(vNIC ----> br-int ----> eth1) ----- GRE TUNNEL ----- (eth1 ----> br-int ----> qr-1549a07f-3a)

answered 2012-11-28 01:34:27 -0500 by dsmith4546

The interfaces on the left side are on the ComputeNode, and the ones on the right side are on the node running the L3 agent.


answered 2012-11-28 07:56:14 -0500

I am facing the same problem here. I also use the GRE tunnel approach, but I have a separate controller node and network node, and the VM can ping neither the gateway nor an external IP.

Even from inside the router namespace, it is impossible to ping the VMs:

root@netnode:/etc/quantum# ip netns exec qrouter-a6298ee9-2857-4067-81a8-f050fea7e24d bash
root@netnode:/etc/quantum# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:784 (784.0 B)  TX bytes:784 (784.0 B)

qg-3aa46fc9-6a Link encap:Ethernet  HWaddr fa:16:3e:db:06:1f
          inet addr:192.168.3.163  Bcast:192.168.3.175  Mask:255.255.255.240
          inet6 addr: fe80::f816:3eff:fedb:61f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1511 errors:0 dropped:53 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:161443 (161.4 KB)  TX bytes:1152 (1.1 KB)

qr-4d2e8b60-33 Link encap:Ethernet  HWaddr fa:16:3e:f2:1c:48
          inet addr:10.0.20.1  Bcast:10.0.20.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fef2:1c48/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:308 (308.0 B)  TX bytes:468 (468.0 B)

qr-cd43bdb6-1c Link encap:Ethernet  HWaddr fa:16:3e:11:52:0b
          inet addr:10.0.10.1  Bcast:10.0.10.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe11:520b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:308 (308.0 B)  TX bytes:468 (468.0 B)

root@netnode:/etc/quantum# ping 10.0.20.3
PING 10.0.20.3 (10.0.20.3) 56(84) bytes of data.
From 10.0.20.1 icmp_seq=1 Destination Host Unreachable
From 10.0.20.1 icmp_seq=2 Destination Host Unreachable
From 10.0.20.1 icmp_seq=3 Destination Host Unreachable
From 10.0.20.1 icmp_seq=4 Destination Host Unreachable
From 10.0.20.1 icmp_seq=5 Destination Host Unreachable
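
A hedged suggestion for narrowing this down, assuming tcpdump is installed on the network node: if the ARP request for 10.0.20.3 leaves the qr- port but nothing ever comes back, the break is on the br-int/br-tun/GRE path rather than inside the router namespace itself.

# Watch ARP on the subnet's qr- port inside the namespace while pinging the VM
ip netns exec qrouter-a6298ee9-2857-4067-81a8-f050fea7e24d tcpdump -ni qr-4d2e8b60-33 arp

# Check whether the namespace ever learned the VM's MAC
ip netns exec qrouter-a6298ee9-2857-4067-81a8-f050fea7e24d ip neigh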


answered 2012-11-28 12:37:03 -0500

I fixed my problem by removing an incorrect IP, "10.0.0.3", from br-tun:

root@node1:~# ovs-vsctl show
3b78ade3-870b-40d1-80cb-dc8be59a2b85
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.0.0.3"}
        Port "gre-9"
            Interface "gre-9"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.3.142"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}

The following command fixed the problem:

ovs-vsctl del-port br-tun gre-2
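
For anyone hitting the same thing: the stale port is the gre-* entry whose remote_ip does not match the other node's local_ip. A minimal way to spot it before deleting, assuming the same ovs-vsctl output format as above:

# List each gre-* port together with its remote_ip, then remove the stale one
ovs-vsctl show | grep -B 3 remote_ip
ovs-vsctl del-port br-tun gre-2    # substitute the port that carries the wrong remote_ip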

answered 2012-11-28 12:54:20 -0500 by gongysh

It should be:

Compute node:  vNIC -> br-int -> patch-tun -> patch-int -> gre
L3 agent node: gre -> patch-int -> patch-tun -> br-int -> qr-1549a07f-3a
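
A quick way to confirm those ports exist on each node, using ovs-vsctl as elsewhere in this thread:

# br-int should list patch-tun; br-tun should list patch-int and the gre-* ports
ovs-vsctl list-ports br-int
ovs-vsctl list-ports br-tun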

answered 2012-11-30 20:41:00 -0500 by dsmith4546

Ok, this is something with the agent on the compute node. To install the Open vSwitch plugin agent on the compute node, is "apt-get install -y quantum-plugin-openvswitch-agent" the only package I need to install? When I installed other quantum packages, I noticed a python-pyparsing package was installed. Do I need that to successfully install and run the Open vSwitch agent? For some reason, when I do "ps -ef | grep quantum" on the compute node, I'm not seeing it running...
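
Installing quantum-plugin-openvswitch-agent should be enough on the compute node; apt normally pulls in whatever Python dependencies the package declares, so python-pyparsing would not need to be installed by hand if it were required. A minimal sketch for finding out why the agent is not running, assuming the stock Ubuntu packaging and the log path used earlier in this thread:

# Is the package installed, and did the agent start?
dpkg -l | grep quantum-plugin-openvswitch-agent
service quantum-plugin-openvswitch-agent status

# If it exits right away, the reason is usually in the agent log
tail -n 50 /var/log/quantum/openvswitch-agent.log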
