
External networking problem

asked 2013-06-13 17:03:24 -0500

diul

If anyone here is Italian, you can read the problem in Italian after the English description :D

Hi guys, I'm setting up an OpenStack installation for my university (Federico II di Napoli) as the final project for my degree, and I've run into a problem with Quantum and connectivity. I need help.

My setup is based on three physical servers, all running CentOS 6.3, with these roles:

- 1 server (controller node) offering all services, including Cinder and quantum-server with the OVS plugin. On this server I have 4 NICs, of which I use:
  - eth0, with a public IP, also used as the management network; that is, this public IP is also how the other nodes reach the controller
  - eth1, configured with no IP but trunked with a pool of 16 VLAN IDs, intended for the "local" networking between VMs
  - eth2, bridged with br-ex, with no IP manually configured
- 2 servers (compute nodes) offering nova-compute and the Quantum OVS plugin, each with 2 NICs:
  - eth0, set with a public IP
  - eth1, configured like the controller's, trunked with the same 16 VLAN IDs

I would also like to use quantum-l3-agent to route the VMs to the internet, with no quantum-dhcp-agent, because of a problem with "use_namespaces" that forces me to set it to "False" on CentOS; without namespaces, only one of the l3 and dhcp agents can run on the same node.

My problem is that I'm not able to get the VMs onto the internet: they can't ping to or from the internet, although they can ping each other. I have set up Quantum and the l3-agent in this way (a command sketch follows at the end of this post):

- A "private network" of 10.0.0.0/24, attached to a router with "router-interface-add Router PrivateNetwork" (the router takes the 10.0.0.1 IP), plus "router-gateway-set Router PublicNetwork".
- A "public network" of the same type as the physical network of the three servers. I mean: on their eth0, the controller and compute nodes have (for example) PublicIP.3, PublicIP.4 and PublicIP.5 in the subnet PublicIP.0/24, with gateway PublicIP.1. The allocation pool for the public network is "start=PublicIP.8,end=PublicIP.23", with gateway PublicIP.1 on PublicIP.0/24.

The problem is that the pool of public IPs used for the VMs is not reachable from the internet, and the VMs can't ping, e.g., Google's DNS (8.8.8.8). I have tried creating the public network in Quantum with "--provider:physical_network" mapped to either eth1 or eth2 on the controller, and with both the "local" and "VLAN" network types.

I have also tried using a single network that assigns the VMs an IP taken from the pool "start=PublicIP.8,end=PublicIP.23", configuring each VM by hand (since there is no DHCP service) with "ifconfig eth0 PublicIP.9 netmask 255.255.255.0" and "route add default gw PublicIP.1". The best result I have obtained is that the VMs can ping to and from the physical network, but I can't get past the physical gateway "PublicIP.1". Probably I do ... (more)
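To make that concrete, the Quantum CLI sequence this setup corresponds to is roughly the following (network, subnet and router names are illustrative, not necessarily the exact ones I used):

quantum net-create Private
quantum subnet-create --name private-sub Private 10.0.0.0/24
quantum net-create Public --router:external=True --provider:network_type flat --provider:physical_network eth2
quantum subnet-create --name public-sub --allocation-pool start=PublicIP.8,end=PublicIP.23 --gateway PublicIP.1 --disable-dhcp Public PublicIP.0/24
quantum router-create Router
quantum router-interface-add Router private-sub
quantum router-gateway-set Router Public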


11 answers


answered 2013-06-14 13:45:26 -0500

diul

I'm able to ping the router interface (10.0.0.1) from the VMs, but I can't tell whether the SNAT rules are OK; I have no experience with that. Can you tell me how to check it?

For more information, this is my current situation (the "ovs-vsctl show" output is as shown in my other post):

nova list

+--------------------------------------+--------+--------+------------------+
| ID                                   | Name   | Status | Networks         |
+--------------------------------------+--------+--------+------------------+
| 99914396-8c52-4800-a542-209d64aa8d39 | gfgfgf | ACTIVE | Privata=10.0.0.2 |
| b1d49c90-bd2a-4fd8-8838-20e97310508c | gfgfgf | ACTIVE | Privata=10.0.0.3 |
+--------------------------------------+--------+--------+------------------+

quantum net-list

+--------------------------------------+---------+--------------------------------------+
| id                                   | name    | subnets                              |
+--------------------------------------+---------+--------------------------------------+
| 8c7fb762-eddd-4dd6-945c-7282d8982a1e | Rete    | 7b4cf538-4c68-4c64-a70c-368d3fffd3eb |
| d8a6bef4-38e0-45b2-807b-490071ebe908 | Privata | cd07ba12-acdf-412d-aeef-ec4b3222deb9 |
+--------------------------------------+---------+--------------------------------------+

quantum subnet-list

+--------------------------------------+------+---------------+--------------------------------------------------+
| id                                   | name | cidr          | allocation_pools                                 |
+--------------------------------------+------+---------------+--------------------------------------------------+
| cd07ba12-acdf-412d-aeef-ec4b3222deb9 |      | 10.0.0.0/24   | {"start": "10.0.0.2", "end": "10.0.0.254"}       |
| 7b4cf538-4c68-4c64-a70c-368d3fffd3eb |      | PublicIP.0/24 | {"start": "PublicIP.208", "end": "PublicIP.223"} |
+--------------------------------------+------+---------------+--------------------------------------------------+

quantum router-list

+--------------------------------------+---------+--------------------------------------------------------+
| id                                   | name    | external_gateway_info                                  |
+--------------------------------------+---------+--------------------------------------------------------+
| 013f03fa-e51d-42a9-8e17-1b1d475b1558 | RouterP | {"network_id": "8c7fb762-eddd-4dd6-945c-7282d8982a1e"} |
+--------------------------------------+---------+--------------------------------------------------------+

quantum port-list

+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| d2665361-33b4-4cf4-8f06-a53722abf03a |      | fa:16:3e:b0:d6:37 | {"subnet_id": "cd07ba12-acdf-412d-aeef-ec4b3222deb9", "ip_address": "10.0.0.1"}     |
| 6ad484d8-337a-4adb-a8d3-536da9bb5325 |      | fa:16:3e:ed:82:59 | {"subnet_id": "7b4cf538-4c68-4c64-a70c-368d3fffd3eb", "ip_address": "PublicIP.208"} |
| ee457aec-13a6-4266-89c6-8adc2def4304 |      | fa:16:3e:9c:b2:5e | {"subnet_id": "cd07ba12-acdf-412d-aeef-ec4b3222deb9", "ip_address": "10.0.0.2"}     |
| 3e2ca6a4-9147-4972-b2f2-eabf9cd9f05b |      | fa:16:3e:44:e8:6d | {"subnet_id": "cd07ba12-acdf-412d-aeef-ec4b3222deb9", "ip_address": "10.0.0.3"}     |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

quantum net-show Rete

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 8c7fb762-eddd-4dd6-945c-7282d8982a1e |
| name                      | Rete                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | eth2                                 |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 7b4cf538-4c68-4c64-a70c-368d3fffd3eb |
| tenant_id                 | fd547431c9b14ae387ecfc563f72ce5d     |
+---------------------------+--------------------------------------+

quantum net-show Privata

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | d8a6bef4-38e0-45b2-807b-490071ebe908 |
| name                      | Privata                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | eth1                                 |
| provider:segmentation_id  | 101                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | cd07ba12-acdf-412d-aeef-ec4b3222deb9 |
| tenant_id                 | fd547431c9b14ae387ecfc563f72ce5d     |
+---------------------------+--------------------------------------+

quantum router-show RouterP

+-----------------------+--------------------------------------------------------+
| Field                 | Value                                                  |
+-----------------------+--------------------------------------------------------+
| admin_state_up        | True                                                   |
| external_gateway_info | {"network_id": "8c7fb762-eddd-4dd6-945c-7282d8982a1e"} |
| id                    | 013f03fa-e51d-42a9-8e17-1b1d475b1558                   |
| name                  | RouterP                                                |
| status                | ACTIVE                                                 |
| tenant_id             | fd547431c9b14ae387ecfc563f72ce5d                       |
+-----------------------+--------------------------------------------------------+

l3_agent.ini with:

interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = False
router_id = 013f03fa-e51d-42a9-8e17-1b1d475b1558
gateway_external_network_id = 8c7fb762-eddd-4dd6-945c-7282d8982a1e
handle_internal_only_routers = True
external_network_bridge = br-ex

ifconfig returns:

qr-d2665361-33 Link encap:Ethernet  HWaddr FA:16:3E:B0:D6:37
               inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0

qg-6ad484d8-33 Link encap:Ethernet  HWaddr FA:16:3E:ED:82:59
               inet addr:PublicIP.208  Bcast:PublicIP.255  Mask:255.255.255.0

eth2 and br-ex have no IP.

The "tap" virtual interfaces are not listed becouse the VMs actually run on the compute nodes,

route -n
Kernel IP routing table
Destination   Gateway      Genmask         Flags  Metric  Ref  Use  Iface
PublicIP.0    0.0.0.0      255.255.255.0   U      0       0    0    qg-6ad484d8-33
PublicIP.0    0.0.0.0      255.255.255.0   U      1       0    0    eth0
10.0.0.0      0.0.0.0      255.255.255.0   U      0       0    0    qr-d2665361-33
0.0.0.0       PublicIP.1   0.0.0.0         UG     0       0    0    qg-6ad484d8-33
0.0.0.0       PublicIP.1   0.0.0.0         UG     0       0    0    eth0

Obviously, the l3-agent is running and the dhcp-agent is not.


answered 2013-06-14 16:30:18 -0500

diul

Is changing the management network the only solution? Could I solve the problem by bridging br-ex with eth0 (the public and management network) instead of with eth2?


answered 2013-06-14 02:26:56 -0500

gongysh

Have you enabled security groups? If so, you can first disable them and try again. If not, can you paste the output of ovs-vsctl show on both the compute and controller nodes?
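One way to take security group filtering out of the picture temporarily, assuming the stock Folsom/Grizzly file layout (adjust the paths to your install), is to switch to the no-op firewall driver and restart the services:

In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
[SECURITYGROUP]
firewall_driver = quantum.agent.firewall.NoopFirewallDriver

In /etc/nova/nova.conf:
firewall_driver = nova.virt.firewall.NoopFirewallDriver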


answered 2013-06-14 08:09:45 -0500

diul

I have a "default" security group with these rules:

+-------------+-----------+---------+---------------+--------------+
| IP Protocol | From Port | To Port | IP Range      | Source Group |
+-------------+-----------+---------+---------------+--------------+
| icmp        | -1        | -1      |               | default      |
| icmp        | -1        | -1      | 10.0.0.0/24   |              |
| icmp        | -1        | -1      | 10.0.1.0/24   |              |
| icmp        | -1        | -1      | PublicIP.0/24 |              |
+-------------+-----------+---------+---------------+--------------+
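(For reference, rules like these are normally created with commands of this form; I am reconstructing the invocations rather than copying them from my shell history:

nova secgroup-add-group-rule default default icmp -1 -1
nova secgroup-add-rule default icmp -1 -1 10.0.0.0/24
nova secgroup-add-rule default icmp -1 -1 10.0.1.0/24
nova secgroup-add-rule default icmp -1 -1 PublicIP.0/24
)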

ovs-vsctl show results:

Controller node:

    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex

Compute1 node:

    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal

Compute2 node:

    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge "virbr0"
        Port "virbr0"
            Interface "virbr0"
                type: internal
        Port "virbr0-nic"
            Interface "virbr0-nic"
    Bridge br-int
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal


answered 2013-06-14 12:10:28 -0500

salvatore-orlando

Hi diul,

Your networking might be broken in several places:

1) If you're unable to ping 10.0.0.1 from your instances, then perhaps your l3 agent is not working properly. There can be multiple reasons for this: it might not be receiving notifications from the quantum server, or quantum-rootwrap might be complaining, for instance. The logs will tell you more.

2) If you're not able to ping 10.0.0.1 but you can see the tap interface on the network node, then the issue might be on the wire. If you're doing isolation with VLANs, ensure the physical switch ports are trunked. If you're doing GRE, ensure the local_ip for each host is correctly configured (this is the IP where the GRE tunnels are terminated). It also won't hurt to check the IP configuration of the tap interface corresponding to the internal router interface.

3) If you can ping 10.0.0.1, then the first thing to check is the SNAT rule for going to your external gateway. This rule should SNAT 10.0.0.0/24 to the IP address on the external network which has been picked for your router.

4) If you can ping 10.0.0.1, and the SNAT rules are fine, then perhaps the issue is in br-ex or in the nexthop for 0.0.0.0/0. One test you can do is to manually configure an IP in the public network range on some other host and try to ping the external gateway of your router. If it does not reply, you might want to check the connection between br-ex and the physical network again. If you can reach your external network this way, you should check that the route for 0.0.0.0/0 is correctly configured and that its nexthop is reachable.

A few command sketches for these checks follow below.
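(Log paths assume the stock CentOS/RDO packaging; the iptables chain name is the one the l3 agent creates:

1) tail -n 100 /var/log/quantum/l3-agent.log        # notification/rootwrap errors
2) tcpdump -e -n -i eth1 vlan                       # tagged frames actually arriving on the data NIC
3) iptables -t nat -L quantum-l3-agent-snat -n -v   # the SNAT rule and its packet counters
4) ping PublicIP.208                                # from another host given a spare public IP
)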

Hope this helps,
Salvatore


answered 2013-06-14 14:21:54 -0500

diul

Update to my previous message.

On the controller I obtain this:

Chain quantum-l3-agent-snat (1 references)
 pkts bytes target                       prot opt in  out  source       destination
   20  1265 quantum-l3-agent-float-snat  all  --  *   *    0.0.0.0/0    0.0.0.0/0
    0     0 SNAT                         all  --  *   *    10.0.0.0/24  0.0.0.0/0    to:PublicIP.208
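(Output as produced by a command along the lines of "iptables -t nat -L quantum-l3-agent-snat -n -v".)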

And that PublicIP.208 is pingable from one of the compute nodes, but not from the internet. Moreover, after the l3-agent starts, the public IP on eth0 is no longer reachable from the internet either.


answered 2013-06-14 15:56:25 -0500

salvatore-orlando

It seems the problem should then be looked for in the host networking: you appear to be using the same IP address space for the management and data networks.

This might confuse the network node, as it would not know where packets need to be forwarded. For instance, a packet to http://www.google.com might be sent out either qg-xxxx or eth0, and I think you always want it forwarded to qg-xxxx.

This is evident from your routing table:

route -n
Kernel IP routing table
Destination   Gateway      Genmask         Flags  Metric  Ref  Use  Iface
PublicIP.0    0.0.0.0      255.255.255.0   U      0       0    0    qg-6ad484d8-33
PublicIP.0    0.0.0.0      255.255.255.0   U      1       0    0    eth0
10.0.0.0      0.0.0.0      255.255.255.0   U      0       0    0    qr-d2665361-33
0.0.0.0       PublicIP.1   0.0.0.0         UG     0       0    0    qg-6ad484d8-33
0.0.0.0       PublicIP.1   0.0.0.0         UG     0       0    0    eth0

I totally understand that you are not able to partition the public address space; it's already a good thing your advisor gave you a few addresses to play with. Perhaps moving your management network to a private address space (something like 192.168.0.0/24) would be a good solution.
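For instance, each node's management NIC could be reconfigured along these lines (CentOS ifcfg syntax; the addresses are only an example):

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
# .4 and .5 on the other two nodes:
IPADDR=192.168.0.3
NETMASK=255.255.255.0

followed by a network restart and updating whatever addresses the other nodes use to reach the controller.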

This might also explain why the public IP can't be pinged anymore after you start the l3 agent.


answered 2013-06-14 16:36:08 -0500

salvatore-orlando

I think so, provided that you assign the quantum external network a CIDR which does not overlap with the IP addresses of the management network. Otherwise, you might end up creating a floating IP which 'steals' one of your management addresses.
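Concretely, the overlap is avoided by giving the external subnet an allocation pool that excludes every management address, e.g. (illustrative values mirroring the ranges already shown above):

quantum subnet-create --name public-sub --gateway PublicIP.1 --allocation-pool start=PublicIP.208,end=PublicIP.223 --disable-dhcp Rete PublicIP.0/24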

However, it is common practice to keep the management and tenant traffic networks separate.


answered 2013-06-14 19:02:25 -0500

diul

I know, but when I started I had no experience with Linux and OpenStack, and this setup was suggested to me.

Other questions before I try:

- If I bridge br-ex with eth0 instead of eth2, should I set any public IP on br-ex or eth0 before starting the l3-agent, or leave them without an IP? What I mean is that I want to keep the controller node reachable on its current eth0 IP, but that IP is outside the range I reserved for Quantum: eth0 has PublicIP.31, while the Quantum range goes from PublicIP.208 to PublicIP.223. If I do not set PublicIP.31 myself, it will never be assigned by Quantum, and the compute nodes will no longer be able to reach the controller.
- I don't actually need eth0 to stay on the internet; I only need its current IP to be reachable from the compute nodes. Could I simply solve this by deleting some routing rules, so that eth0 remains reachable from the compute nodes but no longer from the internet, and qg-6ad484d8-33 remains the only interface reachable from the internet, with no conflicts with eth0?

Hope I'm clear. Thanks for the help.



answered 2013-06-14 23:51:16 -0500

salvatore-orlando

If you connect br-ex to eth0, you should be able to assign your 'management' public IP to br-ex directly. You will then probably need to remove the default route for eth0 as well.
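A rough sketch of that change (run it from the console, since it interrupts connectivity on eth0; PublicIP.31 stands for your current management address):

ovs-vsctl add-port br-ex eth0
ip addr flush dev eth0                          # removes eth0's address and its routes
ip addr add PublicIP.31/24 dev br-ex
ip link set br-ex up
ip route add default via PublicIP.1 dev br-ex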

The bottom line, however, is: if you don't need to keep eth0 on the internet, why are you insisting on giving it a publicly routable IP address?

