
Neutron, GRE tunnels, non-tenant IP address routing from VMs

asked 2013-09-12 10:32:05 -0500

slogan621

updated 2013-09-16 11:59:42 -0500

Update (response to first answer):

I gave this a quick try:

quantum net-create myNet --provider:network_type gre --provider:segmentation_id 2
quantum subnet-create b4673aff-999d-4e44-b679-ae112e7d0a46 10.0.0.0/20

(Note: I wanted to use a segmentation_id of 1, but the command failed and reported that 1 was already in use. I don't suppose it matters much.)
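(For anyone trying to reproduce this: a quick way to see which segmentation IDs are already taken is to list the networks and show their provider attributes. This assumes the Grizzly-era quantum client and admin credentials:)

quantum net-list
quantum net-show myNet     # provider:network_type and provider:segmentation_id appear here for admins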

I am still unable to escape the tenant VM. The route table in the VM was changed, with a default route to gateway 10.0.0.1 added. Interestingly, a Wireshark trace shows the VM ARPing unsuccessfully for this gateway at 10.0.0.1. Perhaps 10.0.0.1 did not get created for some reason (I can't ping 10.0.0.1 from a namespace, which suggests it was not created). Anyway, do you see anything wrong with the way I am creating the network and subnet? It is pretty much straight off of the documentation you pointed at.
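(In case it helps with debugging: one way to check whether 10.0.0.1 was really created is to look for a port holding that address and to peek inside the DHCP/router namespaces on the node running the agents. The qdhcp-/qrouter- names below are the usual conventions; substitute your own network/router IDs:)

quantum port-list | grep 10.0.0.1            # is the gateway address bound to any port?
sudo ip netns list                           # expect qdhcp-<net-id> and/or qrouter-<router-id>
sudo ip netns exec qdhcp-<net-id> ip addr    # does any interface in the namespace hold 10.0.0.1?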

After deleting the default route table entry, and adding a default route in the VM:

$ sudo route add default eth0

The pings worked. So it seems this issue is still about getting the route table properly set up.
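(For reference, the iproute2 equivalents would be roughly the following, assuming eth0 is the VM's tenant interface; the second form only makes sense once something actually answers at 10.0.0.1:)

$ sudo ip route add default dev eth0               # same effect as the route command above
$ sudo ip route add default via 10.0.0.1 dev eth0  # preferred once a gateway exists at 10.0.0.1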

Original:

I have a question related to routing from tenant VMs that I've not been able to answer for myself.

Assume a simple 2-node (Compute/Controller) Grizzly cluster that is configured with one private network that has a subnet of 10.0.0.0/20 and is running neutron. Further, assume that GRE tunnels are used to provide isolation. OVS is the neutron plugin in use.

Let's say I add a third node to the cluster. It is not an OpenStack node (i.e., not a compute node, network node, or controller node), but a computer running Linux. Further, assume it has the ability to create and terminate GRE tunnels to the other cluster members. Finally, assume that it acts as a gateway to a server with a non-tenant IP address (e.g., 192.168.4.15); that it, the compute node, and the controller node are in the 192.168.3.0/24 subnet; and that on the compute and controller nodes we've established GRE tunnels towards the gateway computer:

$ sudo ovs-vsctl add-port br-tun gre-4 -- set Interface gre-4 type=gre options:remote_ip=192.168.3.15 options:in_key=flow options:out_key=flow

Also, tunnels exist from the gateway computer to the controller and compute nodes, so with regard to tunnels we have a full mesh. You can also assume that the device at 192.168.4.15 sits on a separate NIC on the gateway server, and that mechanisms are in place for taking packets from 192.168.4.15, encapsulating them, and pushing them out to the rest of the cluster via the GRE tunnels, as well as for taking traffic inbound from the cluster via the GRE tunnels, de-encapsulating it, and routing it to 192.168.4.15.
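(For completeness, the matching ports on the gateway computer would look something like this, assuming the gateway box also runs Open vSwitch with a br-tun bridge; 192.168.3.10 and 192.168.3.11 are placeholders for the controller and compute addresses in my 192.168.3.0/24 subnet:)

$ sudo ovs-vsctl add-port br-tun gre-ctrl -- set Interface gre-ctrl type=gre options:remote_ip=192.168.3.10 options:in_key=flow options:out_key=flow
$ sudo ovs-vsctl add-port br-tun gre-comp -- set Interface gre-comp type=gre options:remote_ip=192.168.3.11 options:in_key=flow options:out_key=flow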

I've been using a combination of devstack and command line to realize the ...


1 answer


answered 2013-09-13 07:06:08 -0500

darragh-oreilly

updated 2013-09-14 02:42:44 -0500

The provider network extension allows instances to be attached to physical VLANs via bridges. Then you do not need the l3-agent (no floating IPs or SNAT gateway).

Regarding host routes: a Cirros instance will not process any pushed out by DHCP - see https://bugs.launchpad.net/cirros/+bug/1190372 .
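A minimal sketch of that workflow (network and subnet names, physnet1, the VLAN id, and the subnet range are placeholders; physnet1 has to match a bridge_mappings entry in the OVS plugin config):

quantum net-create pnet1 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000
quantum subnet-create pnet1 192.168.100.0/24 --gateway 192.168.100.1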


Comments

Thanks for the quick response! However, it didn't change things. The routing table didn't change, so the packets still have no way out. Updated the question with more details of the results.

slogan621 ( 2013-09-13 16:08:58 -0500 )

I was answering the specific question at the end of your post. By provider network I mean type vlan or flat. Type gre is not a provider network. It was just a suggestion because you do not seem to want to use the l3-agent for some reason.

darragh-oreilly ( 2013-09-14 02:44:39 -0500 )

Thanks. Do you think http://docs.openstack.org/trunk/openstack-network/admin/content/provider_attributes.html has a documentation bug (it documents using gre as a provider extension)? Same with http://docs.openstack.org/trunk/openstack-network/admin/content/provider_api_workflow.html

slogan621 ( 2013-09-16 11:57:10 -0500 )

I think so, but I'm not sure. The definition of provider networks here makes sense: http://docs.openstack.org/trunk/openstack-network/admin/content/provider_networks.html . I don't know why a provider would ever need to pick a particular segmentation_id for a gre network.

darragh-oreilly ( 2013-09-17 04:06:25 -0500 )

I don't think networks of type gre are provider networks because they do not map directly to physical networks - they are overlay networks on the L3 network. I think the attribute prefix "provider:" is a poor choice because it suggests all types are provider networks.

darragh-oreilly ( 2013-09-17 04:07:50 -0500 )
