Connect to tenant subnets outside OpenStack

asked 2016-11-03 08:23:37 -0600 by Anonymous

updated 2016-11-07 07:52:13 -0600 by rbowen

Hi!

I have a question regarding networking and tenant networks in OpenStack. We are using tenant networks (with GRE) in some cases. One example is between application and database servers, where the traffic and the database instances should not be accessible from the outside. We are also trying to deploy LBaaS with Octavia, and its management network is going to be a tenant network as well.

The issue is that we are provisioning our instances with Ansible. This works great when an instance has an IP address in a provider network: we can SSH into the instance from a remote machine outside OpenStack and do whatever we want. But this doesn't work on instances that only have an interface in a tenant network, since that network is "virtual". The same goes for the load balancer with Octavia: an Octavia process needs to talk to an API on the load balancer instance, but it can't, since the instance is only reachable on the management network.
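
To illustrate (the addresses and user are made up): an ad-hoc Ansible ping works against the provider-network address but not against the tenant-network one:

    # Reachable: provider network address, routed from outside OpenStack
    ansible all -i "203.0.113.10," -u ubuntu -m ping

    # Unreachable: address on a GRE tenant network, no route from outside
    ansible all -i "10.0.1.5," -u ubuntu -m ping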

I was looking at how Octavia is implemented in devstack (https://github.com/openstack/octavia/...), and there they create a port in both neutron and Open vSwitch. The host then gets an interface that receives an IP address from a DHCP agent and can communicate within the tenant subnet; my rough reading of that plugin is sketched below. Does this mean that I have to run everything on the networking node (where Open vSwitch is running)?
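
This is roughly what I understood the devstack plugin.sh to be doing, paraphrased; the interface name o-hm0 and the awk field positions are my assumptions from reading it, not verbatim code:

    # 1. Create a neutron port on the management network and grab its ID/MAC.
    MGMT_PORT_ID=$(neutron port-create --name octavia-health-manager-port \
        lb-mgmt-net | awk '/ id / {print $4}')
    MGMT_PORT_MAC=$(neutron port-show $MGMT_PORT_ID | awk '/ mac_address / {print $4}')

    # 2. Plug an internal OVS port into br-int and tag it with the neutron
    #    port's ID and MAC so the L2 agent wires it into the tenant network.
    sudo ovs-vsctl -- --may-exist add-port br-int o-hm0 \
        -- set Interface o-hm0 type=internal \
        -- set Interface o-hm0 external-ids:iface-status=active \
        -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC \
        -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID

    # 3. Give the interface the port's MAC and request an address from
    #    the subnet's DHCP agent.
    sudo ip link set dev o-hm0 address $MGMT_PORT_MAC
    sudo dhclient o-hm0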

Are there other ways to reach a tenant subnet from "outside" OpenStack?

I'm sorry if this is a stupid question, but the whole "virtual network" thing can be a bit confusing in the beginning.

Thanks in advance, Daniel


1 answer

answered 2016-11-17 11:51:05 -0600 by johnsom

I am slightly confused by what you are asking, so if I answer the wrong question, please ask again.

Octavia uses a private network for management traffic. This is one TCP port out to the REST API in the amphora from the three controller processes (controller worker, housekeeping, and health manager) and one UDP port back to the health manager(s). These connections can be route-accessible if needed. This network should only be used for management/control traffic, not tenant traffic. The management network is created by the operator (or devstack's plugin.sh) and specified in octavia.conf. The use of Open vSwitch you saw in our plugin.sh is only there to give the devstack host, which runs the controller processes, access to the management network; this can be implemented in other ways in your deployment. There is no requirement for Octavia to run on the network nodes.
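
For reference, the relevant octavia.conf settings look roughly like this (option names from the Newton-era config; the network ID, addresses, and ports are placeholders you would set for your own deployment):

    [controller_worker]
    # Management network the amphorae are booted with
    amp_boot_network_list = <lb-mgmt-net-uuid>

    [haproxy_amphora]
    # TCP port of the REST API served inside each amphora
    bind_port = 9443

    [health_manager]
    # Where the health manager listens for UDP heartbeats, and the
    # list of controller endpoints the amphorae send them to
    bind_ip = 192.0.2.10
    bind_port = 5555
    controller_ip_port_list = 192.0.2.10:5555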

Tenant (data-path) traffic is handled in a network namespace inside the amphora, separate from the REST API and management network described above. This isolates tenant traffic and processes from the management network and processes.
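
You can see this for yourself by SSHing into an amphora over the management network (the namespace name amphora-haproxy matches the amphora images I have seen; treat it as an assumption for your build):

    # The management/API interface lives in the default namespace
    ip addr show eth0

    # The VIP and member-facing interfaces live in their own namespace
    sudo ip netns list
    sudo ip netns exec amphora-haproxy ip addr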

When a user creates a load balancer, Octavia plugs a neutron port into the amphora for the subnet the user specified. It can also, optionally, accept a pre-created neutron port. The same happens when a user adds a member (backend server) to the load balancer, if that network has not already been plugged into the amphora inside the network namespace. If your tenant networks are physical networks, you would need to set them up as provider networks in neutron so that users can specify those networks when creating their load balancers.
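
A sketch of that flow with the Newton-era neutron CLI (the network names, VLAN mapping, and pool name are examples, not prescriptions):

    # Admin: expose a physical network as a neutron provider network
    neutron net-create db-net --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 100
    neutron subnet-create db-net 192.0.2.0/24 --name db-subnet

    # User: create a load balancer on that subnet; Octavia plugs the
    # amphora into it, and again when a member on a new subnet is added
    neutron lbaas-loadbalancer-create --name lb1 db-subnet
    neutron lbaas-member-create --subnet db-subnet \
        --address 192.0.2.50 --protocol-port 80 mypool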

I hope that helps clarify how octavia works and helps you come up with a solution for your needs, Michael

