# Neutron networking for compute node(s).

Hello,

Let's assume that I have the following setup for my test OpenStack:

• 10.0.0.0/24 - Public Network (used for floating IPs) Gateway is 10.0.0.1
• 192.168.0.0/24 - Management Network (and everything else really) Gateway is 192.168.0.1

Now I currently have two nodes as such:

Controller Node

• eth0: 192.168.0.40
• eth1 (br-ex): 10.0.0.40

Compute Node:

• eth0: 192.168.0.41
• eth1: ??

My question to you is, do I need to configure eth1 on the Compute Node as a bridge similar to br-ex on the Controller Node? Do I even need the Compute Node to be connected to the Public Network or can it pass the networking traffic over the Management Network? What is the best practice here as I cannot seem to wrap my head around this? The goal here is to begin to segregate the network traffic for performance and security reasons. Any advice?



Question

My question to you is, do I need to configure eth1 on the Compute Node as a bridge similar to br-ex on the Controller Node? Do I even need the Compute Node to be connected to the Public Network or can it pass the networking traffic over the Management Network?


In general the answer is NO, unless you deploy DVR with bridged networking.

However, your design is missing a third network connecting the Compute and Controller nodes. It might be a VXLAN or GRE tunnel network, or a VLAN-tagged VM/data network. This third network carries north-south and east-west traffic via the Controller/Network node, unless DVR has been set up.


The MGMT network could be used to carry the VXLAN tunnels, but that may cause network congestion and is not recommended for production deployments.
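As a sketch, a dedicated tunnel network could be wired up like this in the ML2/OVS agent configuration. The 12.0.0.0/24 subnet and the addresses are assumptions for illustration, not part of the original setup:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (both nodes)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 1:1000

# OVS agent section (in ml2_conf.ini or openvswitch_agent.ini,
# depending on the release)
[ovs]
# local_ip must be this node's address on the dedicated tunnel network,
# e.g. 12.0.0.137 on the controller and 12.0.0.147 on the compute node
local_ip = 12.0.0.137

[agent]
tunnel_types = vxlan
```

The key point is that `local_ip` should sit on a separate NIC/subnet from the management network, so tunnel traffic does not compete with management traffic.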


Thank you for the diagram. Puts things into perspective for me. So does the eth1 interface in this diagram have an IP address? Or do you just let openvswitch take care of it through the neutron config?

( 2016-04-27 09:07:14 -0500 )

The eth1 interfaces here should have IPs acting as VTEPs, say 12.0.0.137 and 12.0.0.147.

( 2016-04-27 10:06:04 -0500 )
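A sketch of how such a VTEP address might be assigned persistently on a CentOS-style system (the file path and the 12.0.0.0/24 prefix are assumptions; use the second address on the compute node):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1 on the controller
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=12.0.0.137
PREFIX=24
```

Whatever address ends up on eth1 is what the OVS agent's `local_ip` setting must match on that node.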

Makes sense. Now configuring that will be the fun part!

( 2016-04-27 15:27:49 -0500 )

Maybe I am wrong, but how are you going to use 10.0.x.x IPs for the floating network? They are private-range IPs and hence not usable from the outside.


It is a private cloud so it is fine for my usage.

( 2016-04-27 09:07:50 -0500 )

do I need to configure eth1 on the Compute Node as a bridge similar to br-ex on the Controller Node?

Normally you don't have to. You would want to do it if that is your only way to access the outside world (for yum, ssh, etc.), but you could instead use your 192.168.0.1 gateway to NAT to/from the outside.
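If the management gateway is to provide that NAT, the rules on the 192.168.0.1 box might look like this. The uplink interface name (`eth0` here) is an assumption:

```
# On the 192.168.0.1 gateway: enable forwarding and masquerade traffic
# from the management network out through the uplink interface
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
```

With this in place, the compute node can reach package mirrors over the management network without touching the public network at all.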

Do I even need the Compute Node to be connected to the Public Network or can it pass the networking traffic over the Management Network?

Answered above. Ideally, you would want your management network to have direct access to the outside world to avoid the NAT configuration. Maybe that's just me...

What is the best practice here as I cannot seem to wrap my head around this?

I don't know what the best practices are, but this should be a good start for you: http://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html

more