What other steps should I take with networking after adding a compute node?

asked 2013-08-02 13:19:07 -0600 by badkarma

updated 2013-08-02 18:37:57 -0600 by smaffulli


I've been unable to find a good tutorial on basic setups with quantum networking; everything seems to involve GRE tunnels and VLANs. I'm just looking for a simple "cookbook"-style tutorial covering different setups, and it seems nothing like that exists.

I started out with the all-in-one solution (RDO on CentOS 6.4 with a modified kernel for namespaces, etc.) and it worked great (using quantum). Another server became available as a compute node, and I followed the "adding a compute node" instructions from the RDO site. Everything seems to have installed correctly on the new node, as I'm able to deploy instances to both it and the original compute node.

Originally I had eth0 and eth1 cabled to the same network, with br-ex added to eth1, which had no IP address. Per the instructions, I had to modify the packstack answer file, removing "lo" from a couple of private-interface settings and replacing it with the internal-network interface eth2.

So at this point I have 3 NICs on the cloud/network controller, which is also the original compute node (I still want to run compute on it):

eth0 - External network

eth1 - cabled to the external network but no IP

eth2 - Internal network

New compute node:

eth0 - External network (I know this is not needed for a compute node, but it's the only way I can SSH to this server remotely)

eth2 - Internal network

I would like all networking from the new compute node to go through the network node if possible.

With one internal network, router and external network, my bridges look like this:

ovs-vsctl show

Bridge br-int
    Port "tap0e3dbbc6-96"
        tag: 2
        Interface "tap0e3dbbc6-96"
    Port "tap60caeb19-66"
        tag: 2
        Interface "tap60caeb19-66"
    Port br-int
        Interface br-int
            type: internal
Bridge br-ex
    Port "tap957318d9-c8"
        Interface "tap957318d9-c8"
    Port br-ex
        Interface br-ex
            type: internal
ovs_version: "1.10.0"

On the new compute node:


Bridge br-int
    Port "qvo0bd3407c-c7"
        tag: 3
        Interface "qvo0bd3407c-c7"
    Port br-int
        Interface br-int
            type: internal
ovs_version: "1.10.0"

My VMs are no longer getting IP addresses on either compute node. I made sure I put eth2 on both servers in promiscuous mode. Does this bridge setup look correct? It doesn't look like any of the bridges are attached to physical NICs. Bear in mind that I'm a Linux networking newbie. :)
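For context, attaching a physical NIC to an OVS bridge normally looks like the commands below. This is a sketch only, run on the node that owns the bridge; whether eth1 is the right interface depends on your cabling:

```shell
# On the network node: plug the external NIC into br-ex.
# (Interface name eth1 is an assumption based on the layout above.)
ovs-vsctl add-port br-ex eth1

# Verify: eth1 should now be listed as a Port under "Bridge br-ex".
ovs-vsctl show
```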

Thanks for any help.



Hello!!! I'm having what sounds like a fairly similar problem. Have you found a resolution yet? If so, I'd like to pick your brain and if not, maybe we could chat!

nko321 ( 2013-09-04 16:45:40 -0600 )

1 answer


answered 2013-12-09 11:05:48 -0600 by dradez

It sounds like what you're facing is that your plugin's tenant network type needs to be updated.

If you did an all-in-one installation, then your networking plugin's tenant_network_type was set to local. If you want to break out your compute node and network node, you can't use local. Local assumes that everything is just that: local.

You referenced GRE and VLANs. You have to use one of these (there are a couple of other options too); I've found GRE to be the friendliest to set up. GRE or VLANs establish the transport between your different nodes. Without one of them there is no communication between your nodes.
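As a rough sketch, the GRE-related settings for the Grizzly-era quantum OVS plugin look like the fragment below. The file path, section, and the local_ip value are assumptions; local_ip should be the internal-network (eth2) address of whichever node the file lives on, and agents must be restarted afterwards:

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (path may differ)
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
# This node's IP on the internal network (eth2) -- placeholder value
local_ip = 192.168.2.10
```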

Check out the RDO web site docs at http://openstack.redhat.com/Docs ; there are a couple of docs on GRE and VLANs that can help you move forward.
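Once GRE tunneling is configured and the OVS agent restarted on each node, a quick sanity check is that every node grows a br-tun bridge with GRE ports pointing at the other nodes. The port name and remote IP below are illustrative, not your actual values:

```shell
# Run on each node after restarting the quantum OVS agent.
ovs-vsctl show
# Expect output to now include something like:
#   Bridge br-tun
#       Port "gre-1"
#           Interface "gre-1"
#               type: gre
#               options: {in_key=flow, out_key=flow, remote_ip="192.168.2.11"}
```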






Seen: 390 times

Last updated: Dec 09 '13