RDO - neutron - multinode - single NIC?

asked 2013-12-11 15:57:00 -0500 by anonymous user

Hi all. What I'm trying to do seems to be a tough call, though it really shouldn't be. I have found very little information on it and have spent days struggling; it is fair to say that I'm getting slightly frustrated! :(

What I want to do is provision a multi-node cluster, utilising neutron and provider networks. Our server nodes have one interface (bond0) - it is not possible to change this configuration, so I need to find a way to coerce OpenStack into playing nicely. I have tried configuring multiple Linux bridges and Open vSwitch bridges, but whatever I do, I seem to be missing something.

Now, I have been able to achieve exactly what I want using Ubuntu and Chef (following this guide: http://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks.html ). HOWEVER, our infrastructure is CentOS-based and we already use Puppet, so deploying RDO with packstack is the logical choice for our OpenStack.

Can anyone assist me in getting neutron up and running with a single NIC in a multi-node environment with packstack/RDO?

Sample answer files would be most appreciated!!!

Cheers,

Barry O'donnell


5 answers

answered 2014-02-11 08:15:57 -0500 by dbaxps (updated 2014-03-18 10:39:47 -0500)

Please view http://www.gluster.org/author/boris-derzhavets/ . The material in that blog is based on Kashyap's original posts. My earliest write-up, http://lxer.com/module/newswire/view/197613/index.html , appears to be correct: a two-real-node Neutron GRE+OVS+Gluster Havana F20 cluster has been working stably for about two months.

answered 2013-12-12 07:17:14 -0500 by kashyapc (updated 2013-12-12 07:19:00 -0500)

Just to note, I have a two-node OpenStack RDO Havana setup configured manually on two Fedora 20 VMs (running nested KVM on Intel), and it's running pretty solidly:

  • Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling).
  • Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

Networking details:
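The exact values are deployment-specific, but a minimal sketch of the per-node OVS plugin settings for GRE tunneling looks like this (Havana file layout; the addresses are illustrative, not the actual values from this setup):

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
# (same file on both nodes; only local_ip differs per host)
[ovs]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.122.163

The GRE mesh forms between the local_ip endpoints, which is what makes this workable over a single NIC.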


Comments

Hello kashyapc, are you saying your two nodes have one NIC each, so the management/tenant/external networks all sit on that NIC? Is that correct?

yixuan178 ( 2015-04-02 10:21:01 -0500 )
answered 2014-06-11 09:55:42 -0500

Somebody recently wrote an article about this issue. It might be worth checking out: http://fosskb.wordpress.com/2014/06/1...

answered 2013-12-11 19:57:53 -0500 by Wang Jun (updated 2013-12-11 20:06:15 -0500)

You can read this article first; it will help you understand Quantum/Neutron networking: http://openstack.redhat.com/Networking_in_too_much_detail

Then go to this article: http://openstack.redhat.com/Using_GRE_Tenant_Networks
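Since the question mentions packstack, the GRE setup those articles walk through maps onto a handful of answer-file keys. A rough sketch, assuming Havana-era packstack (key names change between releases, so verify against a freshly generated answer file):

# Fragment of a packstack answer file; generate a full one with
# `packstack --gen-answer-file=answers.txt` and edit these keys
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000
# GRE tunnels ride over the single bonded interface
CONFIG_NEUTRON_OVS_TUNNEL_IF=bond0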

answered 2013-12-11 21:29:29 -0500 by SamYaple (updated 2013-12-12 08:17:12 -0500)

We are currently set up with bonded NICs. I layered VLANs on top of bond0 and used the VLAN interfaces as the ports for the bridges. Following the examples in the article, a config that looks something like this should work.

Red Hat and derivatives should work with something like this:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
# Match the bonding mode used in the Debian config below
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=1"

# /etc/sysconfig/network-scripts/ifcfg-bond0.100
# VLAN 100 on top of the bond, enslaved to the bridge below
DEVICE=bond0.100
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
VLAN=yes
NM_CONTROLLED=no
BRIDGE=br-eth0

# /etc/sysconfig/network-scripts/ifcfg-br-eth0
# Please change the bridge name to something else
DEVICE=br-eth0
TYPE=Bridge
IPADDR=10.240.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
DELAY=0

I just named the bridge br-eth0 because that is what the guide uses; change it to something more meaningful.

This would be for Debian (it should also work for Ubuntu):

auto eth0
iface eth0 inet manual
  bond-master bond0

auto eth1
iface eth1 inet manual
  bond-master bond0

# Remember this isn't the only way to bond these nics.
# Make sure to use the appropriate bond-mode
auto bond0
iface bond0 inet manual
  bond-mode 802.3ad
  bond-miimon 100
  bond-lacp-rate 1
  bond-slaves eth0 eth1
  up ip link set bond0 promisc on

# This is vlan 100. This requires additional configuration
# on the switch itself. Also install the `vlan` package if you
# haven't done so already.
auto bond0.100
iface bond0.100 inet manual
  up ip link set bond0.100 up
  up ip link set br-eth0 up
  vlan-raw-device bond0

# I have kept the name as br-eth0 because that is what the
# interface is named in the article. I suggest creating it as
# something less confusing like br-data.
auto br-eth0
iface br-eth0 inet static
  address 10.240.0.10
  netmask 255.255.255.0
  gateway 10.240.0.1
  dns-nameservers 8.8.8.8
  bridge_ports bond0.100
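
You will also need the neutron side that ties a provider network name to these interfaces. Since the files above create Linux bridges, note that with the Linux bridge plugin you can skip pre-creating per-VLAN bridges entirely and let the agent build them. A sketch, assuming Havana's linuxbridge plugin, with physnet1 and the VLAN range as illustrative values:

# /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:100:200

[linux_bridge]
# Map the provider network name onto the bond itself; the agent then
# creates bond0.<vlan> sub-interfaces and per-network bridges for you
physical_interface_mappings = physnet1:bond0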

Comments

This is a configuration file used in Ubuntu; it will not work in RHEL/Fedora.

Wang Jun ( 2013-12-11 23:08:16 -0500 )

Updated for RHEL instead of Debian.

SamYaple ( 2013-12-12 08:17:50 -0500 )
