
Storage network interfaces in Neutron

asked 2014-06-12 10:49:57 -0500 by marcantonio

updated 2014-06-13 17:36:55 -0500

I have a lab setup with Neutron. I'm trying to introduce an NFS mount point to all of my VMs for application data. This NFS mount lives on a storage network and is managed outside of OpenStack. How can I expose this network to my VMs using a separate physical NIC on each compute node?

Put another way: I'd like to set up a storage network in Neutron and add an interface to each VM that will only use the physical storage network interfaces on the compute nodes. Does this mean adding a new bridge to each compute node for the physical interfaces?

Hopefully that makes sense.



Would a GlusterFS 3.4.5 cluster (on the controller and compute nodes) as a Cinder backend be sufficient for your VMs? I have a ready solution.

dbaxps ( 2014-06-12 10:57:32 -0500 )

The storage network already exists. I'm just trying to get my VMs on it so that they can mount an NFS share.

marcantonio ( 2014-06-12 11:01:58 -0500 )

Would I have to create a new bridge on each compute node to make this work? The network node too?

marcantonio ( 2014-06-12 14:11:07 -0500 )

1 answer


answered 2014-06-12 14:07:20 -0500 by darragh-oreilly

updated 2014-06-13 16:26:03 -0500 by smaffulli

You could use the provider network extension to map a neutron network to the storage VLAN. You'll need to create a new bridge on each compute node to make this work and on the network node too.
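For example, the mapped network could be created with the neutron CLI roughly like this (a sketch only; the network name storage-net, the physical-network label storage, and the 10.20.0.0/24 subnet are assumptions for illustration):

```shell
# Create a flat provider network mapped to the physical network label "storage".
# The label must match the bridge_mappings configured on the OVS agents.
neutron net-create storage-net --shared \
    --provider:network_type flat \
    --provider:physical_network storage

# Give it a subnet with no gateway, since the VMs reach the NFS server at layer 2.
neutron subnet-create storage-net 10.20.0.0/24 --name storage-subnet \
    --no-gateway --allocation-pool start=10.20.0.100,end=10.20.0.200
```

VMs booted with a port on storage-net would then get a second NIC on the physical storage VLAN.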




I don't understand why I'd have to go through the network node. The compute nodes are connected directly to the storage network. There's no routing involved. Can't I just bridge the VMs with this interface?

marcantonio ( 2014-06-13 17:33:40 -0500 )

You are right: there is no need to connect this neutron "storage network" to a neutron router, because the VMs can access the physical storage VLAN at layer 2 via the bridge. But the bridge on the network node is still needed for the neutron-dhcp-agent.

darragh-oreilly ( 2014-06-14 02:53:42 -0500 )

Ah! That makes sense. Would I set this up as a flat network? I currently have an external network and a few tenant networks set up, per the Ubuntu setup instructions, using GRE tunnels. Does that matter? Does the external network count as a provider network? Thanks!

marcantonio ( 2014-06-14 08:19:57 -0500 )

I think a flat network should be fine for you for now. If you wanted to do many provider mappings, then VLANs would be the way to go, given the limited number of physical NICs. The GRE-backed networks don't matter - you can use both at the same time. I guess you are using the original way to uplink the router to the external network - the qg-xxxxxxx-xx interface in br-ex? That is not a provider mapping. The provider extension can be used to do that as well, but I don't think it is documented yet.

darragh-oreilly ( 2014-06-15 05:51:03 -0500 )

Ok, that did it. On each of the nodes running neutron-plugin-openvswitch-agent:

  1. I added a bridge for the storage network.
  2. In ml2_conf.ini I added flat_networks = storage to the [ml2_type_flat] section and bridge_mappings = storage:br-storage to the [ovs] section.

marcantonio ( 2014-06-16 08:23:07 -0500 )
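In concrete terms, the change on each node running the OVS agent would look roughly like this (a sketch; eth2 is a placeholder for whatever physical NIC is cabled to the storage network):

```shell
# Create the storage bridge and attach the physical NIC to it
# (eth2 is a hypothetical interface name):
ovs-vsctl add-br br-storage
ovs-vsctl add-port br-storage eth2
```

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = storage

[ovs]
bridge_mappings = storage:br-storage
```

After restarting neutron-plugin-openvswitch-agent on each node, a flat network created against the "storage" label will be wired through br-storage.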
