routing on different interfaces on the network node
Hello,
We have successfully built a 10-node Grizzly OpenStack cloud (4 compute and 6 storage nodes providing the different services). Our VMs can reach the outside through the network node running Quantum with the Open vSwitch plugin.
Here is a list of the services running on the network node:
- quantum-server
- quantum-dhcp-agent
- quantum-l3-agent
- quantum-metadata-agent
- quantum-plugin-openvswitch-agent
- openvswitch-switch
On the compute nodes:
- quantum-plugin-openvswitch-agent
- openvswitch-switch
Each node has:
- a 1 Gb interface for server management (say eth0 on 10.0.0.0/24)
- an InfiniBand adapter (IPoIB) for the OpenStack traffic; we don't use the Mellanox plugin (yet?) (say ib0 on 172.16.0.0/24)
- another InfiniBand adapter for storage; we use Ceph as the backend storage for Glance and Cinder (say ib1 on 172.16.1.0/24)
One dedicated 1 Gb link on the network node is used for the VMs (eth1 in the bridge br-ex, 10.10.0.0/24).
We can assign floating IPs to VMs, ssh to them, etc. The VMs can reach the outside through the dedicated link on the network node.
However, we need to reach the storage network (172.16.1.0/24 on ib1) from the VMs.
If I try to ping the storage network, I see all traffic going through eth1 and hitting the default gateway.
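This can be confirmed by asking the kernel directly which route it would pick (the addresses below are placeholders for a VM's floating IP and a storage host, not values from our setup):

```shell
# Query the routing decision for a packet arriving from the VM network;
# "iif br-ex" makes the source-based policy rules apply to the lookup.
# 10.10.0.5 = placeholder VM floating IP, 172.16.1.10 = placeholder storage host.
ip route get 172.16.1.10 from 10.10.0.5 iif br-ex
```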
We have set up some rules:
ip rule
0:      from all lookup local
1000:   from 10.10.0.0/24 lookup vm
32766:  from all lookup main
32767:  from all lookup default
ip route list table vm
default via 10.10.0.1 dev br-ex
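The route for the storage network would look something like this (a sketch; it assumes ib1 on the network node is directly attached to 172.16.1.0/24, and requires root):

```shell
# Send storage-network traffic from the VM range out of ib1 instead of
# letting it fall through to the table's default route via 10.10.0.1.
ip route add 172.16.1.0/24 dev ib1 table vm

# On older kernels, previous lookups may stick around until flushed.
ip route flush cache
```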
If we add a route for 172.16.1.0/24 to that table (or to the default one), the packets are still sent to the default gateway. We also tried adding routes on the virtual router attached to the public network (10.10.0.0/24) and on the subnet, without success.
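For completeness, the subnet-level attempt is done through the subnet's host routes, which are pushed to VMs via DHCP; the virtual router's own table can be inspected in its namespace. Treat this as a sketch: the IDs are placeholders, the next hop shown is only illustrative (it must be reachable from the VMs' subnet), and the exact --host-routes syntax should be checked against your Quantum CLI help.

```shell
# Push a host route for the storage network to the VMs via DHCP
# (subnet ID and next hop are placeholders).
quantum subnet-update <subnet-id> \
    --host-routes type=dict list=true \
    destination=172.16.1.0/24,nexthop=10.10.0.1

# With namespaces enabled, the virtual router keeps its own routing table:
ip netns exec qrouter-<router-id> ip route
```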
So my question is: how can we reach other networks attached to other interfaces of the network node? Is it just a matter of putting the right routes in the right place, or should we build a new public network (with a new L3 agent, which is less desirable)? If the above is impossible: since the compute nodes are also on the storage network, can we simply add a virtual interface to the VMs on those nodes (as we would on a standard hypervisor)? Could it be specified during VM creation?
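Regarding the last point, attaching a VM to an extra network at creation time would look roughly like this (a sketch; the IDs are placeholders, and it assumes a Quantum network mapped to the storage segment already exists):

```shell
# Boot a VM with two interfaces: one on the tenant network, one on a
# network mapped to the storage segment (all IDs are placeholders).
nova boot --image <image-id> --flavor <flavor-id> \
    --nic net-id=<tenant-net-id> \
    --nic net-id=<storage-net-id> \
    my-vm
```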
You may ask why we don't just create volumes and attach them to the VMs: it is because we need shared storage that is accessible read-write from multiple VMs for distributed computing.
Thanks for your answers; feel free to ask for more information.