# Revision history

### Neutron, GRE tunnels, non-tenant IP address routing from VMs

Update (response to first answer): I gave this a quick try:

quantum net-create myNet --provider:network_type gre --provider:segmentation_id 2
quantum subnet-create b4673aff-999d-4e44-b679-ae112e7d0a46 10.0.0.0/20

I still am unable to escape the tenant VM. The route table in the VM was changed, with a default route to gateway 10.0.0.1 added. Interestingly, a Wireshark trace shows the VM ARPing unsuccessfully for this gateway at 10.0.0.1. Perhaps 10.0.0.1 did not get created for some reason (I can't ping 10.0.0.1 from a namespace, which confirms it was not created). Anyway, do you see anything wrong with the way I am creating the network and subnet? It is pretty much straight off of the documentation you pointed at.

After deleting the default route table entry, and adding a default route in the VM:

$ sudo route add default eth0

The pings worked. So this issue, it seems, is still about getting the route table properly set up.

Original:

I have a question related to routing from tenant VMs that I've not been able to answer for myself.

Assume a simple 2-node (Compute/Controller) Grizzly cluster that is configured with one private network with a subnet of 10.0.0.0/20 and is running neutron. Further, assume that GRE tunnels are used to provide isolation. OVS is the neutron plugin in use.

Let's say I add a third node to the cluster. It is not an OpenStack node (i.e., not a compute, network, or controller node), but a computer running Linux. Further, assume it has the ability to create and terminate GRE tunnels to the other cluster members. Finally, assume that it acts as a gateway to a server with a non-tenant IP address (e.g., 192.168.4.15), that it, the compute node, and the controller node are all in the 192.168.3.0/24 subnet, and that on the compute and controller nodes we've established GRE tunnels in OVS towards the gateway computer:

$ sudo ovs-vsctl add-port br-tun gre-4 -- set Interface gre-4 type=gre options:remote_ip=192.168.3.15 options:in_key=flow options:out_key=flow


Also, tunnels exist from the gateway computer to the controller and compute nodes; thus, with regard to tunnels, we are full mesh. You can also assume that the device at 192.168.4.15 sits on a separate NIC on the gateway server, and that mechanisms are in place for taking packets from 192.168.4.15, encapsulating them, and pushing them out to the rest of the cluster via the GRE tunnels, as well as for taking inbound traffic from the cluster via the GRE tunnels, de-encapsulating it, and routing it to 192.168.4.15.
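For concreteness, the gateway-side half of that full mesh might look something like the following. This is only a sketch mirroring the command I ran on the OpenStack nodes; the bridge name br-tun and the port names gre-ctrl/gre-comp on the gateway box are my own assumptions, not something devstack creates.

```shell
# On the gateway host (192.168.3.15): one flow-keyed GRE port per OpenStack
# node, mirroring the gre-4 port created on the controller and compute nodes.
sudo ovs-vsctl add-port br-tun gre-ctrl -- set Interface gre-ctrl type=gre \
    options:remote_ip=192.168.3.3 options:in_key=flow options:out_key=flow
sudo ovs-vsctl add-port br-tun gre-comp -- set Interface gre-comp type=gre \
    options:remote_ip=192.168.3.4 options:in_key=flow options:out_key=flow
```

With in_key=flow/out_key=flow on both ends, the tunnel ID is chosen per-flow, matching how the OVS agent segregates tenant traffic.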

I've been using a combination of devstack and the command line to realize the cluster. Here is the controller localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service quantum
HOST_IP=192.168.3.3
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
MULTI_HOST=True
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken

SCHEDULER=nova.scheduler.simple.SimpleScheduler
Q_PLUGIN=openvswitch

ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999

OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)
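As a quick aside, the FIXED_RANGE/FIXED_NETWORK_SIZE pairing above can be sanity-checked with plain shell arithmetic (nothing devstack-specific):

```shell
# FIXED_RANGE is 10.0.0.0/20; a /20 holds 2^(32-20) addresses, which should
# match FIXED_NETWORK_SIZE=4096. Its netmask is 255.255.240.0.
prefix=20
size=$(( 1 << (32 - prefix) ))
third_octet=$(( 256 - (1 << (24 - prefix)) ))
echo "addresses=$size netmask=255.255.$third_octet.0"
# prints: addresses=4096 netmask=255.255.240.0
```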


On the compute node, it is nearly the same (there are likely things in here not strictly needed by a compute node; I have not had time to try removing them, but have instead tried to keep the two localrc files more or less in sync):

HOST_IP=192.168.3.4
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken

ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=192.168.3.3
MYSQL_HOST=$SERVICE_HOST RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST MATCHMAKER_REDIS_HOST=$SERVICE_HOST
Q_PLUGIN=openvswitch

ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999

OVS_PHYSICAL_BRIDGE=br-eth1

Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)


I create VMs using the images and network configured by devstack scripts. These VMs get allocated 10.0.0.x addresses, and from within a qdhcp namespace on the controller I can ping VMs and see the GRE encapsulation in wireshark, so all is good.
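Before describing the ping tests, here is a throwaway shell check (nothing OpenStack-specific) that the gateway-attached device's address really is non-tenant, i.e. outside both the tenant 10.0.0.0/20 and the management 192.168.3.0/24:

```shell
# Convert a dotted quad to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
# in_subnet <ip> <network> <prefixlen>: succeed if <ip> is inside the subnet.
in_subnet() {
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}
in_subnet 192.168.4.15 10.0.0.0 20    || echo "192.168.4.15 is outside 10.0.0.0/20"
in_subnet 192.168.4.15 192.168.3.0 24 || echo "192.168.4.15 is outside 192.168.3.0/24"
```

Both checks print the "outside" message, which is exactly why a VM whose only route covers 10.0.0.0/20 can never address a reply to the device.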

From 192.168.4.15 (the device plugged into my gateway), I can ping a VM (say 10.0.0.5) running on, say, the compute node (i.e., send ARP/ping packets to it via GRE), but only if I first ssh into the VM and add a route to the device on the other end of the gateway. The default route table in the VM is:

root@controller:~/devstack# ssh cirros@10.0.0.5
cirros@10.0.0.5's password:
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        *               255.255.240.0   U     0      0        0 eth0

Meaning only packets addressed to the tenant subnet will escape the VM. To reach 192.168.4.15, I must do the following:

$ sudo route add 192.168.4.15 eth0


or, in a more brute-force way:

$ sudo route add default eth0

Once I do that, pings will complete. As an aside, the ARP portion of the ping flow does not fail, because OVS appears to respond from its ARP cache. Nor does the ingress ping request fail to arrive; it is the egress ping reply that requires the added route.

This makes sense: I should only be able to reach tenant addresses (10.0.0.x) from a VM in a private network, and I can't issue a ping reply without that route.

On the other hand, I can't go adding routes manually to tenant VMs like this for my application.

I spent some time hacking on the devstack lib/quantum script. I tried the add_route argument to neutron subnet-create, figuring that would lead dhcp to add routes, but I don't really have a next hop to specify. (I also tried configuring a gateway in the subnet (10.0.0.1), but understandably, packets addressed to 192.168.4.15 won't escape via that gateway, either.) In fact, I saw no dnsmasq-related option or configuration that would lead to an appropriate route table change.

Floating IPs are not what I want; I'm not trying to give 10.0.0.0 addresses counterpart addresses in some other subnet.

So, the big question is: how do I get routes into a tenant VM other than to a gateway or VM belonging to that VM's subnet? I recognize that the GRE tunnels are there for tenant traffic, but in a gateway scenario like the one I am playing with, not all tenants will necessarily be in that subnet (and I may want to share that printer, database server, etc. with multiple tenants).

Alternately, is there some flavor of neutron networking that would lead to an appropriate route in the tenant VMs, and if so, what is it?

Thanks,

syd

### Neutron, GRE tunnels, non-tenant IP address routing from VMs

I gave this a quick try:

quantum net-create myNet --provider:network_type gre --provider:segmentation_id 2
quantum subnet-create b4673aff-999d-4e44-b679-ae112e7d0a46 10.0.0.0/20


I still am unable to escape the tenant VM. The route table in the VM was changed with a default route to gateway 10.0.0.1 added. Interestingly, a wireshark trace shows the VM arping unsuccessfully for this gateway at 10.0.0.1. Perhaps 10.0.0.1 did not get created for some reason (I can't ping 10.0.0.1 from a namespace, which confirms it was not created). Anyway,do you see anything wrong with the way I am creating the network and subnet? It is pretty much straight off of the documentation you pointed at.

After deleting the default route table entry, and adding a default route in the VM:

$sudo route add default eth0  The pings worked. So this issue is, it seems, still about getting the route table properly setup. Original: I have a question related to routing from tenant VMs that I've not been able to answer for myself. Assume a simple 2-node (Compute/Controller) Grizzly cluster that is configured with one private network that has a subnet of 10.0.0.0/20 and is running neutron. Further, assume that GRE tunnels are used to provide isolation. OVS is the neutron plugin in use. Let's say I add a third node to the cluster. It is not an openstack node (e.g., not a compute node, network node, or controller node), but a computer running Linux. Further, assume it has the ability to create and terminate GRE tunnels to the other cluster members. Finally, assume that it acts as a gateway to a server with a non-tenant IP address (e.g., 192.168.4.15), that it, the compute and controller are in the 192.168.3.0/24 subnet, and that on the compute and controller nodes, we've established GRE tunnels towards the gateway computer: $ sudo ovs-vsctl add-port br-tun gre-4 -- set Interface gre-4 type=gre options:remote_ip=192.168.3.15 options:in_key=flow options:out_key=flow


Also, tunnels exist from the gateway computer to the controller and compute nodes. Thus, with regard to tunnels, we are full mesh. You can also assume that the device at 192.168.4.15 is sitting on a separate nic on the gateway server, and that mechanisms for taking packets from 192.168.4.15, and encapsulating them and pushing them out to the rest of the cluster via the GRE tunnels, as well as taking traffic inbound from the cluster via the GRE tunnels, de-encapsulating it, and routing it to 192.168.4.15 are in place.

I've been using a combination of devstack and command line to realize the cluster. Here is the controller localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service quantum
HOST_IP=192.168.3.3
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
MULTI_HOST=True
LOGFILE=/opt/stack/logs/stack.sh.log
SERVICE_TOKEN=tokentoken

SCHEDULER=nova.scheduler.simple.SimpleScheduler
Q_PLUGIN=openvswitch

ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999

OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)


On the compute node, it is nearly the same (likely there are things in here not strictly needed by a compute node, have not had the time to try removing things but instead have tried to keep the two localrc files in more or less sync):

HOST_IP=192.168.3.4
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096

SERVICE_TOKEN=tokentoken

ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=192.168.3.3
MYSQL_HOST=$SERVICE_HOST RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST MATCHMAKER_REDIS_HOST=$SERVICE_HOST
Q_PLUGIN=openvswitch

ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999

OVS_PHYSICAL_BRIDGE=br-eth1

Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)


I create VMs using the images and network configured by devstack scripts. These VMs get allocated 10.0.0.x addresses, and from within a qdhcp namespace on the controller I can ping VMs and see the GRE encapsulation in wireshark, so all is good.

From 192.168.4.15 (the device plugged into my gateway), I can ping (i.e., send ARP/ping packets via GRE to a VM, say 10.0.0.5) running on, say, the compute node, but only if I first ssh into the VM and add a route to the device on the other end of the gateway. The default route table in the VM is:

root@controller:~/devstack# ssh cirros@10.0.0.5
$route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.240.0 U 0 0 0 eth0  Meaning only packets addressed to tenant subnet will escape the VM. To reach 192.168.4.15, I must do the folllowing: $ sudo route add 192.168.4.15 eth0


or, in a more brute-force way:

$ sudo route add default eth0

Once I do that, pings complete. As an aside, the ARP portion of the ping flow does not fail, because OVS appears to respond from its ARP cache. Nor does the ingress ping request fail to arrive; it is the egress ping reply that requires the added route.

This makes sense. I should only be able to reach tenant addresses (10.0.0.x) from a VM in a private network, and I can't issue a ping reply without that route.

On the other hand, I can't go adding routes manually to tenant VMs like this for my application.

I spent some time hacking on the devstack lib/quantum script. I tried the add_route argument to neutron subnet-create, figuring that would lead DHCP to push routes, but I don't really have a next hop to specify. (I also tried configuring a gateway in the subnet (10.0.0.1), but understandably, packets addressed to 192.168.4.15 won't escape through that gateway, either.) In fact, I saw no dnsmasq-related option or configuration that would lead to an appropriate route-table change.

Floating IPs are not what I want; I'm not trying to give 10.0.0.x addresses counterpart addresses in some other subnet.

So, the big question is: how do I get routes into a tenant VM other than to a gateway or VM belonging to that VM's subnet? I recognize that the GRE tunnels are there for tenant traffic, but in a gateway scenario like the one I am playing with, not all tenants will necessarily be in that subnet (and I may want to share that printer, database server, etc. with multiple tenants).

Alternately, is there some flavor of neutron networking that would lead to an appropriate route in the tenant VMs, and if so, what is it?

Thanks,

syd

### Neutron, GRE tunnels, non-tenant IP address routing from VMs

Update (response to first answer): I gave this a quick try:

quantum net-create myNet --provider:network_type gre --provider:segmentation_id 2
quantum subnet-create b4673aff-999d-4e44-b679-ae112e7d0a46 10.0.0.0/20

I still am unable to escape the tenant VM. The route table in the VM was changed, with a default route to gateway 10.0.0.1 added. Interestingly, a wireshark trace shows the VM ARPing unsuccessfully for this gateway at 10.0.0.1. Perhaps 10.0.0.1 did not get created for some reason (I can't ping 10.0.0.1 from a namespace, which confirms it was not created). Anyway, do you see anything wrong with the way I am creating the network and subnet? It is pretty much straight off of the documentation you pointed at.

After deleting the default route-table entry and adding a default route in the VM:

$ sudo route add default eth0

The pings worked. So this issue is, it seems, still about getting the route table properly set up.

Original:

I have a question related to routing from tenant VMs that I've not been able to answer for myself.

Assume a simple 2-node (Compute/Controller) Grizzly cluster that is configured with one private network that has a subnet of 10.0.0.0/20 and is running neutron. Further, assume that GRE tunnels are used to provide isolation. OVS is the neutron plugin in use.

Let's say I add a third node to the cluster. It is not an openstack node (e.g., not a compute node, network node, or controller node), but a computer running Linux. Further, assume it has the ability to create and terminate GRE tunnels to the other cluster members. Finally, assume that it acts as a gateway to a server with a non-tenant IP address (e.g., 192.168.4.15), that is, the compute and controller are in the 192.168.3.0/24 subnet, and that on the compute and controller nodes, we've established GRE tunnels towards the gateway computer:

$ sudo ovs-vsctl add-port br-tun gre-4 -- set Interface gre-4 type=gre options:remote_ip=192.168.3.15 options:in_key=flow options:out_key=flow

Also, tunnels exist from the gateway computer to the controller and compute nodes. Thus, with regard to tunnels, we are full mesh. You can also assume that the device at 192.168.4.15 is sitting on a separate nic on the gateway server, and that mechanisms for taking packets from 192.168.4.15, and encapsulating them and pushing them out to the rest of the cluster via the GRE tunnels, as well as taking traffic inbound from the cluster via the GRE tunnels, de-encapsulating it, and routing it to 192.168.4.15 are in place.

I've been using a combination of devstack and command line to realize the cluster. Here is the controller localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service quantum
HOST_IP=192.168.3.3
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
MULTI_HOST=True
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
SCHEDULER=nova.scheduler.simple.SimpleScheduler
Q_PLUGIN=openvswitch
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)

On the compute node, it is nearly the same (likely there are things in here not strictly needed by a compute node, have not had the time to try removing things but instead have tried to keep the two localrc files in more or less sync):

HOST_IP=192.168.3.4
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=192.168.3.3
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
Q_PLUGIN=openvswitch
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)

I create VMs using the images and network configured by devstack scripts. These VMs get allocated 10.0.0.x addresses, and from within a qdhcp namespace on the controller I can ping VMs and see the GRE encapsulation in wireshark, so all is good.

From 192.168.4.15 (the device plugged into my gateway), I can ping (i.e., send ARP/ping packets via GRE to a VM, say 10.0.0.5) running on, say, the compute node, but only if I first ssh into the VM and add a route to the device on the other end of the gateway. The default route table in the VM is:

root@controller:~/devstack# ssh cirros@10.0.0.5
cirros@10.0.0.5's password:
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        *               255.255.240.0   U     0      0        0 eth0


Meaning only packets addressed to the tenant subnet will escape the VM. To reach 192.168.4.15, I must do the following:

$ sudo route add 192.168.4.15 eth0

or, in a more brute-force way:

$ sudo route add default eth0


Once I do that, pings complete. As an aside, the ARP portion of the ping flow does not fail, because OVS appears to respond from its ARP cache. Nor does the ingress ping request fail to arrive; it is the egress ping reply that requires the added route.

This makes sense. I should only be able to reach tenant addresses (10.0.0.x) from a VM in a private network. I can't issue a PING reply without that route.

On the other hand, I can't go adding routes manually to tenant VMs like this for my application.

I spent some time hacking on the devstack lib/quantum script. I tried the add_route argument to neutron subnet-create, figuring that would lead DHCP to push routes, but I don't really have a next hop to specify. (I also tried configuring a gateway in the subnet (10.0.0.1), but understandably, packets addressed to 192.168.4.15 won't escape through that gateway, either.) In fact, I saw no dnsmasq-related option or configuration that would lead to an appropriate route-table change.

Floating IPs are not what I want; I'm not trying to give 10.0.0.x addresses counterpart addresses in some other subnet.

So, the big question is: how do I get routes into a tenant VM other than to a gateway or VM belonging to that VM's subnet? I recognize that the GRE tunnels are there for tenant traffic, but in a gateway scenario like the one I am playing with, not all tenants will necessarily be in that subnet (and I may want to share that printer, database server, etc. with multiple tenants).

Alternately, is there some flavor of neutron networking that would lead to an appropriate route in the tenant VMs, and if so, what is it?

Thanks,

syd

### Neutron, GRE tunnels, non-tenant IP address routing from VMs

I gave this a quick try:

quantum net-create myNet --provider:network_type gre --provider:segmentation_id 2
quantum subnet-create b4673aff-999d-4e44-b679-ae112e7d0a46 10.0.0.0/20


(Note, I wanted to use a segmentation_id of 1 but the command failed and reported that 1 was already in use. I don't guess it matters much.)

I still am unable to escape the tenant VM. The route table in the VM was changed, with a default route to gateway 10.0.0.1 added. Interestingly, a wireshark trace shows the VM ARPing unsuccessfully for this gateway at 10.0.0.1. Perhaps 10.0.0.1 did not get created for some reason (I can't ping 10.0.0.1 from a namespace, which confirms it was not created). Anyway, do you see anything wrong with the way I am creating the network and subnet? It is pretty much straight off of the documentation you pointed at.
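
My current guess at the ARP failure (speculation on my part): the subnet's gateway_ip is only an address reservation; nothing answers ARP for 10.0.0.1 until some port actually claims it, which normally happens when a Neutron router interface is attached to the subnet. If a router were acceptable here, the sketch would be something like the following (the router name is made up, and <SUBNET_UUID> is a placeholder for the subnet's UUID):

```shell
# Hypothetical: attach a router interface so that a real port owns 10.0.0.1
quantum router-create myRouter
quantum router-interface-add myRouter <SUBNET_UUID>
```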

After deleting the default route table entry, and adding a default route in the VM:

$ sudo route add default eth0

The pings worked. So this issue is, it seems, still about getting the route table properly set up.

Original:

I have a question related to routing from tenant VMs that I've not been able to answer for myself.

Assume a simple 2-node (Compute/Controller) Grizzly cluster that is configured with one private network that has a subnet of 10.0.0.0/20 and is running neutron. Further, assume that GRE tunnels are used to provide isolation. OVS is the neutron plugin in use.

Let's say I add a third node to the cluster. It is not an openstack node (e.g., not a compute node, network node, or controller node), but a computer running Linux. Further, assume it has the ability to create and terminate GRE tunnels to the other cluster members. Finally, assume that it acts as a gateway to a server with a non-tenant IP address (e.g., 192.168.4.15), that is, the compute and controller are in the 192.168.3.0/24 subnet, and that on the compute and controller nodes, we've established GRE tunnels towards the gateway computer:

$ sudo ovs-vsctl add-port br-tun gre-4 -- set Interface gre-4 type=gre options:remote_ip=192.168.3.15 options:in_key=flow options:out_key=flow


Also, tunnels exist from the gateway computer to the controller and compute nodes. Thus, with regard to tunnels, we are full mesh. You can also assume that the device at 192.168.4.15 is sitting on a separate nic on the gateway server, and that mechanisms for taking packets from 192.168.4.15, and encapsulating them and pushing them out to the rest of the cluster via the GRE tunnels, as well as taking traffic inbound from the cluster via the GRE tunnels, de-encapsulating it, and routing it to 192.168.4.15 are in place.
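
For completeness, the gateway-side half of that mesh would be built with commands along these lines (assuming the gateway box also runs Open vSwitch with a br-tun bridge; the bridge and port names here are my own choices, not anything devstack created):

```shell
# On the gateway (192.168.3.15): GRE tunnels back to the controller and compute nodes
sudo ovs-vsctl add-br br-tun
sudo ovs-vsctl add-port br-tun gre-1 -- set Interface gre-1 type=gre \
    options:remote_ip=192.168.3.3 options:in_key=flow options:out_key=flow
sudo ovs-vsctl add-port br-tun gre-2 -- set Interface gre-2 type=gre \
    options:remote_ip=192.168.3.4 options:in_key=flow options:out_key=flow
```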

I've been using a combination of devstack and command line to realize the cluster. Here is the controller localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service quantum
HOST_IP=192.168.3.3
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
MULTI_HOST=True
LOGFILE=/opt/stack/logs/stack.sh.log
SERVICE_TOKEN=tokentoken

SCHEDULER=nova.scheduler.simple.SimpleScheduler
Q_PLUGIN=openvswitch

ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999

OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)
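
As a sanity check on the settings above, FIXED_RANGE and FIXED_NETWORK_SIZE should agree with each other: a /20 prefix leaves 12 host bits, which is 4096 addresses. A quick shell check:

```shell
# 10.0.0.0/20 -> 32 - 20 = 12 host bits -> 2^12 addresses
echo $(( 1 << (32 - 20) ))
```

which prints 4096, matching FIXED_NETWORK_SIZE=4096.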


On the compute node, it is nearly the same (likely there are things in here not strictly needed by a compute node, have not had the time to try removing things but instead have tried to keep the two localrc files in more or less sync):

HOST_IP=192.168.3.4
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096

SERVICE_TOKEN=tokentoken

ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=192.168.3.3
MYSQL_HOST=$SERVICE_HOST RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST MATCHMAKER_REDIS_HOST=$SERVICE_HOST
Q_PLUGIN=openvswitch

ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999

OVS_PHYSICAL_BRIDGE=br-eth1

Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)


I create VMs using the images and network configured by devstack scripts. These VMs get allocated 10.0.0.x addresses, and from within a qdhcp namespace on the controller I can ping VMs and see the GRE encapsulation in wireshark, so all is good.
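
(For reference, the namespace ping is done roughly like this; the qdhcp namespace name embeds the network's UUID, which I leave as a placeholder here:)

```shell
# On the controller: list the dhcp namespaces, then ping a VM from inside one
ip netns list
sudo ip netns exec qdhcp-<NET_UUID> ping -c 3 10.0.0.5
```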

From 192.168.4.15 (the device plugged into my gateway), I can ping (i.e., send ARP/ping packets via GRE to) a VM, say 10.0.0.5, running on, say, the compute node, but only if I first ssh into the VM and add a route to the device on the other end of the gateway. The default route table in the VM is:

root@controller:~/devstack# ssh cirros@10.0.0.5
cirros@10.0.0.5's password:
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        *               255.255.240.0   U     0      0        0 eth0

Meaning only packets addressed to the tenant subnet will escape the VM. To reach 192.168.4.15, I must do the following:

$ sudo route add 192.168.4.15 eth0


or, in a more brute-force way:

$ sudo route add default eth0

Once I do that, pings complete. As an aside, the ARP portion of the ping flow does not fail, because OVS appears to respond from its ARP cache. Nor does the ingress ping request fail to arrive; it is the egress ping reply that requires the added route.

This makes sense. I should only be able to reach tenant addresses (10.0.0.x) from a VM in a private network, and I can't issue a ping reply without that route.

On the other hand, I can't go adding routes manually to tenant VMs like this for my application.

I spent some time hacking on the devstack lib/quantum script. I tried the add_route argument to neutron subnet-create, figuring that would lead DHCP to push routes, but I don't really have a next hop to specify. (I also tried configuring a gateway in the subnet (10.0.0.1), but understandably, packets addressed to 192.168.4.15 won't escape through that gateway, either.) In fact, I saw no dnsmasq-related option or configuration that would lead to an appropriate route-table change.

Floating IPs are not what I want; I'm not trying to give 10.0.0.x addresses counterpart addresses in some other subnet.

So, the big question is: how do I get routes into a tenant VM other than to a gateway or VM belonging to that VM's subnet? I recognize that the GRE tunnels are there for tenant traffic, but in a gateway scenario like the one I am playing with, not all tenants will necessarily be in that subnet (and I may want to share that printer, database server, etc. with multiple tenants).

Alternately, is there some flavor of neutron networking that would lead to an appropriate route in the tenant VMs, and if so, what is it?

Thanks, syd

### Neutron, GRE tunnels, non-tenant IP address routing from VMs

Update (response to first answer): I gave this a quick try:

quantum net-create myNet --provider:network_type gre --provider:segmentation_id 2
quantum subnet-create b4673aff-999d-4e44-b679-ae112e7d0a46 10.0.0.0/20

(Note, I wanted to use a segmentation_id of 1, but the command failed and reported that 1 was already in use. I don't guess it matters much.)

I still am unable to escape the tenant VM. The route table in the VM was changed, with a default route to gateway 10.0.0.1 added. Interestingly, a wireshark trace shows the VM ARPing unsuccessfully for this gateway at 10.0.0.1. Perhaps 10.0.0.1 did not get created for some reason (I can't ping 10.0.0.1 from a namespace, which confirms it was not created). Anyway, do you see anything wrong with the way I am creating the network and subnet? It is pretty much straight off of the documentation you pointed at.

After deleting the default route-table entry and adding a default route in the VM:

$ sudo route add default eth0


The pings worked. So this issue is, it seems, still about getting the route table properly set up.

Original:

I have a question related to routing from tenant VMs that I've not been able to answer for myself.

Assume a simple 2-node (Compute/Controller) Grizzly cluster that is configured with one private network that has a subnet of 10.0.0.0/20 and is running neutron. Further, assume that GRE tunnels are used to provide isolation. OVS is the neutron plugin in use.

Let's say I add a third node to the cluster. It is not an openstack node (e.g., not a compute node, network node, or controller node), but a computer running Linux. Further, assume it has the ability to create and terminate GRE tunnels to the other cluster members. Finally, assume that it acts as a gateway to a server with a non-tenant IP address (e.g., 192.168.4.15), that is, the compute and controller are in the 192.168.3.0/24 subnet, and that on the compute and controller nodes, we've established GRE tunnels towards the gateway computer:

$ sudo ovs-vsctl add-port br-tun gre-4 -- set Interface gre-4 type=gre options:remote_ip=192.168.3.15 options:in_key=flow options:out_key=flow

Also, tunnels exist from the gateway computer to the controller and compute nodes. Thus, with regard to tunnels, we are full mesh. You can also assume that the device at 192.168.4.15 is sitting on a separate nic on the gateway server, and that mechanisms for taking packets from 192.168.4.15, and encapsulating them and pushing them out to the rest of the cluster via the GRE tunnels, as well as taking traffic inbound from the cluster via the GRE tunnels, de-encapsulating it, and routing it to 192.168.4.15 are in place.

I've been using a combination of devstack and command line to realize the cluster. Here is the controller localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service quantum
HOST_IP=192.168.3.3
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
MULTI_HOST=True
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
SCHEDULER=nova.scheduler.simple.SimpleScheduler
Q_PLUGIN=openvswitch
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)

On the compute node, it is nearly the same (likely there are things in here not strictly needed by a compute node, have not had the time to try removing things but instead have tried to keep the two localrc files in more or less sync):

HOST_IP=192.168.3.4
FLAT_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/20
FIXED_NETWORK_SIZE=4096
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=192.168.3.3
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
Q_PLUGIN=openvswitch
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
OVS_PHYSICAL_BRIDGE=br-eth1
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_SRV_OPTS=(tenant_network_type=gre)

I create VMs using the images and network configured by devstack scripts. These VMs get allocated 10.0.0.x addresses, and from within a qdhcp namespace on the controller I can ping VMs and see the GRE encapsulation in wireshark, so all is good.

From 192.168.4.15 (the device plugged into my gateway), I can ping (i.e., send ARP/ping packets via GRE to a VM, say 10.0.0.5) running on, say, the compute node, but only if I first ssh into the VM and add a route to the device on the other end of the gateway. The default route table in the VM is:

root@controller:~/devstack# ssh cirros@10.0.0.5
cirros@10.0.0.5's password:
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        *               255.255.240.0   U     0      0        0 eth0


Meaning only packets addressed to the tenant subnet will escape the VM. To reach 192.168.4.15, I must do the following:

$ sudo route add 192.168.4.15 eth0

or, in a more brute-force way:

$ sudo route add default eth0


Once I do that, pings complete. As an aside, the ARP portion of the ping flow does not fail, because OVS appears to respond from its ARP cache. Nor does the ingress ping request fail to arrive; it is the egress ping reply that requires the added route.

This makes sense. I should only be able to reach tenant addresses (10.0.0.x) from a VM in a private network. I can't issue a PING reply without that route.

On the other hand, I can't go adding routes manually to tenant VMs like this for my application.

I spent some time hacking on the devstack lib/quantum script. I tried the add_route argument to neutron subnet-create, figuring that would lead DHCP to push routes, but I don't really have a next hop to specify. (I also tried configuring a gateway in the subnet (10.0.0.1), but understandably, packets addressed to 192.168.4.15 won't escape through that gateway, either.) In fact, I saw no dnsmasq-related option or configuration that would lead to an appropriate route-table change.
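
For reference, the concrete shape of that experiment was along these lines (a sketch only; the flag is --host-route if I have the CLI right, and the nexthop value is exactly the sticking point, since nothing in the tenant subnet forwards toward 192.168.4.15):

```shell
# Hypothetical: ask neutron/dnsmasq to push a classless static route
# (DHCP option 121); 10.0.0.1 is an assumed placeholder nexthop
quantum subnet-create myNet 10.0.0.0/20 \
    --host-route destination=192.168.4.0/24,nexthop=10.0.0.1
```

Even then, whether the guest applies the route depends on its DHCP client honoring option 121 (cirros's udhcpc may not).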

Floating IPs are not what I want; I'm not trying to give 10.0.0.x addresses counterpart addresses in some other subnet.

So, the big question is: how do I get routes into a tenant VM other than to a gateway or VM belonging to that VM's subnet? I recognize that the GRE tunnels are there for tenant traffic, but in a gateway scenario like the one I am playing with, not all tenants will necessarily be in that subnet (and I may want to share that printer, database server, etc. with multiple tenants).

Alternately, is there some flavor of neutron networking that would lead to an appropriate route in the tenant VMs, and if so, what is it?

Thanks,

syd