
kevinbenton's profile - activity

2017-05-01 02:29:20 -0500 answered a question Ocata Neutron Error : Interface name must be shorter than IFNAMSIZ (15)

If you're using the openvswitch agent, you should be using the OVSHybridIptablesFirewallDriver instead of IptablesFirewallDriver.
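A minimal sketch of the relevant setting, assuming the stock class path in neutron.agent.linux.iptables_firewall; the exact config file (e.g. openvswitch_agent.ini) varies by deployment:

```ini
# In the openvswitch agent's config file (file name is deployment-specific)
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```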

2016-11-20 04:36:47 -0500 commented question Newton DHCP Agent Problem

That interface error is a bit misleading. Stevedore generates it when it can't load the driver by an alias, since you are providing a class path. If it actually couldn't load the interface driver, it would be a fatal condition and the agent would exit. The get_value_from_file debug messages are also normal. Can your VMs ping the DHCP agent?
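One way to check connectivity is to ping between the VM and the DHCP port from inside the qdhcp namespace on the node running the DHCP agent; this is a sketch, and the namespace suffix and IP address are placeholders for your environment:

```shell
# List the DHCP namespaces on the node running the DHCP agent
sudo ip netns list

# From inside the network's qdhcp namespace, ping the VM
# (<network-id> and <vm-ip> are placeholders for your network's values)
sudo ip netns exec qdhcp-<network-id> ping -c 3 <vm-ip>
```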

2016-11-18 01:24:41 -0500 answered a question Networking Configuration

It's not clear to me from your question if vlan 200 on switch3 is the same as vlan 200 on switch4. I'm assuming they are not the same (i.e. no connectivity between the switches on vlan 200) for this answer.

For each unique "vlan domain", you'll want to define a different 'physnet' in Neutron terms. VLANs are scoped to physnets, so you'll be able to reuse VLAN 200 for multiple tenant networks (as long as each is on a different physnet).

Assuming the vlans are scoped to a specific switch, naming each physnet after its switch would probably make sense.

So in your ml2 configuration file, look for the section titled 'ml2_type_vlan' and adjust the 'network_vlan_ranges' value to add in your vlan ranges for each physnet.

[ml2_type_vlan]
network_vlan_ranges = switch2:200:220,switch3:200:220

Then on each compute node's openvswitch agent config, you map bridges to physnets.

[ovs]
bridge_mappings = switch2:br-switch2,switch3:br-switch3

Then add the interfaces that go to switch2 into an OVS bridge named br-switch2 and the interfaces for switch3 into br-switch3.
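The bridge setup above can be sketched with ovs-vsctl; the interface names eth1 and eth2 are assumptions for illustration, so substitute your actual NIC names:

```shell
# Create the per-physnet bridges referenced in bridge_mappings
sudo ovs-vsctl --may-exist add-br br-switch2
sudo ovs-vsctl --may-exist add-br br-switch3

# Attach the physical interfaces cabled to each switch
# (eth1/eth2 are placeholders for your actual NIC names)
sudo ovs-vsctl --may-exist add-port br-switch2 eth1
sudo ovs-vsctl --may-exist add-port br-switch3 eth2
```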

Now when you create a tenant network, you can specify the physnet and segmentation ID it should use.

neutron net-create switch2-testnet --provider:physical_network switch2 --provider:network_type vlan --provider:segmentation_id 210

The above command will create a neutron network on the switch2 physnet using vlan tag 210. Now whenever a VM is booted to that network, the OVS agent will wire up the VM port to send traffic out the bridge that maps to physnet 'switch2' (in this case br-switch2), with the VLAN tag 210.
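For example, booting a VM onto that network might look like the following sketch; the flavor, image, and net ID are placeholders:

```shell
# Boot a VM attached to the switch2-testnet network
# (m1.small, cirros, and <net-id> are placeholders for your environment)
nova boot --flavor m1.small --image cirros \
  --nic net-id=<net-id> test-vm
```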

2015-07-31 13:28:49 -0500 received badge  Teacher (source)
2015-07-31 13:28:49 -0500 received badge  Necromancer (source)
2015-07-30 17:55:49 -0500 received badge  Supporter (source)
2015-07-30 16:54:35 -0500 answered a question iptables INVALID rule preventing RST packets on closed ports between VMs

Root Cause

This issue is caused by the iptables setup in the reference OVS implementation in Neutron.

Each VM gets its own filtering bridge, so the path of a packet between two VMs on the same host looks like this:

VM1 -> bridge1 (iptables filtering) -> OVS -> bridge2 (iptables filtering) -> VM2

In this setup each packet goes through a conntrack lookup twice (once on each bridge), and the conntrack state is shared between the two filtering bridges. That sharing is normally not a problem because conntrack is keeping track of both sides of the TCP connection. The issue comes with the RST flag.

When conntrack encounters a TCP packet with a RST flag it immediately destroys the conntrack entry for that connection. This means that once the RST packet reaches the second filtering bridge, the conntrack state has already been removed, so the RST packet is marked as INVALID.

VM1 -> bridge1 (iptables filtering) -> OVS -> bridge2 (iptables filtering) -> VM2
RST >> conntrack destroys conn.     >>>>>>>>> no match, INVALID DROP

If you run conntrack -E -o timestamp while attempting to make a connection that causes a RST, you can see the RST is destroying the state in conntrack:

~$ sudo conntrack -E -o timestamp
[1438290214.284944]     [NEW] tcp      6 120 SYN_SENT src=10.0.0.9 dst=10.0.0.10 sport=36397 dport=99 [UNREPLIED] src=10.0.0.10 dst=10.0.0.9 sport=99 dport=36397 zone=1
[1438290214.285129] [DESTROY] tcp      6 src=10.0.0.9 dst=10.0.0.10 sport=36397 dport=99 [UNREPLIED] src=10.0.0.10 dst=10.0.0.9 sport=99 dport=36397 zone=1

The Fix

There is a bug open for this behavior here: https://bugs.launchpad.net/neutron/+bug/1478925

However, it won't be fixed for Icehouse because Icehouse is already EOL. It will be fixed in Liberty; whether it can be back-ported to Juno and Kilo will depend on how complex the fix is.

This can be fixed with conntrack zones, which were only added in Kilo, so if that's the route taken the fix won't make it to Juno. It could also be fixed with a hack that gets iptables to skip the DESTROY phase, but that would leave conntrack entries for reset connections open until they expire, so it's not likely that will be an acceptable solution.

2015-05-08 23:27:10 -0500 answered a question Allocate public IP to instances

You can do this by pre-creating the networks as an admin with the --provider attributes that let you specify the VLAN each network maps to.[1]

The routing for those networks could then either be performed by physical routers directly on those VLANs or you could create an admin router attached to all of the networks. To disable SNAT on the admin router you would pass the --disable-snat flag to the router-gateway-set command.
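A sketch of the admin-router approach; the router name 'admin-router' and external network name 'public-net' are placeholders:

```shell
# Create the admin router and attach its gateway with SNAT disabled
# ('admin-router' and 'public-net' are placeholder names)
neutron router-create admin-router
neutron router-gateway-set --disable-snat admin-router public-net
```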

  1. http://docs.openstack.org/admin-guide-cloud/content/provider_api_workflow.html

2014-09-06 18:06:33 -0500 answered a question how is multi-tenancy achieved on networking if v2 API does not include the tenant_id in the URI?

The tenant ID is included in the body of the objects being requested (e.g. [1]).
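As a sketch, a POST /v2.0/networks request body carries the tenant in the payload rather than the URI; the ID value here is a placeholder:

```json
{
  "network": {
    "name": "net1",
    "admin_state_up": true,
    "tenant_id": "<tenant-id-placeholder>"
  }
}
```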

  1. https://github.com/bigswitch/neutron/blob/158d92e28fe9d7ae77be7f1706f9a34abc42aa26/neutron/api/v2/attributes.py#L633