You specify network_vlan_ranges = physnet1

So it looks like an ML2&OVS&VLAN RDO Juno setup via packstack; however, in a multi-node
deployment packstack fails to create the [OVS] section in ml2_conf.ini. This procedure was tested and works on VMs running on an F21 QEMU/KVM hypervisor, i.e. an RDO ML2&OVS&VLAN setup
instead of the standard VXLAN one.

I also suggest you run just packstack --allinone and, upon completion (see the CLI sketch after the list):

1. Drop router1
2. Drop the public && private subnets and networks
3. Recreate the router and the public and private networks as you need them to be (the public network is what matters)
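
With the Kilo-era neutron CLI that could look roughly like the sketch below; the names router1, public, private, public_subnet and private_subnet are the packstack --allinone defaults, the 192.168.1.0/24 addressing is only an example taken from the external network used later in this answer, and the 10.0.0.0/24 tenant CIDR is a placeholder:

    # source admin credentials first
    . keystonerc_admin
    # tear down the demo topology created by packstack --allinone
    neutron router-gateway-clear router1
    neutron router-interface-delete router1 private_subnet
    neutron router-delete router1
    neutron subnet-delete public_subnet ; neutron net-delete public
    neutron subnet-delete private_subnet ; neutron net-delete private
    # recreate the external (public) network to match the real LAN
    neutron net-create public --router:external=True
    neutron subnet-create public 192.168.1.0/24 --name public_subnet --enable_dhcp=False \
        --allocation-pool start=192.168.1.100,end=192.168.1.150 --gateway 192.168.1.1
    # recreate the tenant network and the router
    neutron net-create private
    neutron subnet-create private 10.0.0.0/24 --name private_subnet
    neutron router-create router1
    neutron router-gateway-set router1 public
    neutron router-interface-add router1 private_subnet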

View: http://bderzhavets.blogspot.com/2015/05/rdo-kilo-set-up-two-kvms-nodes.html
BTW, your "network_vlan_ranges = physnet" is incorrect.
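
For reference, a minimal sketch of the VLAN-related stanzas in /etc/neutron/plugins/ml2/ml2_conf.ini (including the [OVS] section packstack fails to write on multinode deployments); the 100:200 VLAN range and the br-eth1 bridge name are placeholders for your environment:

    [ml2]
    type_drivers = vlan,flat
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:200

    [ovs]
    enable_tunneling = False
    bridge_mappings = physnet1:br-eth1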

Next step: while pinging the floating IP of the VM, identify the tap interface plugged into the qbr Linux bridge (via brctl show) && run tcpdump -vv -i <tap-interface>. Check whether you see ICMP replies from the VM on that tap device, then identify the qvo interface (on br-int) that corresponds to the tap's paired qvb interface.

TCPDUMP troubleshooting on the compute node:

    [root@ip-192-169-142-137 ~]# brctl show
    bridge name bridge id       STP enabled interfaces
    qbr7e94450a-6a      8000.daaf111690c4   no      qvb7e94450a-6a
                                tap7e94450a-6a
    qbrcd244411-ad      8000.7ad4b8e2896b   no      qvbcd244411-ad
    [root@ip-192-169-142-137 ~]# ovs-vsctl show | grep 7e94450a-6a
            Port "qvo7e94450a-6a"
                Interface "qvo7e94450a-6a"
    [root@ip-192-169-142-137 ~]# 

# tcpdump -vv -i tap7e94450a-6a
# tcpdump -vv -i qvo7e94450a-6a
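
To cut the noise down you can, for example, restrict the capture to ICMP and double-check which OVS bridge the qvo end is plugged into (the port names below are just the ones from this example):

    # capture only ICMP on the tap device while pinging the floating IP from outside
    tcpdump -n -vv -i tap7e94450a-6a icmp
    # confirm the qvo end is plugged into br-int and note its VLAN tag
    ovs-vsctl port-to-br qvo7e94450a-6a
    ovs-vsctl show | grep -A3 qvo7e94450a-6a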

Another check to be done: if you implemented security-group rules via the neutron (nova) CLI or via Horizon, make sure the corresponding Neutron chains, which actually make these rules work, are present in /etc/sysconfig/iptables:

[root@ip-192-169-142-137 ~]# iptables-save | grep 7e94450a-6a
-A neutron-openvswi-FORWARD -m physdev --physdev-out tap7e94450a-6a --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tap7e94450a-6a --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tap7e94450a-6a --physdev-is-bridged -m comment --comment "Direct incoming traffic from VM to the security group chain." -j neutron-openvswi-o7e94450a-6
-A neutron-openvswi-sg-chain -m physdev --physdev-out tap7e94450a-6a --physdev-is-bridged -m comment --comment "Jump to the VM specific chain." -j neutron-openvswi-i7e94450a-6
-A neutron-openvswi-sg-chain -m physdev --physdev-in tap7e94450a-6a --physdev-is-bridged -m comment --comment "Jump to the VM specific chain." -j neutron-openvswi-o7e94450a-6
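
If ping or SSH is blocked simply because no such rules exist yet, a minimal sketch with the neutron CLI of that era (assuming the instance uses the "default" security group) would be:

    neutron security-group-rule-create --protocol icmp --direction ingress default
    neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 \
        --direction ingress default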

192.168.1.0/24 is the mgmt && external network:

[root@CentOS71WKS ~(keystone_admin)]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 br-ex
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 enp2s0  <== Metadata access
169.254.0.0     0.0.0.0         255.255.0.0     U     1005   0        0 br-ex   <== Metadata access
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br-ex
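
On an RDO box like this one, br-ex usually carries the host's own 192.168.1.x address while the physical NIC is turned into an OVS port of br-ex. A hedged sketch of the ifcfg files (the address below is a placeholder; enp2s0 is the NIC from the routing table above):

    # /etc/sysconfig/network-scripts/ifcfg-br-ex
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    # host's own address on the 192.168.1.0/24 mgmt/external LAN (placeholder)
    IPADDR=192.168.1.37
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-enp2s0
    DEVICE=enp2s0
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes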

View also http://www.opencloudblog.com/?p=675 and http://bderzhavets.blogspot.com/2014/11/access-to-metadata-via-qrouter.html

On the network node, a neutron namespace metadata proxy (used by cloud-init) runs in the network namespace of the router. This proxy listens on port 8775. All traffic that reaches the router namespace for the target 169.254.169.254:8775 is redirected to this proxy, forwarded over a Unix socket to the neutron metadata agent in the default network namespace, and then passed on to the nova metadata service.
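
A quick way to verify that chain on the network node (the router UUID below is a placeholder; take it from `neutron router-list` or `ip netns list`):

    # list the qrouter-/qdhcp- namespaces on the node
    ip netns list
    # the NAT rule redirecting 169.254.169.254 traffic to the proxy inside the router namespace
    ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 169.254.169.254
    # the neutron-ns-metadata-proxy listening inside that namespace
    ip netns exec qrouter-<router-uuid> ss -lntp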