Juno Packstack instances not receiving traffic on VLAN provider network

asked 2015-05-12 04:39:56 -0500 by yee379

I have a Juno Packstack environment where I'm trying to set up a VLAN provider network. I'm also using a bonded LACP pair for the OVS networking:

Bridge br-uplink
    Port "bond0"
        Interface "em2"
        Interface "em1"
    Port phy-br-uplink
        Interface phy-br-uplink
            type: patch
            options: {peer=int-br-uplink}
    Port br-uplink
        Interface br-uplink
            type: internal
Bridge br-int
    fail_mode: secure
    Port "qvo4f7c59ab-7b"
        tag: 2
        Interface "qvo4f7c59ab-7b"
    Port br-int
        Interface br-int
            type: internal
    Port int-br-uplink
        Interface int-br-uplink
            type: patch
            options: {peer=phy-br-uplink}
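
For reference, the uplink bridge and LACP bond above were created with roughly the following ovs-vsctl commands (a sketch from memory, not necessarily the exact options I used; as far as I know br-int and the patch ports are created by the OVS agent itself):

# create the uplink bridge and attach em1/em2 as a single LACP bond (sketch)
ovs-vsctl add-br br-uplink
ovs-vsctl add-bond br-uplink bond0 em1 em2 lacp=active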

And ip a shows:

4: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether ec:f4:bb:bf:be:b8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::eef4:bbff:febf:beb8/64 scope link
       valid_lft forever preferred_lft forever
5: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether ec:f4:bb:bf:be:ba brd ff:ff:ff:ff:ff:ff
    inet6 fe80::eef4:bbff:febf:beba/64 scope link
       valid_lft forever preferred_lft forever
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether ba:70:2b:db:e7:e4 brd ff:ff:ff:ff:ff:ff
8: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 32:30:e1:8e:71:4f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3030:e1ff:fe8e:714f/64 scope link
       valid_lft forever preferred_lft forever
10: br-uplink: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether ec:f4:bb:bf:c1:f0 brd ff:ff:ff:ff:ff:ff
    inet 172.23.99.12/26 brd 172.23.99.63 scope global br-uplink
       valid_lft forever preferred_lft forever
    inet6 fe80::eef4:bbff:febf:c1f0/64 scope link
       valid_lft forever preferred_lft forever
11: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether b6:65:fb:75:87:46 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b465:fbff:fe75:8746/64 scope link
       valid_lft forever preferred_lft forever

On the (Cisco) switch side I have:

interface port-channel912
  description net-pool02
  switchport mode trunk
  switchport trunk native vlan 991
  switchport trunk allowed vlan 831,991,1441

912   Po912(SU)   Eth      LACP      Eth102/1/25(P)  Eth108/1/25(P)

(VLAN 991, the native VLAN, is the compute node's own subnet)
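
(To double-check the switch side, I believe show interface trunk on the Nexus should list 831 and 1441 as allowed and forwarding on Po912 -- command quoted from memory:)

show interface trunk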

And the local compute node's Neutron Open vSwitch plugin config is:

# cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini | grep -vE '(^#|^\s*$)'
[ovs]
tenant_network_type = vlan
network_vlan_ranges = vlan:831:831,vlan:1441:1441
enable_tunneling = False
integration_bridge = br-int
bridge_mappings =vlan:br-uplink
[agent]
polling_interval = 2
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
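
(As I understand it, the label on the left of bridge_mappings and network_vlan_ranges -- "vlan" here -- is the physical network name the OVS agent advertises, and for each wired-up port the agent should install flows that rewrite between the local tag on br-int and the real segmentation ID on br-uplink. This is roughly how I've been inspecting those flows, as a sketch:)

# sketch: dump the agent-installed flows; for a working vlan 831 network I'd
# expect a mod_vlan_vid:831 rewrite on br-uplink for outbound traffic and a
# rewrite back to the local tag (tag 2 above) on br-int for inbound traffic
ovs-ofctl dump-flows br-uplink
ovs-ofctl dump-flows br-int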

So I create the network and subnet as follows:

$ neutron net-create SERV-PRIVATE --provider:network_type vlan --provider:physical_network vlannet --provider:segmentation_id 831
$ neutron subnet-create SERV-PRIVATE 172.23.66.0/23 --gateway 172.23.66.1 --dns-nameserver 172.23.111.111 --disable-dhcp --allocation-pool start=172.23.67.128,end=172.23.67.131
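
(And as a quick sanity check that the provider attributes stuck, neutron net-show should echo them back:)

# verify network_type / physical_network / segmentation_id on the new network
neutron net-show SERV-PRIVATE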

I don't have the Neutron server on the same network, so I boot an ...
