
Neutron Router - Not working [closed]

asked 2017-05-02 12:17:51 -0500


updated 2017-05-02 12:47:49 -0500

Hi,

I am currently testing OpenStack with a packstack all-in-one (AIO) installation, but I'm having a hard time getting networking up and running. The packstack instance runs on a CentOS 7.3 Hyper-V guest with six NICs attached:

  • eth0 = openstack data/management network
  • eth1 = controller management ip
  • eth2 = br-public-network
  • eth3 = br-local-network
  • eth4 = br-load-balancing-network
  • eth5 = br-del-corp

I'm using Open vSwitch, and therefore created the following bridges before running the packstack installer:

ovs-vsctl add-br br-public-network ; ovs-vsctl add-port br-public-network eth2
ovs-vsctl add-br br-local-network ; ovs-vsctl add-port br-local-network eth3
ovs-vsctl add-br br-load-balancing-network ; ovs-vsctl add-port br-load-balancing-network eth4
ovs-vsctl add-br br-del-corp ; ovs-vsctl add-port br-del-corp eth5
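Before running the installer, it's worth confirming that each bridge actually picked up its uplink port. A quick sanity check (output will vary with your environment):

```shell
# Show all bridges and their attached ports at once
ovs-vsctl show

# Or check a single bridge explicitly; e.g. this should print "eth2"
ovs-vsctl list-ports br-public-network
```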

My packstack answer file contains the following settings for neutron/OVS:

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-public-network
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch,hyperv
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1,physnet2:500:2000,physnet3:2010:3010
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-public-network,physnet2:br-local-network,physnet3:br-load-balancing-network
CONFIG_NEUTRON_OVS_EXTERNAL_PHYSNET=physnet1

I've been able to create and use the following networks:

  • public-network, physnet1 (flat), no router, public routeable cidr subnet, no dhcp
  • private-network, physnet2 (vlan id: 501), no router, local non-routeable cidr subnet (10.0.10.0/24), no dhcp

Both networks are working fine. I can spin up instances, and they can communicate with either the external network (e.g. the Internet) or the local network.

Now, when it comes to creating a public network that would be used for load-balancing projects and would therefore need a neutron router, things become problematic... I'm creating the "load balancing network" using the following commands:

neutron net-create --provider:network_type=vlan --provider:physical_network=physnet3 --router:external=True PUBLIC-CLUSTER-NETWORK
neutron subnet-create PUBLIC-CLUSTER-NETWORK PUBLIC_CIDR_HERE/27 --gateway GW_IP_HERE --allocation-pool start=IP_START_HERE,end=IP_END_HERE --disable-dhcp --name PUBLIC-CLUSTER-SUBNET --dns-nameservers list=true 8.8.8.8 4.2.2.2
neutron router-create ROUTER-PUBLIC-CLUSTER-NETWORK
neutron router-gateway-set ID_ROUTER ID_NETWORK
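After setting the gateway, the router's external port should receive an address from the allocation pool. One way to confirm this (IDs are placeholders, as in the commands above):

```shell
# Show the router's external_gateway_info (network ID and fixed IP)
neutron router-show ID_ROUTER

# List the router's ports; the entry on PUBLIC-CLUSTER-SUBNET
# corresponds to the qg- interface in the qrouter namespace
neutron router-port-list ID_ROUTER
```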

Then, I'm creating a "local cluster network" that would be using the router created above:

neutron net-create --provider:network_type=vlan --provider:physical_network=physnet2 CLIENT0001-CLUSTER-NETWORK --tenant-id=e0f7fb96271f48588e2aac86d66ae42e
neutron subnet-create CLIENT0001-CLUSTER-NETWORK 192.168.23.0/24 --name CLIENT0001-CLUSTER-SUBNET --dns-nameservers list=true 8.8.8.8 4.2.2.2 --disable-dhcp
neutron router-interface-add ID_ROUTER ID_CLIENT_SUBNET
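One thing to keep in mind with this setup: since the subnet was created with --disable-dhcp and no --gateway, Neutron assigns the router interface the subnet's first address (192.168.23.1) by default, and guests must be configured statically to use it. Inside an instance, that would look roughly like this (interface name and guest IP are assumptions):

```shell
# Static addressing inside a guest on CLIENT0001-CLUSTER-NETWORK
ip addr add 192.168.23.10/24 dev eth0
ip route add default via 192.168.23.1
```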

The resulting Neutron router IP configuration is:

# ip netns exec qrouter-78ab8780-6282-4b8b-b840-b92ba0916e62 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
14: qg-0f38cb25-ae: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:76:7d:63 brd ff:ff:ff:ff:ff:ff
    inet 198.xxx.xxx.61/27 brd 198.xxx.xxx.63 scope global qg-0f38cb25-ae
       valid_lft forever preferred_lft forever
    inet6 xxxxxx/64 scope link
       valid_lft forever preferred_lft forever
15: qr-b4c68450-23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:53:48:5f brd ff:ff:ff:ff:ff:ff
    inet ...
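When the router's interfaces look correct but traffic still doesn't flow, testing from inside the namespace helps isolate whether the problem is the router itself or the underlay (router ID as above, gateway address is a placeholder):

```shell
# Ping the upstream gateway from within the router namespace
ip netns exec qrouter-78ab8780-6282-4b8b-b840-b92ba0916e62 ping -c 3 GW_IP_HERE

# Inspect the flows the OVS agent programmed on the physical bridge,
# to verify VLAN tagging on the uplink
ovs-ofctl dump-flows br-load-balancing-network
```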

Closed for the following reason: the question is answered; right answer was accepted by nebukazar
Close date: 2017-05-04 10:54:08

1 answer


answered 2017-05-04 10:53:48 -0500


Silly me... the ethX devices had to be brought up; for some reason, they weren't!
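For anyone hitting the same thing: bringing the uplinks up, and keeping them up across reboots on CentOS 7, looks roughly like this (interface names as listed in the question; the ifcfg paths assume the standard network-scripts layout):

```shell
# Bring the OVS uplink NICs up immediately
for nic in eth2 eth3 eth4 eth5; do
    ip link set "$nic" up
done

# Persist across reboots by enabling ONBOOT in each ifcfg file
for nic in eth2 eth3 eth4 eth5; do
    sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-"$nic"
done
```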



Stats

Seen: 668 times

Last updated: May 04 '17