
campee's profile - activity

2016-05-24 20:27:54 -0500 commented answer Unable to ping private network interface on router or any instances attached to router

Do you happen to be running your Neutron machine as a VM in VMware?

2016-05-23 13:27:43 -0500 received badge  Enthusiast
2016-05-21 07:27:02 -0500 received badge  Famous Question (source)
2016-05-20 10:35:01 -0500 commented answer Unable to ping private network interface on router or any instances attached to router

Here is the output you requested:

http://pastebin.com/9SjeTEax

What do you think the 'physical_interface_mappings' line should look like? Like this?

physical_interface_mappings = public:eth1, private:eth0 (is this supposed to be comma-separated?)
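
For reference, a sketch of the multi-mapping syntax in linuxbridge_agent.ini: mappings are a comma-separated list of `<physical_network>:<interface>` pairs, conventionally with no space after the comma. Which interface backs which network here is an assumption taken from the comment above, not a confirmed answer:

```ini
[linux_bridge]
# Comma-separated <physical_network>:<interface> pairs, no spaces.
# The eth0/eth1 assignments below are assumptions from the comment above.
physical_interface_mappings = public:eth1,private:eth0
```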

2016-05-20 04:01:17 -0500 received badge  Notable Question (source)
2016-05-20 01:15:11 -0500 received badge  Famous Question (source)
2016-05-20 00:40:03 -0500 received badge  Popular Question (source)
2016-05-19 19:28:23 -0500 asked a question Unable to ping private network interface on router or any instances attached to router

I've set up Neutron public and tenant networking according to the Ubuntu Liberty install guide, but I'm having trouble getting tenant networks working. I have created a public and a private network and assigned a subnet to each of them. I can create an instance on the public network and everything works. When I create an instance on the private network and assign an IP to it, I can't communicate with it. I also can't ping the 'public' side of the router attached to the private subnet, and when I VNC into the instance running on the private network, it can only ping as far as its gateway on the private network; nothing past that works.

I've created two networks in Neutron:

$ neutron subnet-list
+--------------------------------------+---------+----------------+--------------------------------------------------+
| id                                   | name    | cidr           | allocation_pools                                 |
+--------------------------------------+---------+----------------+--------------------------------------------------+
| 5ae8659a-525f-4173-b413-e3934428e840 | public  | 172.18.0.0/22  | {"start": "172.18.1.195", "end": "172.18.1.230"} |
| 41805d59-3b7d-4a5f-8cc7-9f795368ea58 | private | 192.168.1.0/24 | {"start": "192.168.1.2", "end": "192.168.1.254"} |
+--------------------------------------+---------+----------------+--------------------------------------------------+
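
As a quick sanity check (an aside, not from the original post), the allocation pools above can be verified to sit inside their subnets' CIDRs with Python's stdlib `ipaddress` module; with a /22 like 172.18.0.0/22 this is less obvious by eye:

```python
import ipaddress

def pool_in_cidr(cidr: str, start: str, end: str) -> bool:
    """Return True if both ends of the allocation pool fall inside the CIDR."""
    net = ipaddress.ip_network(cidr)
    return (ipaddress.ip_address(start) in net
            and ipaddress.ip_address(end) in net)

# Values taken from the subnet-list output above.
print(pool_in_cidr("172.18.0.0/22", "172.18.1.195", "172.18.1.230"))   # True
print(pool_in_cidr("192.168.1.0/24", "192.168.1.2", "192.168.1.254"))  # True
```

Both pools check out, so the problem described above is unlikely to be a mis-scoped allocation pool.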

Here are the details of each:

$ neutron subnet-show private

+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr              | 192.168.1.0/24                                   |
| dns_nameservers   | 8.8.8.8                                          |
| enable_dhcp       | True                                             |
| gateway_ip        | 192.168.1.1                                      |
| host_routes       |                                                  |
| id                | 41805d59-3b7d-4a5f-8cc7-9f795368ea58             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | private                                          |
| network_id        | 42067c22-3499-4d92-adc2-017f4085ce9a             |
| subnetpool_id     |                                                  |
| tenant_id         | 5648ab9f768f4fc0afc541371dacf2f9                 |
+-------------------+--------------------------------------------------+

$ neutron subnet-show public

+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "172.18.1.195", "end": "172.18.1.230"} |
| cidr              | 172.18.0.0/22                                    |
| dns_nameservers   | 172.18.1.5                                       |
| enable_dhcp       | True                                             |
| gateway_ip        | 172.18.0.1                                       |
| host_routes       |                                                  |
| id                | 5ae8659a-525f-4173-b413-e3934428e840             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | public                                           |
| network_id        | 11a30862-96f0-40d3-a826-f42924d10086             |
| subnetpool_id     |                                                  |
| tenant_id         | cf7bd0047cec46c9a51675aef6add576                 |
+-------------------+--------------------------------------------------+

My Neutron system and my compute node each have two network interfaces: eth1 for public networks and eth0 for management/private networks.

On the NEUTRON/CONTROL system:

$ ifconfig eth0

eth0  Link encap:Ethernet  HWaddr 00:50:56:a3:41:ec  
          inet addr:172.18.1.80  Bcast:172.18.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1319835 errors:0 dropped:12079 overruns:0 frame:0
          TX packets:622529 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1804763786 (1.8 GB)  TX bytes:764407696 (764.4 MB)

$ ifconfig eth1

eth1   Link encap:Ethernet  HWaddr 00:50:56:a3:2e:df  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1462139 errors:9204 dropped:18341 overruns:0 frame:0
          TX packets:78 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:121271062 (121.2 MB)  TX bytes:6141 (6.1 KB)

/etc/nova/nova.conf:

  [neutron]
  url = http://openstack-control1:9696
  auth_url = http://openstack-control1:35357
  auth_plugin = password
  project_domain_id = default
  user_domain_id = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = XXXXXX

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:

  [linux_bridge]
  physical_interface_mappings = public:eth1
  [vxlan]
  enable_vxlan = True
  local_ip = 172.18.1.80
  l2_population = True
  [agent]
  prevent_arp_spoofing = True
  [securitygroup]
  enable_security_group = True
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

/etc/neutron/l3_agent.ini:

  [DEFAULT]
  interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  external_network_bridge ...
(more)
2016-02-08 14:13:11 -0500 received badge  Notable Question (source)
2016-02-08 14:13:11 -0500 received badge  Popular Question (source)
2016-01-13 06:39:01 -0500 asked a question Two-node RDO install with bridged external networking not working

OS=Linux Distro=CentOS7

I used RDO to set up a lab with two systems running inside of KVM: one compute node and a 'control' system running neutron, glance, keystone, horizon, etc. Each system has two network interfaces, eth0 and eth1, both on the same network: 192.168.122.0/24. On each server, eth0 is assigned a static IP address and is used for management traffic. eth1 is for Neutron and currently has no IP configuration on either host. I want to assign eth1 to an OVS bridge named br-ex, which RDO installed by default, and have instances created on the compute node be assigned IP addresses by the DHCP server running on my lab network, not by a router within OpenStack. As I understand it, this is referred to as a 'provider' network.

I ran the following on my 'control' node to associate eth1 with br-ex:

    ovs-vsctl add-port br-ex eth1

br-ex shows the following now:

Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port "eth1"
        Interface "eth1"

I changed ifcfg-eth1 to look like this:

DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

And changed ifcfg-br-ex to look like this:

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes

1st question: I actually don't see 'br-ex' on my compute node when I run 'ovs-vsctl show', but I do see it on the 'control' node. Why is that? Is it normal?

I then tried to set up external bridged networking by following http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/

Then I ran these commands on my Neutron host:

[root@openstack-control ~(keystone_admin)]# neutron net-create public --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b1e9055c-3ded-4b45-a8ac-445de57d10c6 |
| mtu                       | 0                                    |
| name                      | public                               |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 57                                   |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 0290aec78d0045c981a740ee540e6be6     |
+---------------------------+--------------------------------------+
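
One thing worth noting in the output above: provider:network_type is vxlan, so this 'public' network was created as a tenant overlay rather than a flat network attached to br-ex. For a bridged/provider setup, the ML2 configuration typically declares a flat network type and a bridge mapping along these lines (a sketch only; the physnet1 name and the exact file paths are assumptions, not taken from the original post):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini  (physnet1 is an assumed label)
[ml2]
type_drivers = flat,vxlan

[ml2_type_flat]
flat_networks = physnet1

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = physnet1:br-ex
```

With a mapping like that in place, the network would then be created with something like `neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external=True` so that traffic actually egresses through br-ex.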


[root@openstack-control ~(keystone_admin)]# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=192.168.122.100,end=192.168.122.130 --gateway=192.168.122.1 public 192.168.122.0/24
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field             | Value                                                  |
+-------------------+--------------------------------------------------------+
| allocation_pools  | {"start": "192.168.122.100", "end": "192.168.122.130"} |
| cidr              | 192.168.122.0/24                                       |
| dns_nameservers   |                                                        |
| enable_dhcp       | False                                                  |
| gateway_ip        | 192.168.122.1                                          |
| host_routes       |                                                        |
| id                | f1aee2ee-02c9-46c0-9fa9-5b1d53c36d55                   |
| ip_version        | 4                                                      |
| ipv6_address_mode |                                                        |
| ipv6_ra_mode      |                                                        |
| name              | public_subnet                                          |
| network_id        | b1e9055c-3ded-4b45-a8ac-445de57d10c6                   |
| subnetpool_id     |                                                        |
| tenant_id         | 0290aec78d0045c981a740ee540e6be6                       |
+-------------------+--------------------------------------------------------+


[root@openstack-control ~(keystone_admin)]# neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | 4dadca55-88ae-4501-8530-6b5080832460 |
| name                  | router1                              |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 0290aec78d0045c981a740ee540e6be6     |
+-----------------------+--------------------------------------+


[root@openstack-control ~(keystone_admin)]# neutron router-gateway-set router1 public
Set gateway for router router1

I created a security group called 'wide_open' that should allow any traffic:

[root@openstack-control ~(keystone_admin)]# nova secgroup-list-rules wide_open
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

I then created an instance using Fedora 23's cloud image, Fedora-Cloud-Base-23-20151030.x86_64.qcow2, and gave the instance a name ... (more)