
David Kilcy's profile - activity

2014-11-20 13:57:47 -0500 received badge  Taxonomist
2014-10-26 04:09:10 -0500 received badge  Famous Question (source)
2014-10-26 02:35:14 -0500 received badge  Famous Question (source)
2014-10-20 17:18:55 -0500 received badge  Self-Learner (source)
2014-10-20 08:04:28 -0500 received badge  Notable Question (source)
2014-10-20 04:39:21 -0500 received badge  Popular Question (source)
2014-10-20 00:18:57 -0500 commented answer can't ping the tenant router gateway or assign a floating IP address to a compute node

Thank you, I will look tomorrow, it is late here. What is 192.168.0.127 in your network? Thanks!

2014-10-19 23:31:03 -0500 asked a question can't ping the tenant router gateway or assign a floating IP address to a compute node

I configured the Icehouse demo tenant on 4 CentOS bare-metal servers: a controller, a neutron network node, and 2 compute nodes, per these instructions: http://docs.openstack.org/icehouse/in...

I can create a VM, but the problem is I cannot ping my tenant router gateway 192.168.1.200. My configuration is below.

I created a cirros instance and can access it through VNC, but no IP address is assigned to eth0.

Any help would be greatly appreciated.


```
[devops@workstation-02 openstack]$ nova list
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------+
| ID                                   | Name           | Status  | Task State | Power State | Networks                           |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------+
| 5dea6636-7600-405b-b9bd-c99496d4a56a | demo-instance1 | SHUTOFF | -          | Shutdown    | demo-net=172.16.1.2, 192.168.1.201 |
| 9d277d3b-f42b-4576-9e16-f9e95f8815bd | demo-instance2 | ACTIVE  | -          | Running     | demo-net=172.16.1.4, 192.168.1.202 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------+

10.0.0.x is mgmt network
192.168.1.x is external network with gateway 192.168.1.1
10.0.1.x is VM network

source admin-openrc.sh
neutron net-create ext-net --shared --router:external=True
neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=192.168.1.200,end=192.168.1.254 \
  --disable-dhcp --gateway 192.168.1.1 192.168.1.0/24
source demo-openrc.sh
neutron net-create demo-net
neutron subnet-create demo-net --name demo-subnet \
  --gateway 172.16.1.1 172.16.1.0/24
neutron net-list
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net


Gateway IP = 192.168.1.1
DHCP is turned off
10.x network is on its own switch
192.x network is on its own switch connected to gateway ip

###############################################################################
# Network:
# *.mgmt    - OpenStack Internal Network
# *.pub     - Public Network
# *.vm      - VM Traffic Network
###############################################################################

10.0.0.6        workstation-02.mgmt workstation-02 salt ntp yumrepo 
10.0.0.11       controller-01.mgmt  controller-01
10.0.0.21       network-01.mgmt  network-01
10.0.0.31       compute-01.mgmt  compute-01
10.0.0.32       compute-02.mgmt  compute-02
10.0.1.31       compute-01.vm    compute-01 
10.0.1.32       compute-02.vm    compute-02
192.168.1.6     workstation-02.pub
192.168.1.11    controller-01.pub
```

##### Network Node

```

[root@network-01 ~]$ ping 192.168.1.200
connect: Network is unreachable

[root@controller-01 openstack]$  ping 192.168.1.200
PING 192.168.1.200 (192.168.1.200) 56(84) bytes of data.
From 192.168.1.11 icmp_seq=2 Destination Host Unreachable
From 192.168.1.11 icmp_seq=3 Destination Host Unreachable
From 192.168.1.11 icmp_seq=4 Destination Host Unreachable


[root@network-01 ~]$ echo $OS_USERNAME
admin


[root@network-01 ~]$ ovs-vsctl show
3ada494e-2979-4cff-b748-2f9becbc4c72
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qr-90090478-78"
            tag: 1
            Interface "qr-90090478-78"
                type: internal
        Port "tap26b688cd-94"
            tag: 1
            Interface "tap26b688cd-94"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port "qg-40d23e5f-87"
            Interface "qg-40d23e5f-87"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port "gre-0a00011f"
            Interface "gre-0a00011f"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="10.0.1.31"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a000120"
            Interface "gre-0a000120"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip ...
```
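One way to check whether that gateway address is actually up: the tenant router's interfaces live inside a qrouter network namespace on the network node, so they have to be tested from inside that namespace rather than from the host directly. A minimal troubleshooting sketch (the router UUID is a placeholder, not taken from this deployment):

```
# On the network node (sketch; substitute the qrouter-<uuid> shown by `ip netns list`)
ip netns list
ip netns exec qrouter-<router-uuid> ip addr show          # the qg- interface should carry 192.168.1.200
ip netns exec qrouter-<router-uuid> ping -c 3 192.168.1.1 # can the router reach the external gateway?
```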
2014-10-19 12:28:15 -0500 answered a question Icehouse: 3-node neutron setup - instances only ACTIVE if neutron-dhcp-agent started on controller

USER ERROR

Well, I did some troubleshooting. Apparently I forgot to set BOOTPROTO=none on the external interface of my network node. I shut down the DHCP agent on the controller, restarted the services on the network and compute nodes, and now I am able to successfully provision new instances.
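For reference, a minimal sketch of the corrected interface file (eth2 is the external interface attached to br-ex in the ovs-vsctl output above; the remaining values are assumptions, not copied from the actual file):

```
# /etc/sysconfig/network-scripts/ifcfg-eth2 on the network node (sketch)
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none   # no IP/DHCP on the physical interface; traffic goes through br-ex
```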

2014-10-19 12:26:20 -0500 received badge  Notable Question (source)
2014-10-17 15:03:47 -0500 received badge  Popular Question (source)
2014-10-16 06:48:36 -0500 received badge  Enthusiast
2014-10-13 01:40:32 -0500 received badge  Teacher (source)
2014-10-12 23:15:31 -0500 received badge  Autobiographer
2014-10-12 21:11:57 -0500 received badge  Editor (source)
2014-10-12 21:11:01 -0500 answered a question Ntp configuration in nodes

Check to make sure you either have no 'restrict' entries, or a 'restrict' entry that allows access.
For example, my controller node is 10.0.0.11. The line below will allow anything on the 10.0.0.x segment to sync.

restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap

Replace the segment and mask with your own network configuration, then restart ntp.
I think you can also remove all 'restrict' entries from ntp.conf, which amounts to an 'allow all', but I'm not 100% sure.
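A minimal sketch of the relevant ntp.conf fragment on the controller, assuming the 10.0.0.0/24 management network used elsewhere on this page and the CentOS 6 service name:

```
# /etc/ntp.conf on the controller (sketch; adjust the segment and mask to your network)
restrict default kod nomodify notrap nopeer noquery    # stock default policy
restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap   # let 10.0.0.x clients sync

# then restart the daemon:
#   service ntpd restart
```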

2014-10-12 19:56:24 -0500 asked a question Icehouse: 3-node neutron setup - instances only ACTIVE if neutron-dhcp-agent started on controller

I have set up the 3-node neutron architecture on CentOS according to the OpenStack install instructions: 1 controller, 1 neutron network node, and 2 compute nodes.

The problem I am having is that the instance is stuck in BUILD, with an empty Task State and a Power State of NOSTATE. While attempting to diagnose the problem I started neutron-dhcp-agent on the CONTROLLER node, and my instance is now able to go ACTIVE and Running. The problem is, neutron-dhcp-agent is already running on my network node...

Does anyone have any idea where to look to figure out the problem? The instructions say to set up neutron-dhcp-agent on the network node, and it is running there successfully. There are no errors except on the compute node in /var/log/libvirtd/libvirtd.log: 6815: error : virNetSocketReadWire:1194 : End of file while reading data: Input/output error

Everything is working, but why is it trying to use the neutron-dhcp-agent service on my controller? I also no longer see any new errors in libvirtd.log related to the EOF.

I've checked my configuration several times, and haven't been able to find any difference between that and the Icehouse install guide.

Any ideas on where I can look to diagnose the problem, or what it could be? What are the ramifications if I leave neutron-dhcp-agent running on my controller node? (TBH I'd rather have both the controller and neutron services on one machine, so I can use the machine that's now running neutron to also run a controller for redundancy...)

(I installed using SaltStack automation, so I know I didn't fat-finger anything or install something on the wrong box.)

[devops@workstation-02 openstack]$ neutron agent-list
+--------------------------------------+--------------------+--------------------+-------+----------------+
| id                                   | agent_type         | host               | alive | admin_state_up |
+--------------------------------------+--------------------+--------------------+-------+----------------+
| 24d7f3ba-7e47-4945-b3d9-4b139afd86e9 | DHCP agent         | controller-01.mgmt | :-)   | True           |
| 466ec0e8-5211-4d67-80f7-4fb7bc175334 | Open vSwitch agent | compute-01.mgmt    | :-)   | True           |
| 48b888da-2354-402d-9b27-5dd12004bb87 | Open vSwitch agent | compute-02.mgmt    | :-)   | True           |
| 5c2bc6ca-8d8c-4223-ac81-486b50137170 | Open vSwitch agent | network-01.mgmt    | :-)   | True           |
| bc12d36f-8832-4668-bee9-9bc0ddd644db | DHCP agent         | network-01.mgmt    | :-)   | True           |
| c4b8255b-5d09-425c-921c-f94a9bb7d1b1 | Metadata agent     | network-01.mgmt    | :-)   | True           |
| cb59415a-4b75-4345-8e4c-fe55ad732e6c | L3 agent           | network-01.mgmt    | :-)   | True           |
| ddf687e3-e391-483a-ac41-cdf365a674f5 | Metadata agent     | controller-01.mgmt | :-)   | True           |
+--------------------------------------+--------------------+--------------------+-------+----------------+
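Per the answer above, the eventual fix was to stop the duplicate DHCP agent on the controller and restart the agents on the network and compute nodes. A rough sketch of what that looks like on CentOS 6, assuming the standard RDO service names:

```
# On the controller: stop the duplicate DHCP agent and disable it at boot
service neutron-dhcp-agent stop
chkconfig neutron-dhcp-agent off

# On the network node: restart the agents that should own DHCP, L3 and metadata
for svc in neutron-openvswitch-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent; do
    service $svc restart
done

# On each compute node
service neutron-openvswitch-agent restart
```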
2014-07-30 06:34:09 -0500 received badge  Famous Question (source)
2014-06-06 23:12:52 -0500 received badge  Supporter (source)
2014-06-06 23:12:22 -0500 received badge  Scholar (source)
2014-06-06 23:12:16 -0500 commented answer CentOS 6 openstack repo with rsync

Thank you very much. I got it working.

2014-06-06 16:59:59 -0500 received badge  Notable Question (source)
2014-06-06 00:48:53 -0500 received badge  Popular Question (source)
2014-06-05 16:27:56 -0500 asked a question CentOS 6 openstack repo with rsync

Hi everybody, I am setting up a private cloud on bare-metal servers in a closed network. The OS on all the nodes is CentOS 6.5. I have created a local YUM repository for the base OS on a utility node that is connected both to the Internet and to the closed network. I have successfully set up a cron job to rsync with a public mirror from my utility node, and my OpenStack nodes can update from the utility node, so far life is good.

However, I have not been able to find a repository that I can rsync from for the OpenStack packages. The only repository I've found is repos.fedorapeople.org, but I cannot rsync from it. I don't want to have to download the packages manually; I would like to automate as much as possible. Is what I am trying to do possible from fedorapeople.org, or is there an alternate configuration/site someone could recommend? Thanks in advance!
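For context, the mirroring already in place for the base OS looks roughly like this, and the goal is to do the same for the OpenStack packages (a sketch; the mirror host and local paths are placeholders, not the actual ones):

```
#!/bin/bash
# /etc/cron.daily/sync-local-repo (sketch; mirror host and paths are placeholders)
# The mirror's repodata/ directory comes along with the packages, so yum
# clients on the closed network can point straight at the destination.
rsync -avz --delete \
    rsync://mirror.example.org/centos/6.5/os/x86_64/ \
    /var/www/html/repos/centos/6.5/os/x86_64/
```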