
Icehouse: 3-node neutron setup - instances only ACTIVE if neutron-dhcp-agent started on controller

asked 2014-10-12 19:39:23 -0500

I have set up the 3-node neutron architecture on CentOS according to the OpenStack install instructions: 1 controller, 1 network node, and 2 compute nodes.

The problem I am having is that the instance is stuck in the BUILD state, with an empty Task State and a Power State of NOSTATE. While attempting to diagnose the problem I started neutron-dhcp-agent on the CONTROLLER node, and my instance is now able to go ACTIVE and Running. The problem is, neutron-dhcp-agent is already running on my Network node.
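
For reference, this is roughly how I was watching the state (the instance name here is just a placeholder):

[devops@workstation-02 openstack]$ nova show test-instance | grep -iE 'status|task_state|power_state|fault'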

Does anyone have any idea where to look to figure out the problem? The instructions say to set up neutron-dhcp-agent on the Network node, and it is running there successfully. There are no errors, except this one on the compute node in /var/log/libvirtd/libvirtd.log:

6815: error : virNetSocketReadWire:1194 : End of file while reading data: Input/output error

Everything is working, but why is it trying to use the neutron-dhcp-agent service on my controller? I also no longer see any new errors in libvirtd.log related to the EOF.

I've checked my configuration several times and haven't been able to find any difference between it and the Icehouse install guide.
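
For completeness, the relevant part of my /etc/neutron/dhcp_agent.ini on the network node follows the guide's settings:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True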

Any ideas on where I can look to diagnose the problem, or what it could be? What are the ramifications if I leave neutron-dhcp-agent running on my controller node? (TBH I'd rather have both the controller and neutron on one machine, so I can use the machine that's now running neutron as a second controller for redundancy...)

(I installed using SaltStack automation, so I know I didn't fat-finger anything or install something on the wrong box.)

[devops@workstation-02 openstack]$ neutron agent-list
+--------------------------------------+--------------------+--------------------+-------+----------------+
| id                                   | agent_type         | host               | alive | admin_state_up |
+--------------------------------------+--------------------+--------------------+-------+----------------+
| 24d7f3ba-7e47-4945-b3d9-4b139afd86e9 | DHCP agent         | controller-01.mgmt | :-)   | True           |
| 466ec0e8-5211-4d67-80f7-4fb7bc175334 | Open vSwitch agent | compute-01.mgmt    | :-)   | True           |
| 48b888da-2354-402d-9b27-5dd12004bb87 | Open vSwitch agent | compute-02.mgmt    | :-)   | True           |
| 5c2bc6ca-8d8c-4223-ac81-486b50137170 | Open vSwitch agent | network-01.mgmt    | :-)   | True           |
| bc12d36f-8832-4668-bee9-9bc0ddd644db | DHCP agent         | network-01.mgmt    | :-)   | True           |
| c4b8255b-5d09-425c-921c-f94a9bb7d1b1 | Metadata agent     | network-01.mgmt    | :-)   | True           |
| cb59415a-4b75-4345-8e4c-fe55ad732e6c | L3 agent           | network-01.mgmt    | :-)   | True           |
| ddf687e3-e391-483a-ac41-cdf365a674f5 | Metadata agent     | controller-01.mgmt | :-)   | True           |
+--------------------------------------+--------------------+--------------------+-------+----------------+

1 answer


answered 2014-10-19 12:28:15 -0500

updated 2014-10-19 12:29:23 -0500

USER ERROR

Well, I did some troubleshooting. Apparently I forgot to set BOOTPROTO=none for the external interface on my network node. I shut down the DHCP agent on the controller, restarted the services on the network and compute nodes, and now I am able to successfully provision new instances.
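
For anyone else hitting this: the fix was in the ifcfg script for the external interface on the network node. A sketch of what it should look like (the device name eth2 is just an example; adjust it for your external NIC):

# /etc/sysconfig/network-scripts/ifcfg-eth2  (network node, external interface)
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none    # was "dhcp" by mistake, which is what broke things

Then stop and disable the stray agent on the controller and restart the neutron services (CentOS 6 style init shown; use systemctl on newer releases):

service neutron-dhcp-agent stop
chkconfig neutron-dhcp-agent off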

