ERROR (BadRequest): Invalid key_name provided. (HTTP 400) / failed to allocate the network(s), not rescheduling

Hello, I want to run a 3-node environment with CentOS 7 and OpenStack Juno:

1 controller node, 1 network node, 1 compute node

I followed these instructions: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all modules, and now I want to run a VM.

When I execute this command (with the correct DEMO_NET_ID substituted):

nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=DEMO_NET_ID \
  --security-group default --key-name demo-key demo-instance1

I get this error: ERROR (BadRequest): Invalid key_name provided. (HTTP 400)
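
As far as I understand, this error means Nova cannot find a keypair named demo-key for the demo user/tenant that runs nova boot. This is a minimal check I can run with the demo credentials, following the keypair step from the same guide (demo-openrc.sh is the credentials file created earlier in the guide):

source demo-openrc.sh
nova keypair-list                                      # demo-key should show up here
# if it is missing, recreate it as in the guide:
ssh-keygen
nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key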

When I start a VM via the dashboard, I get this error after a few minutes:

failed to allocate the network(s), not rescheduling
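
In case it helps, these are the Neutron-side checks I can run on the controller (a sketch, again assuming the demo-openrc.sh credentials from the guide; DEMO_NET_ID is the network id I pass to nova boot):

source demo-openrc.sh
neutron net-list       # demo-net should be listed; its id is what I use as DEMO_NET_ID
neutron subnet-list    # demo-subnet should be attached to that network
neutron port-list      # shows whether a port was created for the failed boot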

When I execute nova service-list, I get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because I didn't understand why they are on host localhost.localdomain. When I enable these services, I get this error in the dashboard: No valid host found
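
For reference, this is roughly how I disabled the stale localhost.localdomain entries, and how they could be removed completely (a sketch; nova service-delete takes the Id column from nova service-list, assuming the installed python-novaclient supports it):

nova service-disable --reason "trial log" localhost.localdomain nova-scheduler
nova service-disable --reason "trial log" localhost.localdomain nova-cert
# same for nova-conductor and nova-consoleauth; optional cleanup afterwards:
nova service-delete 1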

My nova-api.log:

    2015-02-11 14:03:03.182 2803 INFO nova.osapi_compute.wsgi.server [req-3c0f8b1e-4ab3-46fc-bfbb-71d88b9c5e18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images HTTP/1.1" st$
    2015-02-11 14:03:03.418 2803 INFO nova.osapi_compute.wsgi.server [req-62d6f5d7-2105-469b-b25a-1cccea716cf0 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images HTTP/1.1" st$
    2015-02-11 14:03:03.465 2803 INFO nova.osapi_compute.wsgi.server [req-e40ede6b-316f-4849-a657-595270b689e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images/86d8dfbb-970$
    2015-02-11 14:03:03.477 2803 INFO nova.api.openstack.wsgi [req-d964dc87-df97-42ba-879c-a30cc1b6015f None] HTTP exception thrown: The resource could not be found.
    2015-02-11 14:03:03.477 2803 INFO nova.osapi_compute.wsgi.server [req-d964dc87-df97-42ba-879c-a30cc1b6015f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/m1.tiny HTT$
    2015-02-11 14:03:03.490 2803 INFO nova.osapi_compute.wsgi.server [req-a1f035c7-40a8-4ba3-8621-84c48d2e9d4b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors?is_public=N$
    2015-02-11 14:03:03.505 2803 INFO nova.osapi_compute.wsgi.server [req-9cc0186e-30f9-4c2c-8b43-eb6686327cd5 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors?is_public=N$
    2015-02-11 14:03:03.517 2803 INFO nova.osapi_compute.wsgi.server [req-56695602-ee50-4490-a7fc-2d8354bd0b18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
    2015-02-11 14:03:03.744 2803 INFO nova.api.openstack.wsgi [req-3e5ea32b-d2cc-4a6a-8da7-e9b05ed6d3fe None] HTTP exception thrown: Invalid key_name provided.
    2015-02-11 14:03:03.745 2803 INFO nova.osapi_compute.wsgi.server [req-3e5ea32b-d2cc-4a6a-8da7-e9b05ed6d3fe None] 192.168.1.110 "POST /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers HTTP/1.1" $


2015-02-12 13:21:10.623 2806 WARNING keystonemiddleware.auth_token [-] Unable to find authentication token in headers
2015-02-12 13:21:10.623 2806 INFO nova.osapi_compute.wsgi.server [-] 192.168.1.110 "POST /v2/43b4014c06c3c861e429/os-server-external-events HTTP/1.1" status: 401 len: 261 time: 0.0010219
2015-02-12 13:21:21.442 2802 INFO nova.osapi_compute.wsgi.server [req-987db7fa-2d50-4a2b-b6c8-219999b33f56 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:21:21.455 2802 INFO nova.osapi_compute.wsgi.server [req-81737775-2583-4bb1-89a2-fee887b4465b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:21:21.521 2803 INFO nova.osapi_compute.wsgi.server [req-7c4a919e-9bea-4270-911b-1a160400a432 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:21:36.640 2803 INFO nova.osapi_compute.wsgi.server [req-7b022c4b-f6e5-40e6-9162-f5ac4df9d82b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:21:36.655 2807 INFO nova.osapi_compute.wsgi.server [req-b0c47821-0d84-411b-81e2-25e786f9724c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:21:36.833 2807 INFO nova.osapi_compute.wsgi.server [req-73d68fd4-8308-419e-ac88-b07f6e1e5b06 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:21:54.466 2804 INFO nova.osapi_compute.wsgi.server [req-ea7db214-9344-4ad6-8e3d-2b96e5f162fe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:21:54.480 2809 INFO nova.osapi_compute.wsgi.server [req-5c55a98d-ead2-47c8-b4c4-eb6abae8483c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:21:54.791 2809 INFO nova.osapi_compute.wsgi.server [req-9b76ac8b-417b-465e-b895-796ab85b2721 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:22:14.908 2804 INFO nova.osapi_compute.wsgi.server [req-4e9fd33d-4bb9-4f9d-8a8a-6434314874d6 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:22:14.923 2805 INFO nova.osapi_compute.wsgi.server [req-3335e32f-4292-401b-821e-ff39449b622f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:22:14.958 2804 INFO nova.osapi_compute.wsgi.server [req-d4cf0bc9-6bc1-4fd4-b735-6671ead483dc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:22:37.561 2809 INFO nova.osapi_compute.wsgi.server [req-ccea02ef-5417-4f1b-b47a-a5da90ddb900 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:22:37.576 2809 INFO nova.osapi_compute.wsgi.server [req-65d7388c-3d3a-435c-afb2-1718fed0fd49 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:22:37.611 2808 INFO nova.osapi_compute.wsgi.server [req-66227577-09d9-4c62-a72a-04d3b932d988 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:23:02.736 2803 INFO nova.osapi_compute.wsgi.server [req-25341a35-e914-44d0-afc8-4fac2bb88285 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:23:02.749 2803 INFO nova.osapi_compute.wsgi.server [req-e8b12a74-f392-4e15-b85b-ac7b3e75a382 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:23:02.785 2804 INFO nova.osapi_compute.wsgi.server [req-40320d41-fd01-4d14-acb8-306edf40d790 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:23:30.407 2809 INFO nova.osapi_compute.wsgi.server [req-1c1125e4-79fc-434b-8666-a2938ab653ee None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:23:30.421 2809 INFO nova.osapi_compute.wsgi.server [req-4788af85-2462-4747-926b-7fe7a11217af None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:23:30.619 2808 INFO nova.osapi_compute.wsgi.server [req-718f08f3-663f-4328-a47b-7fc57571662c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from the controller node:

    2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log:

2015-02-11 12:45:45.576 1236 ERROR nova.compute.manager [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted: Failed to alloc$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Traceback (most recent call last):
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     filter_properties)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     reason=msg)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] BuildAbortException: Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]
2015-02-11 12:45:45.597 1236 INFO nova.network.neutronv2.api [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Unable to reset device ID for port None
2015-02-11 13:02:02.315 1236 AUDIT nova.compute.manager [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Terminating instance
2015-02-11 13:02:02.320 1236 WARNING nova.virt.libvirt.driver [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] During wait destroy, instance disappeared.
2015-02-11 13:02:02.414 1236 INFO nova.virt.libvirt.driver [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Deletion of /var/lib/nova/instan$

My nova.conf from the controller node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110

vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller
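
For reference, this is the [neutron] section that the guide expects in the controller's nova.conf (a sketch copied from the guide, not my real values; NEUTRON_PASS and METADATA_SECRET are placeholders):

[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET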

Output of neutron agent-list:

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

2015-02-12 13:22:14.958 2804 INFO nova.osapi_compute.wsgi.server [req-d4cf0bc9-6bc1-4fd4-b735-6671ead483dc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:22:37.561 2809 INFO nova.osapi_compute.wsgi.server [req-ccea02ef-5417-4f1b-b47a-a5da90ddb900 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:22:37.576 2809 INFO nova.osapi_compute.wsgi.server [req-65d7388c-3d3a-435c-afb2-1718fed0fd49 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:22:37.611 2808 INFO nova.osapi_compute.wsgi.server [req-66227577-09d9-4c62-a72a-04d3b932d988 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:23:02.736 2803 INFO nova.osapi_compute.wsgi.server [req-25341a35-e914-44d0-afc8-4fac2bb88285 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:23:02.749 2803 INFO nova.osapi_compute.wsgi.server [req-e8b12a74-f392-4e15-b85b-ac7b3e75a382 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:23:02.785 2804 INFO nova.osapi_compute.wsgi.server [req-40320d41-fd01-4d14-acb8-306edf40d790 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:23:30.407 2809 INFO nova.osapi_compute.wsgi.server [req-1c1125e4-79fc-434b-8666-a2938ab653ee None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:23:30.421 2809 INFO nova.osapi_compute.wsgi.server [req-4788af85-2462-4747-926b-7fe7a11217af None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:23:30.619 2808 INFO nova.osapi_compute.wsgi.server [req-718f08f3-663f-4328-a47b-7fc57571662c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
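
The 401 at 13:21:10 on POST .../os-server-external-events looks to me like neutron failing to authenticate when it notifies nova about the port. I will re-check the notification settings that the Juno guide puts into /etc/neutron/neutron.conf on the controller; roughly the block below, where SERVICE_TENANT_ID comes from "keystone tenant-get service" and NOVA_PASS is my nova service password.

    # /etc/neutron/neutron.conf on the controller (values as in the Juno guide)
    [DEFAULT]
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://controller:8774/v2
    nova_admin_auth_url = http://controller:35357/v2.0
    nova_admin_tenant_id = SERVICE_TENANT_ID
    nova_admin_username = nova
    nova_admin_password = NOVA_PASS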

My neutron server.log from the controller node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log from the compute node:

2015-02-11 12:45:45.576 1236 ERROR nova.compute.manager [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted: Failed to alloc$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Traceback (most recent call last):
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     filter_properties)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     reason=msg)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] BuildAbortException: Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]
2015-02-11 12:45:45.597 1236 INFO nova.network.neutronv2.api [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Unable to reset device ID for port None
2015-02-11 13:02:02.315 1236 AUDIT nova.compute.manager [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Terminating instance
2015-02-11 13:02:02.320 1236 WARNING nova.virt.libvirt.driver [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] During wait destroy, instance disappeared.
2015-02-11 13:02:02.414 1236 INFO nova.virt.libvirt.driver [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Deletion of /var/lib/nova/instan$
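
From what I have read, nova-compute aborts the build like this when the network-vif-plugged event from neutron never arrives within the timeout. As a test only (not a fix), I could relax that check on compute1 to confirm the event path is the problem, then restart openstack-nova-compute:

    # /etc/nova/nova.conf on compute1 -- temporary diagnostic only
    [DEFAULT]
    vif_plugging_is_fatal = False
    vif_plugging_timeout = 0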

My nova.conf from the controller node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110

vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller
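
One thing I notice while posting this: the file above has no [neutron] section. The Juno guide's example for the controller looks roughly like the block below (NEUTRON_PASS and METADATA_SECRET stand for my own values); I will compare my actual file against it.

    [neutron]
    url = http://controller:9696
    auth_strategy = keystone
    admin_auth_url = http://controller:35357/v2.0
    admin_tenant_name = service
    admin_username = neutron
    admin_password = NEUTRON_PASS
    service_metadata_proxy = True
    metadata_proxy_shared_secret = METADATA_SECRET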

neutron agent-list:

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
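
All agents report :-) here. If it helps, I can also paste the output of ovs-vsctl show from the network and compute nodes to confirm that br-int and br-tun (and br-ex on the network node) exist with the tunnel ports the guide sets up:

    # On the network node and on compute1
    ovs-vsctl show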

failed to allocate the network(s), not rescheduling

Hello, i want to run a 3 Node Enviroment with CentOS 7 and Openstack Juno.

1 Controller Node 1 Network Node 1 Compute Node

I followed this instruction: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all Modules and now i want run a VM.

When i want to start a VM via the Dashboard i get the failure after a few minutes:

failed to allocate the network(s), not rescheduling

When i execute this command: nova service-list

then i get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because i didn't unterstand why they are on host localhost.localdomain When i enable this services then i geht the failure in the dashboard: No valid host found

My nova-api.log:

    2015-02-11 14:03:03.182 2803 INFO nova.osapi_compute.wsgi.server [req-3c0f8b1e-4ab3-46fc-bfbb-71d88b9c5e18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images HTTP/1.1" st$
    2015-02-11 14:03:03.418 2803 INFO nova.osapi_compute.wsgi.server [req-62d6f5d7-2105-469b-b25a-1cccea716cf0 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images HTTP/1.1" st$
    2015-02-11 14:03:03.465 2803 INFO nova.osapi_compute.wsgi.server [req-e40ede6b-316f-4849-a657-595270b689e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images/86d8dfbb-970$
    2015-02-11 14:03:03.477 2803 INFO nova.api.openstack.wsgi [req-d964dc87-df97-42ba-879c-a30cc1b6015f None] HTTP exception thrown: The resource could not be found.
    2015-02-11 14:03:03.477 2803 INFO nova.osapi_compute.wsgi.server [req-d964dc87-df97-42ba-879c-a30cc1b6015f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/m1.tiny HTT$
    2015-02-11 14:03:03.490 2803 INFO nova.osapi_compute.wsgi.server [req-a1f035c7-40a8-4ba3-8621-84c48d2e9d4b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors?is_public=N$
    2015-02-11 14:03:03.505 2803 INFO nova.osapi_compute.wsgi.server [req-9cc0186e-30f9-4c2c-8b43-eb6686327cd5 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors?is_public=N$
    2015-02-11 14:03:03.517 2803 INFO nova.osapi_compute.wsgi.server [req-56695602-ee50-4490-a7fc-2d8354bd0b18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
    2015-02-11 14:03:03.744 2803 INFO nova.api.openstack.wsgi [req-3e5ea32b-d2cc-4a6a-8da7-e9b05ed6d3fe None] HTTP exception thrown: Invalid key_name provided.
    2015-02-11 14:03:03.745 2803 INFO nova.osapi_compute.wsgi.server [req-3e5ea32b-d2cc-4a6a-8da7-e9b05ed6d3fe None] 192.168.1.110 "POST /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers HTTP/1.1" $


2015-02-12 13:21:10.623 2806 WARNING keystonemiddleware.auth_token [-] Unable to find authentication token in headers
2015-02-12 13:21:10.623 2806 INFO nova.osapi_compute.wsgi.server [-] 192.168.1.110 "POST /v2/43b4014c06c3c861e429/os-server-external-events HTTP/1.1" status: 401 len: 261 time: 0.0010219
2015-02-12 13:21:21.442 2802 INFO nova.osapi_compute.wsgi.server [req-987db7fa-2d50-4a2b-b6c8-219999b33f56 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:21:21.455 2802 INFO nova.osapi_compute.wsgi.server [req-81737775-2583-4bb1-89a2-fee887b4465b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:21:21.521 2803 INFO nova.osapi_compute.wsgi.server [req-7c4a919e-9bea-4270-911b-1a160400a432 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:21:36.640 2803 INFO nova.osapi_compute.wsgi.server [req-7b022c4b-f6e5-40e6-9162-f5ac4df9d82b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:21:36.655 2807 INFO nova.osapi_compute.wsgi.server [req-b0c47821-0d84-411b-81e2-25e786f9724c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:21:36.833 2807 INFO nova.osapi_compute.wsgi.server [req-73d68fd4-8308-419e-ac88-b07f6e1e5b06 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:21:54.466 2804 INFO nova.osapi_compute.wsgi.server [req-ea7db214-9344-4ad6-8e3d-2b96e5f162fe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:21:54.480 2809 INFO nova.osapi_compute.wsgi.server [req-5c55a98d-ead2-47c8-b4c4-eb6abae8483c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:21:54.791 2809 INFO nova.osapi_compute.wsgi.server [req-9b76ac8b-417b-465e-b895-796ab85b2721 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:22:14.908 2804 INFO nova.osapi_compute.wsgi.server [req-4e9fd33d-4bb9-4f9d-8a8a-6434314874d6 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:22:14.923 2805 INFO nova.osapi_compute.wsgi.server [req-3335e32f-4292-401b-821e-ff39449b622f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:22:14.958 2804 INFO nova.osapi_compute.wsgi.server [req-d4cf0bc9-6bc1-4fd4-b735-6671ead483dc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:22:37.561 2809 INFO nova.osapi_compute.wsgi.server [req-ccea02ef-5417-4f1b-b47a-a5da90ddb900 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:22:37.576 2809 INFO nova.osapi_compute.wsgi.server [req-65d7388c-3d3a-435c-afb2-1718fed0fd49 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:22:37.611 2808 INFO nova.osapi_compute.wsgi.server [req-66227577-09d9-4c62-a72a-04d3b932d988 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:23:02.736 2803 INFO nova.osapi_compute.wsgi.server [req-25341a35-e914-44d0-afc8-4fac2bb88285 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:23:02.749 2803 INFO nova.osapi_compute.wsgi.server [req-e8b12a74-f392-4e15-b85b-ac7b3e75a382 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:23:02.785 2804 INFO nova.osapi_compute.wsgi.server [req-40320d41-fd01-4d14-acb8-306edf40d790 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 13:23:30.407 2809 INFO nova.osapi_compute.wsgi.server [req-1c1125e4-79fc-434b-8666-a2938ab653ee None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/6ba5993f-b2$
2015-02-12 13:23:30.421 2809 INFO nova.osapi_compute.wsgi.server [req-4788af85-2462-4747-926b-7fe7a11217af None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 13:23:30.619 2808 INFO nova.osapi_compute.wsgi.server [req-718f08f3-663f-4328-a47b-7fc57571662c None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from Controller Node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log

2015-02-11 12:45:45.576 1236 ERROR nova.compute.manager [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted: Failed to alloc$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Traceback (most recent call last):
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     filter_properties)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     reason=msg)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] BuildAbortException: Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]
2015-02-11 12:45:45.597 1236 INFO nova.network.neutronv2.api [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Unable to reset device ID for port None
2015-02-11 13:02:02.315 1236 AUDIT nova.compute.manager [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Terminating instance
2015-02-11 13:02:02.320 1236 WARNING nova.virt.libvirt.driver [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] During wait destroy, instance disappeared.
2015-02-11 13:02:02.414 1236 INFO nova.virt.libvirt.driver [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Deletion of /var/lib/nova/instan$

My nova.conf from controller Node

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110

vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller

neutron-agent list

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

failed to allocate the network(s), not rescheduling

Hello, i want to run a 3 Node Enviroment with CentOS 7 and Openstack Juno.

1 Controller Node 1 Network Node 1 Compute Node

I followed this instruction: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all Modules and now i want run a VM.

When i want to start a VM via the Dashboard i get the failure after a few minutes:

failed to allocate the network(s), not rescheduling

When i execute this command: nova service-list

then i get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because i didn't unterstand why they are on host localhost.localdomain When i enable this services then i geht the failure in the dashboard: No valid host found

My nova-api.log:nova-api.log on the controller Node:

 2015-02-11 14:03:03.182 2803 INFO nova.osapi_compute.wsgi.server [req-3c0f8b1e-4ab3-46fc-bfbb-71d88b9c5e18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images HTTP/1.1" st$
    2015-02-11 14:03:03.418 2803 INFO nova.osapi_compute.wsgi.server [req-62d6f5d7-2105-469b-b25a-1cccea716cf0 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images HTTP/1.1" st$
    2015-02-11 14:03:03.465 2803 INFO nova.osapi_compute.wsgi.server [req-e40ede6b-316f-4849-a657-595270b689e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/images/86d8dfbb-970$
    2015-02-11 14:03:03.477 2803 INFO nova.api.openstack.wsgi [req-d964dc87-df97-42ba-879c-a30cc1b6015f None] HTTP exception thrown: The resource could not be found.
    2015-02-11 14:03:03.477 2803 INFO nova.osapi_compute.wsgi.server [req-d964dc87-df97-42ba-879c-a30cc1b6015f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/m1.tiny HTT$
    2015-02-11 14:03:03.490 2803 INFO nova.osapi_compute.wsgi.server [req-a1f035c7-40a8-4ba3-8621-84c48d2e9d4b None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors?is_public=N$
    2015-02-11 14:03:03.505 2803 INFO nova.osapi_compute.wsgi.server [req-9cc0186e-30f9-4c2c-8b43-eb6686327cd5 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors?is_public=N$
    2015-02-11 14:03:03.517 2803 INFO nova.osapi_compute.wsgi.server [req-56695602-ee50-4490-a7fc-2d8354bd0b18    2015-02-12 14:21:28.630 3463 INFO nova.osapi_compute.wsgi.server [req-48438ac7-f6fc-4a2b-9ad4-8dd2620bf0e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:21:51.247 3456 INFO nova.osapi_compute.wsgi.server [req-37b811b3-704f-4881-a7b4-6c86c69b7d33 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:21:51.261 3456 INFO nova.osapi_compute.wsgi.server [req-f8a7de9c-ce54-4dca-9d6f-3e1bed05ff40 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
    2015-02-11 14:03:03.744 2803 INFO nova.api.openstack.wsgi [req-3e5ea32b-d2cc-4a6a-8da7-e9b05ed6d3fe None] HTTP exception thrown: Invalid key_name provided.
    2015-02-11 14:03:03.745 2803 INFO nova.osapi_compute.wsgi.server [req-3e5ea32b-d2cc-4a6a-8da7-e9b05ed6d3fe None] 192.168.1.110 "POST /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers HTTP/1.1" $
2015-02-12 14:21:51.296 3456 INFO nova.osapi_compute.wsgi.server [req-644c3f4f-b6fd-40b6-841f-afba55f7cf18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:16.409 3463 INFO nova.osapi_compute.wsgi.server [req-2dde0bcc-7428-4043-af6e-27e8ad757f45 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:16.422 3463 INFO nova.osapi_compute.wsgi.server [req-f2548c3c-d3fb-409f-aad6-ee756ff9f809 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:16.457 3463 INFO nova.osapi_compute.wsgi.server [req-5cc351b8-1bf4-476a-9d96-725568e2d849 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:44.070 3456 INFO nova.osapi_compute.wsgi.server [req-1b4ae5af-075d-406d-921a-acdfaa650eff None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:44.084 3456 INFO nova.osapi_compute.wsgi.server [req-1c26700c-8fba-4191-8188-35862c6faff2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:44.120 3458 INFO nova.osapi_compute.wsgi.server [req-27ad2b84-aee5-4e59-be9a-8731adad28c8 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:14.235 3462 INFO nova.osapi_compute.wsgi.server [req-7dea5fb4-6c4e-468e-a196-8bc14e6ed7a2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:14.249 3462 INFO nova.osapi_compute.wsgi.server [req-19abeef2-cf4d-4d8e-be4c-771cee8e01a4 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:14.286 3462 INFO nova.osapi_compute.wsgi.server [req-b681877d-8521-4794-9b14-fce2d3429f67 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:44.438 3463 INFO nova.osapi_compute.wsgi.server [req-cca8edf7-f8e4-4587-bc62-9bf83e4c2ade None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:44.453 3457 INFO nova.osapi_compute.wsgi.server [req-3ca2edda-2421-499d-a2f3-1d8498cab8fc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:44.486 3457 INFO nova.osapi_compute.wsgi.server [req-136041f3-0bdd-45ad-9637-81ddbe1a4587 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:14.644 3461 INFO nova.osapi_compute.wsgi.server [req-127981e1-24e0-4374-8ae2-da42fc573c8f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:14.658 3461 INFO nova.osapi_compute.wsgi.server [req-fca7e728-3114-4941-bed4-d92925534ca9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:14.791 3463 INFO nova.osapi_compute.wsgi.server [req-3acc7914-5439-4f67-86f2-a4aaecf5bcbe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:44.900 3458 INFO nova.osapi_compute.wsgi.server [req-e3f7db0c-b684-404b-9dd0-44fcbfbedcdf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:44.915 3458 INFO nova.osapi_compute.wsgi.server [req-570b6c1c-6af6-46f1-9915-7b247f5493cf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:45.047 3463 INFO nova.osapi_compute.wsgi.server [req-66221c3a-2d62-4838-bf1c-81b65d38fafb None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from Controller Node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log

2015-02-11 12:45:45.576 1236 ERROR nova.compute.manager [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted: Failed to alloc$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Traceback (most recent call last):
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     filter_properties)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     reason=msg)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] BuildAbortException: Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]
2015-02-11 12:45:45.597 1236 INFO nova.network.neutronv2.api [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Unable to reset device ID for port None
2015-02-11 13:02:02.315 1236 AUDIT nova.compute.manager [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Terminating instance
2015-02-11 13:02:02.320 1236 WARNING nova.virt.libvirt.driver [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] During wait destroy, instance disappeared.
2015-02-11 13:02:02.414 1236 INFO nova.virt.libvirt.driver [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Deletion of /var/lib/nova/instan$

My nova.conf from controller Node

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110

vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller

neutron-agent list

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

Run VM: failed to allocate the network(s), not rescheduling

Hello, i want to run a 3 Node Enviroment with CentOS 7 and Openstack Juno.

1 Controller Node 1 Network Node 1 Compute Node

I followed this instruction: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all Modules and now i want run a VM.

When i want to start a VM via the Dashboard i get the failure after a few minutes:

failed to allocate the network(s), not rescheduling

When i execute this command: nova service-list

then i get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because i didn't unterstand why they are on host localhost.localdomain When i enable this services then i geht the failure in the dashboard: No valid host found

My nova-api.log on the controller Node:

       2015-02-12 14:21:28.630 3463 INFO nova.osapi_compute.wsgi.server [req-48438ac7-f6fc-4a2b-9ad4-8dd2620bf0e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:21:51.247 3456 INFO nova.osapi_compute.wsgi.server [req-37b811b3-704f-4881-a7b4-6c86c69b7d33 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:21:51.261 3456 INFO nova.osapi_compute.wsgi.server [req-f8a7de9c-ce54-4dca-9d6f-3e1bed05ff40 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:21:51.296 3456 INFO nova.osapi_compute.wsgi.server [req-644c3f4f-b6fd-40b6-841f-afba55f7cf18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:16.409 3463 INFO nova.osapi_compute.wsgi.server [req-2dde0bcc-7428-4043-af6e-27e8ad757f45 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:16.422 3463 INFO nova.osapi_compute.wsgi.server [req-f2548c3c-d3fb-409f-aad6-ee756ff9f809 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:16.457 3463 INFO nova.osapi_compute.wsgi.server [req-5cc351b8-1bf4-476a-9d96-725568e2d849 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:44.070 3456 INFO nova.osapi_compute.wsgi.server [req-1b4ae5af-075d-406d-921a-acdfaa650eff None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:44.084 3456 INFO nova.osapi_compute.wsgi.server [req-1c26700c-8fba-4191-8188-35862c6faff2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:44.120 3458 INFO nova.osapi_compute.wsgi.server [req-27ad2b84-aee5-4e59-be9a-8731adad28c8 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:14.235 3462 INFO nova.osapi_compute.wsgi.server [req-7dea5fb4-6c4e-468e-a196-8bc14e6ed7a2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:14.249 3462 INFO nova.osapi_compute.wsgi.server [req-19abeef2-cf4d-4d8e-be4c-771cee8e01a4 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:14.286 3462 INFO nova.osapi_compute.wsgi.server [req-b681877d-8521-4794-9b14-fce2d3429f67 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:44.438 3463 INFO nova.osapi_compute.wsgi.server [req-cca8edf7-f8e4-4587-bc62-9bf83e4c2ade None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:44.453 3457 INFO nova.osapi_compute.wsgi.server [req-3ca2edda-2421-499d-a2f3-1d8498cab8fc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:44.486 3457 INFO nova.osapi_compute.wsgi.server [req-136041f3-0bdd-45ad-9637-81ddbe1a4587 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:14.644 3461 INFO nova.osapi_compute.wsgi.server [req-127981e1-24e0-4374-8ae2-da42fc573c8f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:14.658 3461 INFO nova.osapi_compute.wsgi.server [req-fca7e728-3114-4941-bed4-d92925534ca9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:14.791 3463 INFO nova.osapi_compute.wsgi.server [req-3acc7914-5439-4f67-86f2-a4aaecf5bcbe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:44.900 3458 INFO nova.osapi_compute.wsgi.server [req-e3f7db0c-b684-404b-9dd0-44fcbfbedcdf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:44.915 3458 INFO nova.osapi_compute.wsgi.server [req-570b6c1c-6af6-46f1-9915-7b247f5493cf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:45.047 3463 INFO nova.osapi_compute.wsgi.server [req-66221c3a-2d62-4838-bf1c-81b65d38fafb None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from Controller Node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log

2015-02-11 12:45:45.576 1236 ERROR nova.compute.manager [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted: Failed to alloc$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Traceback (most recent call last):
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     filter_properties)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     reason=msg)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] BuildAbortException: Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]
2015-02-11 12:45:45.597 1236 INFO nova.network.neutronv2.api [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Unable to reset device ID for port None
2015-02-11 13:02:02.315 1236 AUDIT nova.compute.manager [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Terminating instance
2015-02-11 13:02:02.320 1236 WARNING nova.virt.libvirt.driver [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] During wait destroy, instance disappeared.
2015-02-11 13:02:02.414 1236 INFO nova.virt.libvirt.driver [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Deletion of /var/lib/nova/instan$

My nova.conf from controller Node

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110

vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller

neutron-agent list

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

Run Start VM: failed to allocate the network(s), not rescheduling

Hello, i want to run a 3 Node Enviroment with CentOS 7 and Openstack Juno.

1 Controller Node 1 Network Node 1 Compute Node

I followed this instruction: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all Modules and now i want run a VM.

When i want to start a VM via the Dashboard i get the failure after a few minutes:

failed to allocate the network(s), not rescheduling

When i execute this command: nova service-list

then i get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because i didn't unterstand why they are on host localhost.localdomain When i enable this services then i geht the failure in the dashboard: No valid host found

My nova-api.log on the controller Node:

       2015-02-12 14:21:28.630 3463 INFO nova.osapi_compute.wsgi.server [req-48438ac7-f6fc-4a2b-9ad4-8dd2620bf0e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:21:51.247 3456 INFO nova.osapi_compute.wsgi.server [req-37b811b3-704f-4881-a7b4-6c86c69b7d33 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:21:51.261 3456 INFO nova.osapi_compute.wsgi.server [req-f8a7de9c-ce54-4dca-9d6f-3e1bed05ff40 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:21:51.296 3456 INFO nova.osapi_compute.wsgi.server [req-644c3f4f-b6fd-40b6-841f-afba55f7cf18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:16.409 3463 INFO nova.osapi_compute.wsgi.server [req-2dde0bcc-7428-4043-af6e-27e8ad757f45 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:16.422 3463 INFO nova.osapi_compute.wsgi.server [req-f2548c3c-d3fb-409f-aad6-ee756ff9f809 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:16.457 3463 INFO nova.osapi_compute.wsgi.server [req-5cc351b8-1bf4-476a-9d96-725568e2d849 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:44.070 3456 INFO nova.osapi_compute.wsgi.server [req-1b4ae5af-075d-406d-921a-acdfaa650eff None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:44.084 3456 INFO nova.osapi_compute.wsgi.server [req-1c26700c-8fba-4191-8188-35862c6faff2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:44.120 3458 INFO nova.osapi_compute.wsgi.server [req-27ad2b84-aee5-4e59-be9a-8731adad28c8 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:14.235 3462 INFO nova.osapi_compute.wsgi.server [req-7dea5fb4-6c4e-468e-a196-8bc14e6ed7a2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:14.249 3462 INFO nova.osapi_compute.wsgi.server [req-19abeef2-cf4d-4d8e-be4c-771cee8e01a4 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:14.286 3462 INFO nova.osapi_compute.wsgi.server [req-b681877d-8521-4794-9b14-fce2d3429f67 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:44.438 3463 INFO nova.osapi_compute.wsgi.server [req-cca8edf7-f8e4-4587-bc62-9bf83e4c2ade None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:44.453 3457 INFO nova.osapi_compute.wsgi.server [req-3ca2edda-2421-499d-a2f3-1d8498cab8fc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:44.486 3457 INFO nova.osapi_compute.wsgi.server [req-136041f3-0bdd-45ad-9637-81ddbe1a4587 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:14.644 3461 INFO nova.osapi_compute.wsgi.server [req-127981e1-24e0-4374-8ae2-da42fc573c8f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:14.658 3461 INFO nova.osapi_compute.wsgi.server [req-fca7e728-3114-4941-bed4-d92925534ca9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:14.791 3463 INFO nova.osapi_compute.wsgi.server [req-3acc7914-5439-4f67-86f2-a4aaecf5bcbe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:44.900 3458 INFO nova.osapi_compute.wsgi.server [req-e3f7db0c-b684-404b-9dd0-44fcbfbedcdf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:44.915 3458 INFO nova.osapi_compute.wsgi.server [req-570b6c1c-6af6-46f1-9915-7b247f5493cf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:45.047 3463 INFO nova.osapi_compute.wsgi.server [req-66221c3a-2d62-4838-bf1c-81b65d38fafb None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from Controller Node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log

2015-02-11 12:45:45.576 1236 ERROR nova.compute.manager [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted: Failed to alloc$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Traceback (most recent call last):
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     filter_properties)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]     reason=msg)
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] BuildAbortException: Build of instance 46dc56aa-6e73-4df3-a3c4-123652dae66b aborted$
2015-02-11 12:45:45.576 1236 TRACE nova.compute.manager [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b]
2015-02-11 12:45:45.597 1236 INFO nova.network.neutronv2.api [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Unable to reset device ID for port None
2015-02-11 13:02:02.315 1236 AUDIT nova.compute.manager [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Terminating instance
2015-02-11 13:02:02.320 1236 WARNING nova.virt.libvirt.driver [-] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] During wait destroy, instance disappeared.
2015-02-11 13:02:02.414 1236 INFO nova.virt.libvirt.driver [req-4058c7ee-bf68-40ee-a21f-35c3fc8afe57 None] [instance: 46dc56aa-6e73-4df3-a3c4-123652dae66b] Deletion of /var/lib/nova/instan$
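
The traceback above ends with "Failed to allocate the network(s)". To watch this live while retrying a boot, something like the following should work on the compute and network nodes (assuming the default RDO log locations):

    tail -f /var/log/nova/nova-compute.log
    tail -f /var/log/neutron/openvswitch-agent.log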

My nova.conf from the Controller Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110

vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller

Output of neutron agent-list:

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

Start VM: failed to allocate the network(s), not rescheduling

Hello, I want to run a 3-node environment with CentOS 7 and OpenStack Juno.

1 Controller Node, 1 Network Node, 1 Compute Node

I followed these instructions: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all modules and now I want to run a VM.

When I try to start a VM via the Dashboard, I get this failure after a few minutes:

failed to allocate the network(s), not rescheduling

When I execute this command: nova service-list

then I get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because I didn't understand why they are registered on host localhost.localdomain. When I enable these services, I get this failure in the Dashboard: No valid host found
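
The localhost.localdomain rows are presumably stale registrations from before the hostname was set correctly. I disabled them roughly like this (nova service-disable takes the host and the binary name):

    nova service-disable localhost.localdomain nova-scheduler
    nova service-disable localhost.localdomain nova-cert
    nova service-disable localhost.localdomain nova-conductor
    nova service-disable localhost.localdomain nova-consoleauth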

My nova-api.log on the Controller Node:

2015-02-12 14:21:28.630 3463 INFO nova.osapi_compute.wsgi.server [req-48438ac7-f6fc-4a2b-9ad4-8dd2620bf0e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:21:51.247 3456 INFO nova.osapi_compute.wsgi.server [req-37b811b3-704f-4881-a7b4-6c86c69b7d33 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:21:51.261 3456 INFO nova.osapi_compute.wsgi.server [req-f8a7de9c-ce54-4dca-9d6f-3e1bed05ff40 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:21:51.296 3456 INFO nova.osapi_compute.wsgi.server [req-644c3f4f-b6fd-40b6-841f-afba55f7cf18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:16.409 3463 INFO nova.osapi_compute.wsgi.server [req-2dde0bcc-7428-4043-af6e-27e8ad757f45 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:16.422 3463 INFO nova.osapi_compute.wsgi.server [req-f2548c3c-d3fb-409f-aad6-ee756ff9f809 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:16.457 3463 INFO nova.osapi_compute.wsgi.server [req-5cc351b8-1bf4-476a-9d96-725568e2d849 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:44.070 3456 INFO nova.osapi_compute.wsgi.server [req-1b4ae5af-075d-406d-921a-acdfaa650eff None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:44.084 3456 INFO nova.osapi_compute.wsgi.server [req-1c26700c-8fba-4191-8188-35862c6faff2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:44.120 3458 INFO nova.osapi_compute.wsgi.server [req-27ad2b84-aee5-4e59-be9a-8731adad28c8 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:14.235 3462 INFO nova.osapi_compute.wsgi.server [req-7dea5fb4-6c4e-468e-a196-8bc14e6ed7a2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:14.249 3462 INFO nova.osapi_compute.wsgi.server [req-19abeef2-cf4d-4d8e-be4c-771cee8e01a4 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:14.286 3462 INFO nova.osapi_compute.wsgi.server [req-b681877d-8521-4794-9b14-fce2d3429f67 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:44.438 3463 INFO nova.osapi_compute.wsgi.server [req-cca8edf7-f8e4-4587-bc62-9bf83e4c2ade None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:44.453 3457 INFO nova.osapi_compute.wsgi.server [req-3ca2edda-2421-499d-a2f3-1d8498cab8fc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:44.486 3457 INFO nova.osapi_compute.wsgi.server [req-136041f3-0bdd-45ad-9637-81ddbe1a4587 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:14.644 3461 INFO nova.osapi_compute.wsgi.server [req-127981e1-24e0-4374-8ae2-da42fc573c8f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:14.658 3461 INFO nova.osapi_compute.wsgi.server [req-fca7e728-3114-4941-bed4-d92925534ca9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:14.791 3463 INFO nova.osapi_compute.wsgi.server [req-3acc7914-5439-4f67-86f2-a4aaecf5bcbe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:44.900 3458 INFO nova.osapi_compute.wsgi.server [req-e3f7db0c-b684-404b-9dd0-44fcbfbedcdf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:44.915 3458 INFO nova.osapi_compute.wsgi.server [req-570b6c1c-6af6-46f1-9915-7b247f5493cf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:45.047 3463 INFO nova.osapi_compute.wsgi.server [req-66221c3a-2d62-4838-bf1c-81b65d38fafb None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from the Controller Node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log from the Compute Node:

2015-02-12 14:24:34.133 5165 ERROR nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Failed to allocate network(s)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Traceback (most recent call last):
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in _b$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     block_device_info=block_device_info)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2621, i$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     block_device_info, disk_info=disk_info)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4435, i$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     raise exception.VirtualInterfaceCreateException()
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] VirtualInterfaceCreateException: Virtual Interface creation failed
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]
2015-02-12 14:24:34.134 5165 ERROR nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Build of instance 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e aborted: Failed to alloc$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Traceback (most recent call last):
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     filter_properties)
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     reason=msg)
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] BuildAbortException: Build of instance 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e aborted$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]
2015-02-12 14:24:34.157 5165 INFO nova.network.neutronv2.api [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Unable to reset device ID for port None
2015-02-12 14:24:34.491 5165 INFO nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] During sync_power_state the instance has a pending task (spawning). Skip.
2015-02-12 14:25:20.722 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:25:20.767 5165 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672
2015-02-12 14:25:20.779 5165 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672

.......................................................

2015-02-12 14:30:27.832 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
2015-02-12 14:30:39.212 5165 WARNING nova.compute.manager [-] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
2015-02-12 14:30:39.212 5165 INFO nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] During sync_power_state the instance has a pending task (spawning). Skip.
2015-02-12 14:31:29.723 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:31:29.820 5165 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 15747, total allocated virtual ram (MB): 1024
2015-02-12 14:31:29.820 5165 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2015-02-12 14:31:29.821 5165 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-02-12 14:31:29.821 5165 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-02-12 14:31:29.821 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
2015-02-12 14:32:31.721 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 15747, total allocated virtual ram (MB): 1024
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-02-12 14:32:31.814 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
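
As far as I understand it, the VirtualInterfaceCreateException above means nova-compute gave up waiting for the network-vif-plugged notification from Neutron while libvirt was setting up the interface. Purely as a diagnostic (not a fix), the wait can be relaxed via the vif plugging options in the [DEFAULT] section of nova.conf on the compute node, for example:

    vif_plugging_is_fatal = False
    vif_plugging_timeout = 0

followed by a restart of the compute service: systemctl restart openstack-nova-compute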

My nova.conf from the Controller Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110
host = controller


vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller
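
One thing I notice compared to the install guide: the nova.conf pasted above has no [neutron] section (unless I simply left it out of the paste). If I remember the Juno guide correctly, nova needs roughly the following to reach Neutron, with the password matching the neutron service user:

    [neutron]
    url = http://controller:9696
    auth_strategy = keystone
    admin_auth_url = http://controller:35357/v2.0
    admin_tenant_name = service
    admin_username = neutron
    admin_password = neutron12345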

Output of neutron agent-list:

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
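
All agents report as alive, so the next thing I would check (assuming the tunnel-based setup from the guide) is that the OVS bridges and tunnel ports really exist on both the network node and compute1:

    ovs-vsctl show

br-int and br-tun should show up on each node, with a tunnel port whose remote_ip points at the other node's tunnel address.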

My neutron.conf from the Compute Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

host = compute1

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron12345
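
This neutron.conf only carries the core settings; on the compute node the part the OVS agent actually reads is the ML2 plugin configuration. A quick way to double-check it (path as in the Juno yum guide; local_ip has to be this node's tunnel-network address):

    grep -E 'local_ip|tunnel' /etc/neutron/plugins/ml2/ml2_conf.ini

and after any change there: systemctl restart neutron-openvswitch-agent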

Start VM: failed to allocate the network(s), not rescheduling

Hello, i want to run a 3 Node Enviroment with CentOS 7 and Openstack Juno.

1 Controller Node 1 Network Node 1 Compute Node

I followed this instruction: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all Modules and now i want run a VM.

When i want to start a VM via the Dashboard i get the failure after a few minutes:

failed to allocate the network(s), not rescheduling

When i execute this command: nova service-list

then i get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because i didn't unterstand why they are on host localhost.localdomain When i enable this services then i geht the failure in the dashboard: No valid host found

My nova-api.log on the controller Node:

       2015-02-12 14:21:28.630 3463 INFO nova.osapi_compute.wsgi.server [req-48438ac7-f6fc-4a2b-9ad4-8dd2620bf0e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:21:51.247 3456 INFO nova.osapi_compute.wsgi.server [req-37b811b3-704f-4881-a7b4-6c86c69b7d33 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:21:51.261 3456 INFO nova.osapi_compute.wsgi.server [req-f8a7de9c-ce54-4dca-9d6f-3e1bed05ff40 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:21:51.296 3456 INFO nova.osapi_compute.wsgi.server [req-644c3f4f-b6fd-40b6-841f-afba55f7cf18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:16.409 3463 INFO nova.osapi_compute.wsgi.server [req-2dde0bcc-7428-4043-af6e-27e8ad757f45 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:16.422 3463 INFO nova.osapi_compute.wsgi.server [req-f2548c3c-d3fb-409f-aad6-ee756ff9f809 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:16.457 3463 INFO nova.osapi_compute.wsgi.server [req-5cc351b8-1bf4-476a-9d96-725568e2d849 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:44.070 3456 INFO nova.osapi_compute.wsgi.server [req-1b4ae5af-075d-406d-921a-acdfaa650eff None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:44.084 3456 INFO nova.osapi_compute.wsgi.server [req-1c26700c-8fba-4191-8188-35862c6faff2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:44.120 3458 INFO nova.osapi_compute.wsgi.server [req-27ad2b84-aee5-4e59-be9a-8731adad28c8 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:14.235 3462 INFO nova.osapi_compute.wsgi.server [req-7dea5fb4-6c4e-468e-a196-8bc14e6ed7a2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:14.249 3462 INFO nova.osapi_compute.wsgi.server [req-19abeef2-cf4d-4d8e-be4c-771cee8e01a4 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:14.286 3462 INFO nova.osapi_compute.wsgi.server [req-b681877d-8521-4794-9b14-fce2d3429f67 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:44.438 3463 INFO nova.osapi_compute.wsgi.server [req-cca8edf7-f8e4-4587-bc62-9bf83e4c2ade None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:44.453 3457 INFO nova.osapi_compute.wsgi.server [req-3ca2edda-2421-499d-a2f3-1d8498cab8fc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:44.486 3457 INFO nova.osapi_compute.wsgi.server [req-136041f3-0bdd-45ad-9637-81ddbe1a4587 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:14.644 3461 INFO nova.osapi_compute.wsgi.server [req-127981e1-24e0-4374-8ae2-da42fc573c8f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:14.658 3461 INFO nova.osapi_compute.wsgi.server [req-fca7e728-3114-4941-bed4-d92925534ca9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:14.791 3463 INFO nova.osapi_compute.wsgi.server [req-3acc7914-5439-4f67-86f2-a4aaecf5bcbe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:44.900 3458 INFO nova.osapi_compute.wsgi.server [req-e3f7db0c-b684-404b-9dd0-44fcbfbedcdf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:44.915 3458 INFO nova.osapi_compute.wsgi.server [req-570b6c1c-6af6-46f1-9915-7b247f5493cf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:45.047 3463 INFO nova.osapi_compute.wsgi.server [req-66221c3a-2d62-4838-bf1c-81b65d38fafb None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from Controller Node:

 2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
    2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
    2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
    2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
    2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
    2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log from Compute Node

2015-02-12 14:24:34.133 5165 ERROR nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Failed to allocate network(s)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Traceback (most recent call last):
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in _b$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     block_device_info=block_device_info)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2621, i$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     block_device_info, disk_info=disk_info)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4435, i$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     raise exception.VirtualInterfaceCreateException()
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] VirtualInterfaceCreateException: Virtual Interface creation failed
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]
2015-02-12 14:24:34.134 5165 ERROR nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Build of instance 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e aborted: Failed to alloc$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Traceback (most recent call last):
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     filter_properties)
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     reason=msg)
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] BuildAbortException: Build of instance 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e aborted$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]
2015-02-12 14:24:34.157 5165 INFO nova.network.neutronv2.api [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Unable to reset device ID for port None
2015-02-12 14:24:34.491 5165 INFO nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] During sync_power_state the instance has a pending task (spawning). Skip.
2015-02-12 14:25:20.722 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:25:20.767 5165 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672
2015-02-12 14:25:20.779 5165 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672

.......................................................

2015-02-12 14:30:27.832 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
2015-02-12 14:30:39.212 5165 WARNING nova.compute.manager [-] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
2015-02-12 14:30:39.212 5165 INFO nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] During sync_power_state the instance has a pending task (spawning). Skip.
2015-02-12 14:31:29.723 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:31:29.820 5165 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 15747, total allocated virtual ram (MB): 1024
2015-02-12 14:31:29.820 5165 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2015-02-12 14:31:29.821 5165 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-02-12 14:31:29.821 5165 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-02-12 14:31:29.821 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
2015-02-12 14:32:31.721 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 15747, total allocated virtual ram (MB): 1024
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-02-12 14:32:31.814 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1

My nova.conf from Controller Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110
host=controller


vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller

Output of neutron agent-list:

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

My neutron.conf from Compute Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

host=compute1

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron12345

On the Network Node I have: neutron-ns-metadata-proxy-5a4a58ec-1455-4140-a356-a9b24cebdc0e.log

2015-02-12 13:08:34.212 2681 INFO neutron.common.config [-] Logging enabled!
2015-02-12 13:08:34.270 2687 INFO neutron.wsgi [-] (2687) wsgi starting up on http://0.0.0.0:9697/

My neutron-metadata.log on the Network Node:

2015-02-12 13:08:30.406 1253 ERROR neutron.agent.metadata.agent [-] Failed reporting state!
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent Traceback (most recent call last):
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/agent.py", line 351, in _report_state
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     use_call=self.agent_state.get('start_flag'))
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 70, in report_state
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return self.call(context, msg)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/common/log.py", line 34, in wrapper
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return method(*args, **kwargs)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 161, in call
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     context, msg, rpc_method='call', **kwargs)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 187, in __call_rpc_method
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return func(context, msg['method'], **msg['args'])
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 389, in call
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return self.prepare().call(ctxt, method, **kwargs)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     retry=self.retry)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     timeout=timeout, retry=retry)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     retry=retry)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 397, in _send
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     result = self._waiter.wait(msg_id, timeout)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 285, in wait
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     reply, ending = self._poll_connection(msg_id, timeout)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 235, in _poll_connection
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     % msg_id)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent MessagingTimeout: Timed out waiting for a reply to message ID 6b2c70423b1043bbae09da0b18ebc703
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent
2015-02-12 13:08:30.408 1253 WARNING neutron.openstack.common.loopingcall [-] task run outlasted interval by 42.062282 sec
2015-02-12 13:17:00.422 1253 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic 'q-plugin': [Errno 104] Connection reset by peer
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 655, in ensure
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     return method()
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 752, in _publish
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     publisher = cls(self.conf, self.channel, topic=topic, **kwargs)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 378, in __init__
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     **options)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 326, in __init__
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.reconnect(channel)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 334, in reconnect
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     routing_key=self.routing_key)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 83, in __init__
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.revive(self._channel)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 214, in revive
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.declare()
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 100, in declare
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.exchange.declare()
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/entity.py", line 163, in declare
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     nowait=nowait, passive=passive,
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/channel.py", line 613, in exchange_declare
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self._send_method((40, 10), args)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 56, in _send_method
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.channel_id, method_sig, args, content,
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 221, in write_method
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     write_frame(1, channel, payload)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/transport.py", line 177, in write_frame
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     frame_type, channel, size, payload, 0xce,
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 359, in sendall
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     tail = self.send(data, flags)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 342, in send
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     total_sent += fd.send(data[total_sent:], flags)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit error: [Errno 104] Connection reset by peer
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit

Start VM: failed to allocate the network(s), not rescheduling

Hello, I want to run a 3-node environment with CentOS 7 and OpenStack Juno.

1 Controller Node
1 Network Node
1 Compute Node

I followed this guide: http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html

I have installed all modules and now I want to run a VM.

When I try to start a VM via the Dashboard, I get the following error after a few minutes:

failed to allocate the network(s), not rescheduling

When I execute this command: nova service-list

then I get:

+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:32.000000 | trial log       |
| 2  | nova-cert        | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:48:45.000000 | trial log       |
| 3  | nova-conductor   | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:00.000000 | trial log       |
| 4  | nova-consoleauth | localhost.localdomain | internal | disabled | down  | 2015-02-05T14:49:19.000000 | trial log       |
| 5  | nova-scheduler   | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 6  | nova-cert        | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 7  | nova-conductor   | controller            | internal | enabled  | up    | 2015-02-11T13:40:26.000000 | -               |
| 8  | nova-consoleauth | controller            | internal | enabled  | up    | 2015-02-11T13:40:27.000000 | -               |
| 9  | nova-compute     | compute1              | nova     | enabled  | up    | 2015-02-11T13:40:32.000000 | -               |
+----+------------------+-----------------------+----------+----------+-------+----------------------------+-----------------+

I have disabled the first four services because I didn't understand why they are registered on host localhost.localdomain. When I enable these services, I get the error in the dashboard: No valid host found.
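
For reference, disabling them can be done from the controller roughly like this (host and binary names taken from the nova service-list output above; the --reason text matches the Disabled Reason column):

# disable the stale services registered under localhost.localdomain
nova service-disable --reason "trial log" localhost.localdomain nova-scheduler
nova service-disable --reason "trial log" localhost.localdomain nova-cert
nova service-disable --reason "trial log" localhost.localdomain nova-conductor
nova service-disable --reason "trial log" localhost.localdomain nova-consoleauth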

My nova-api.log on the Controller Node:

2015-02-12 14:21:28.630 3463 INFO nova.osapi_compute.wsgi.server [req-48438ac7-f6fc-4a2b-9ad4-8dd2620bf0e9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:21:51.247 3456 INFO nova.osapi_compute.wsgi.server [req-37b811b3-704f-4881-a7b4-6c86c69b7d33 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:21:51.261 3456 INFO nova.osapi_compute.wsgi.server [req-f8a7de9c-ce54-4dca-9d6f-3e1bed05ff40 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:21:51.296 3456 INFO nova.osapi_compute.wsgi.server [req-644c3f4f-b6fd-40b6-841f-afba55f7cf18 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:16.409 3463 INFO nova.osapi_compute.wsgi.server [req-2dde0bcc-7428-4043-af6e-27e8ad757f45 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:16.422 3463 INFO nova.osapi_compute.wsgi.server [req-f2548c3c-d3fb-409f-aad6-ee756ff9f809 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:16.457 3463 INFO nova.osapi_compute.wsgi.server [req-5cc351b8-1bf4-476a-9d96-725568e2d849 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:22:44.070 3456 INFO nova.osapi_compute.wsgi.server [req-1b4ae5af-075d-406d-921a-acdfaa650eff None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:22:44.084 3456 INFO nova.osapi_compute.wsgi.server [req-1c26700c-8fba-4191-8188-35862c6faff2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:22:44.120 3458 INFO nova.osapi_compute.wsgi.server [req-27ad2b84-aee5-4e59-be9a-8731adad28c8 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:14.235 3462 INFO nova.osapi_compute.wsgi.server [req-7dea5fb4-6c4e-468e-a196-8bc14e6ed7a2 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:14.249 3462 INFO nova.osapi_compute.wsgi.server [req-19abeef2-cf4d-4d8e-be4c-771cee8e01a4 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:14.286 3462 INFO nova.osapi_compute.wsgi.server [req-b681877d-8521-4794-9b14-fce2d3429f67 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:23:44.438 3463 INFO nova.osapi_compute.wsgi.server [req-cca8edf7-f8e4-4587-bc62-9bf83e4c2ade None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:23:44.453 3457 INFO nova.osapi_compute.wsgi.server [req-3ca2edda-2421-499d-a2f3-1d8498cab8fc None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:23:44.486 3457 INFO nova.osapi_compute.wsgi.server [req-136041f3-0bdd-45ad-9637-81ddbe1a4587 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:14.644 3461 INFO nova.osapi_compute.wsgi.server [req-127981e1-24e0-4374-8ae2-da42fc573c8f None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:14.658 3461 INFO nova.osapi_compute.wsgi.server [req-fca7e728-3114-4941-bed4-d92925534ca9 None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:14.791 3463 INFO nova.osapi_compute.wsgi.server [req-3acc7914-5439-4f67-86f2-a4aaecf5bcbe None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$
2015-02-12 14:24:44.900 3458 INFO nova.osapi_compute.wsgi.server [req-e3f7db0c-b684-404b-9dd0-44fcbfbedcdf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/servers/40dfe2e2-1d$
2015-02-12 14:24:44.915 3458 INFO nova.osapi_compute.wsgi.server [req-570b6c1c-6af6-46f1-9915-7b247f5493cf None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/flavors/1 HTTP/1.1"$
2015-02-12 14:24:45.047 3463 INFO nova.osapi_compute.wsgi.server [req-66221c3a-2d62-4838-bf1c-81b65d38fafb None] 192.168.1.110 "GET /v2/b8b7c49f7a73463fa3ca0d2615b9680f/extensions HTTP/1.1$

My Neutron server.log from Controller Node:

2015-02-11 14:03:03.686 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43930)
2015-02-11 14:03:03.724 1453 INFO neutron.wsgi [req-b9670750-36b7-4375-b33d-e622415706c2 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/networks.json?id=cf37728c-cba6-4b16-a197-$
2015-02-11 14:03:03.725 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43931)
2015-02-11 14:03:03.737 1453 INFO neutron.wsgi [req-edbf366e-a7a1-4043-be7a-da503dd81063 None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/ports.json?tenant_id=b8b7c49f7a73463fa3ca$
2015-02-11 14:03:03.739 1453 INFO neutron.wsgi [-] (1453) accepted ('192.168.1.110', 43932)
2015-02-11 14:03:03.742 1453 INFO neutron.wsgi [req-d56002d6-c8e7-47d4-8dc6-2657e988c0ba None] 192.168.1.110 - - [11/Feb/2015 14:03:03] "GET /v2.0/quotas/b8b7c49f7a73463fa3ca0d2615b9680f.j$

My nova-compute.log from Compute Node:

2015-02-12 14:24:34.133 5165 ERROR nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Failed to allocate network(s)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Traceback (most recent call last):
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in _b$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     block_device_info=block_device_info)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2621, i$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     block_device_info, disk_info=disk_info)
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4435, i$
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     raise exception.VirtualInterfaceCreateException()
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] VirtualInterfaceCreateException: Virtual Interface creation failed
2015-02-12 14:24:34.133 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]
2015-02-12 14:24:34.134 5165 ERROR nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Build of instance 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e aborted: Failed to alloc$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Traceback (most recent call last):
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _d$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     filter_properties)
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2148, in _b$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]     reason=msg)
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] BuildAbortException: Build of instance 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e aborted$
2015-02-12 14:24:34.134 5165 TRACE nova.compute.manager [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e]
2015-02-12 14:24:34.157 5165 INFO nova.network.neutronv2.api [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] Unable to reset device ID for port None
2015-02-12 14:24:34.491 5165 INFO nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] During sync_power_state the instance has a pending task (spawning). Skip.
2015-02-12 14:25:20.722 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:25:20.767 5165 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672
2015-02-12 14:25:20.779 5165 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672

.......................................................

2015-02-12 14:30:27.832 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
2015-02-12 14:30:39.212 5165 WARNING nova.compute.manager [-] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
2015-02-12 14:30:39.212 5165 INFO nova.compute.manager [-] [instance: 40dfe2e2-1ddb-4e46-9f5c-7c0e24be7b7e] During sync_power_state the instance has a pending task (spawning). Skip.
2015-02-12 14:31:29.723 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:31:29.820 5165 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 15747, total allocated virtual ram (MB): 1024
2015-02-12 14:31:29.820 5165 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2015-02-12 14:31:29.821 5165 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-02-12 14:31:29.821 5165 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-02-12 14:31:29.821 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
2015-02-12 14:32:31.721 5165 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 15747, total allocated virtual ram (MB): 1024
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-02-12 14:32:31.813 5165 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-02-12 14:32:31.814 5165 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute1:compute1
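
Since the build dies with VirtualInterfaceCreateException (nova gave up waiting for the virtual interface to be plugged), a reasonable next check is the Open vSwitch agent on the compute node, roughly like this (log path assumes the RDO/CentOS 7 packages):

# on compute1: watch the OVS agent while retrying the boot, and check the bridge/port setup
tail -f /var/log/neutron/openvswitch-agent.log
ovs-vsctl show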

My nova.conf from Controller Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

my_ip = 192.168.1.110
host=controller


vncserver_listen = 192.168.1.110
vncserver_proxyclient_address = 192.168.1.110

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql://nova:nova1234@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova12345

[glance]
host = controller
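
One thing not shown above is the [neutron] section of this nova.conf. For comparison, this is roughly what the Juno install guide puts there on the controller (NEUTRON_PASS and METADATA_SECRET are placeholders, not my real values):

[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
# needed so the metadata agent on the network node can reach nova's metadata API
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET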

Output of neutron agent-list:

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 2c6c2f32-d084-49d1-a3a1-58341c67d9b8 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
| 329ee32c-4421-4775-bc73-d32eb7b0df8d | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3a5135fa-c664-426c-beaf-c7e5e4abaefa | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| dc9fe7b2-ab9d-42ec-8a98-abd5b95c7737 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
| ec13afd0-8e21-45d9-9cd4-fc49805e48db | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

My neutron.conf from Compute Node:

[DEFAULT]
verbose = True

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

auth_strategy = keystone

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

host=compute1

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron12345
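
The ML2 plugin configuration on this compute node is not shown. For comparison, this is roughly what the Juno guide configures in /etc/neutron/plugins/ml2/ml2_conf.ini on a compute node (INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS is a placeholder for the tunnel interface IP):

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
# GRE tunnel endpoint of this compute node
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
enable_tunneling = True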

On the Network Node I have: neutron-ns-metadata-proxy-5a4a58ec-1455-4140-a356-a9b24cebdc0e.log

2015-02-12 13:08:34.212 2681 INFO neutron.common.config [-] Logging enabled!
2015-02-12 13:08:34.270 2687 INFO neutron.wsgi [-] (2687) wsgi starting up on http://0.0.0.0:9697/

My neutron-metadata.log on the Network Node:

2015-02-12 13:08:30.406 1253 ERROR neutron.agent.metadata.agent [-] Failed reporting state!
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent Traceback (most recent call last):
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/agent.py", line 351, in _report_state
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     use_call=self.agent_state.get('start_flag'))
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 70, in report_state
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return self.call(context, msg)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/common/log.py", line 34, in wrapper
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return method(*args, **kwargs)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 161, in call
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     context, msg, rpc_method='call', **kwargs)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 187, in __call_rpc_method
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return func(context, msg['method'], **msg['args'])
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 389, in call
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     return self.prepare().call(ctxt, method, **kwargs)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     retry=self.retry)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     timeout=timeout, retry=retry)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     retry=retry)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 397, in _send
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     result = self._waiter.wait(msg_id, timeout)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 285, in wait
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     reply, ending = self._poll_connection(msg_id, timeout)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 235, in _poll_connection
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent     % msg_id)
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent MessagingTimeout: Timed out waiting for a reply to message ID 6b2c70423b1043bbae09da0b18ebc703
2015-02-12 13:08:30.406 1253 TRACE neutron.agent.metadata.agent
2015-02-12 13:08:30.408 1253 WARNING neutron.openstack.common.loopingcall [-] task run outlasted interval by 42.062282 sec
2015-02-12 13:17:00.422 1253 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic 'q-plugin': [Errno 104] Connection reset by peer
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 655, in ensure
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     return method()
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 752, in _publish
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     publisher = cls(self.conf, self.channel, topic=topic, **kwargs)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 378, in __init__
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     **options)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 326, in __init__
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.reconnect(channel)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 334, in reconnect
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     routing_key=self.routing_key)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 83, in __init__
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.revive(self._channel)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 214, in revive
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.declare()
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 100, in declare
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.exchange.declare()
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/entity.py", line 163, in declare
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     nowait=nowait, passive=passive,
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/channel.py", line 613, in exchange_declare
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self._send_method((40, 10), args)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 56, in _send_method
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     self.channel_id, method_sig, args, content,
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 221, in write_method
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     write_frame(1, channel, payload)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/transport.py", line 177, in write_frame
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     frame_type, channel, size, payload, 0xce,
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 359, in sendall
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     tail = self.send(data, flags)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 342, in send
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit     total_sent += fd.send(data[total_sent:], flags)
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit error: [Errno 104] Connection reset by peer
2015-02-12 13:17:00.422 1253 TRACE oslo.messaging._drivers.impl_rabbit
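
Since the metadata agent times out reporting state and then gets its AMQP connection reset, a quick sanity check is whether the network node can actually reach RabbitMQ on the controller (a sketch; hostnames and ports as in the configs above, CentOS 7 defaults assumed):

# on the network node: is the broker port reachable?
ping -c 3 controller
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/controller/5672' && echo "5672 reachable" || echo "5672 NOT reachable"

# on the controller: is RabbitMQ running and is the port open in firewalld?
systemctl status rabbitmq-server
firewall-cmd --list-ports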