OpenStack Kilo - VMs stuck in the scheduling state

Hi,

I wonder if the community could help me. I have an OpenStack environment built as described below, but I cannot boot VMs: they get stuck in the scheduling state. (Please let me know if I need to provide more info, or if there is a troubleshooting process you could suggest.)

I have 3 KVM hosts running the OpenStack management services (each OpenStack component replicated 3 times for HA):

- 3 VMs (kloud-controller1,2,3) running the Cinder, Glance, Nova and Neutron APIs; these VMs are attached to a Ceph backend which stores the images and volumes for Glance and Cinder
- 3 VMs running the RabbitMQ message queue
- 2 MySQL VMs running Galera, for active-active replication
- 3 VMs acting as Neutron network nodes, with the ML2 plugin
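
In case the messaging or database layer is the suspect, this is how I would sanity-check both clusters (run on the respective VMs):

# on one of the RabbitMQ VMs: all three nodes should be listed, with no partitions
rabbitmqctl cluster_status

# on one of the Galera VMs: wsrep_cluster_size should match the number of MySQL VMs (2 here)
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"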

There is a Ceph cluster with 3 monitors and OSD daemons on each of the 3 nodes, with direct-attached storage (DAS) on each host. The Ceph cluster is in a healthy state, and I have previously been able to boot virtual machines.
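
For reference, the health claim above is based on the usual checks:

# summary of monitors, OSDs and placement groups
ceph -s
# prints HEALTH_OK when the cluster is healthy
ceph health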

When I run nova agent-list, it returns nothing:

root@kloud-controller1:~# nova agent-list
+----------+------------+----+--------------+---------+---------+-----+
| Agent_id | Hypervisor | OS | Architecture | Version | Md5hash | Url |
+----------+------------+----+--------------+---------+---------+-----+
+----------+------------+----+--------------+---------+---------+-----+


If I run nova-manage service list, it reports that all services are fine:

root@kloud-controller1:~# nova-manage service list
No handlers could be found for logger "oslo_config.cfg"
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:27
nova-conductor   kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:27
nova-consoleauth kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:25
nova-scheduler   kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:28
nova-consoleauth kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:27
nova-cert        kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:27
nova-scheduler   kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:29
nova-conductor   kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:34
nova-consoleauth kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:32
nova-scheduler   kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:24
nova-cert        kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:33
nova-conductor   kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:31
nova-compute     kloud-compute1                       nova             enabled    :-)   2015-06-08 09:25:31
nova-compute     kloud-compute3                       nova             enabled    :-)   2015-06-08 09:25:25
nova-compute     kloud-compute4                       nova             enabled    :-)   2015-06-08 09:25:31
nova-compute     kloud-compute2                       nova             enabled    :-)   2015-06-08 09:25:
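
As a cross-check, the same services can be listed through the API rather than straight from the database:

# nova-manage reads the nova database directly; nova service-list goes through nova-api
nova service-list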


cat openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=xxxxxx
export OS_AUTH_URL=http://172.x.x.x:35357/v3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
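
(I source this file before running any of the clients; a quick check confirms the variables are exported:)

source openrc.sh
env | grep ^OS_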


keystone --debug catalog

root@kloud-controller1:~# keystone --debug catalog
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://172.x.x.x:35357/v3/tokens
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 172.x.x.x
DEBUG:urllib3.connectionpool:"POST /v3/tokens HTTP/1.1" 404 93
DEBUG:keystoneclient.session:Request returned failure status: 404
Authorization Failed: The resource could not be found. (HTTP 404) (Request-ID: req-b03f47bc-23cb-4383-9671-2ce1279b6738)
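
One thing I notice in the debug output: the client is using the v2 identity plugin (keystoneclient.auth.identity.v2) but POSTing to /v3/tokens, which is not a valid v3 path (v3 authentication uses /v3/auth/tokens). So the 404 may simply mean the old keystone CLI cannot handle a v3 OS_AUTH_URL, rather than keystone itself being broken. To separate the two, I would query the endpoint directly and retry with the v3-aware openstack client:

# the identity service should return a version document here if v3 is enabled
curl -s http://172.x.x.x:35357/v3 | python -mjson.tool

# python-openstackclient understands the v3 variables set in openrc.sh
openstack catalog list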


I have turned off all the redundant VMs so that I have just one instance of each OpenStack service, and checked the nova-scheduler and cinder-scheduler logs.

root@kloud-controller1:~# tail -5 /var/log/nova/nova-scheduler.log
2015-06-08 10:37:39.746 1120 INFO nova.scheduler.host_manager [req-4721db32-89ac-47ad-9c54-ad37879572dc - - - - -] Successfully synced instances from host 'kloud-compute4'.
2015-06-08 10:38:51.105 1120 INFO nova.scheduler.host_manager [req-9d030779-9fac-4ead-8959-bf181d07db3c - - - - -] Successfully synced instances from host 'kloud-compute1'.
2015-06-08 10:39:07.700 1120 INFO nova.scheduler.host_manager [req-e81f59aa-a6b7-41dc-96a2-a1917fb9c9cd - - - - -] Successfully synced instances from host 'kloud-compute3'.
2015-06-08 10:39:28.210 1120 INFO nova.scheduler.host_manager [req-fdd2c9b1-e818-4eea-a43a-79a13741c870 - - - - -] Successfully synced instances from host 'kloud-compute2'.
2015-06-08 10:39:42.717 1120 INFO nova.scheduler.host_manager [req-2c9555f8-dada-4944-aca3-24856733bd2a - - - - -] Successfully synced instances from host 'kloud-compute4'.
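
In case more detail is useful, I can also attach the fault recorded on one of the stuck instances and the conductor log (instance UUID redacted here):

# 'nova show' includes a fault field and the current task_state for the instance
nova show <instance-uuid>

# nova-conductor sits between the scheduler and the compute nodes
tail -50 /var/log/nova/nova-conductor.log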


Thanks.