
I created several instances and have since deleted them, but the Hypervisors panel still shows their Memory Usage and Local Disk Usage in Ocata.

asked 2017-12-19 21:32:44 -0500

liaodalin19903

updated 2018-01-03 20:44:15 -0500

I installed openstack-ocata on a single physical server (CentOS 7.2); the controller and the compute node are both on it.

I created several instances and have since deleted them, but the Hypervisors panel still shows their Memory Usage and Local Disk Usage.

[Screenshot: Hypervisors panel showing non-zero Memory Usage and Local Disk Usage]

Why does this happen? Is it an Ocata bug or a configuration error on our side?

If the Memory Usage stays high like this, could the scheduler fail to find a usable host when creating a new instance (NoValidHost: No valid host was found)?

I hit exactly this error when creating an instance, so I suspect it is caused by the issue above (Memory Usage reported as high even though I have deleted all the instances):

NoValidHost: No valid host was found.

  File "/usr/lib/python2.7/site-packages/nova/conductor/", line 866, in schedule_and_build_instances
    request_specs[0].to_legacy_filter_properties_dict())
  File "/usr/lib/python2.7/site-packages/nova/conductor/", line 597, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/", line 371, in wrapped
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/", line 129, in select_destinations
    return, 'select_destinations', **msg_args)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/", line 169, in call
    retry=self.retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/", line 97, in _send
    timeout=timeout, retry=retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/", line 458, in send
    retry=retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/", line 449, in _send
    raise result
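For context on why stale usage can cause this error: the scheduler's RAM filtering rejects any host whose tracked free memory (including overcommit) cannot cover the request, so leftover usage from deleted instances can indeed produce NoValidHost. A simplified sketch of that style of check (an illustration only, not nova's actual filter code; the function name and the 1.5 overcommit default are assumptions):

```python
# Simplified illustration of a RamFilter-style host check (not nova's
# real code). A host passes only if its free RAM, with overcommit,
# covers the requested amount.
def host_passes(total_mb, used_mb, requested_mb, ram_allocation_ratio=1.5):
    # Free RAM as the scheduler sees it: capacity * overcommit - tracked usage
    free_mb = total_mb * ram_allocation_ratio - used_mb
    return free_mb >= requested_mb

# Healthy tracking: 8 GB host, 2 GB actually used, 4 GB requested -> passes
assert host_passes(8192, 2048, 4096) is True

# Stale tracking: deleted instances still counted as ~11 GB used -> rejected,
# which surfaces as "NoValidHost: No valid host was found."
assert host_passes(8192, 11000, 4096) is False
```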


Running nova list confirms there are no instances left:

# nova list
/usr/lib/python2.7/site-packages/novaclient/ UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

2 answers


answered 2018-01-16 08:20:31 -0500

liaodalin19903

After I restarted openstack-nova-compute, the usage went back to normal:

systemctl restart  openstack-nova-compute
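For anyone hitting the same problem, the recovery steps can be sketched as follows (assumes a systemd-based CentOS 7 node and an admin-openrc credentials file as used elsewhere in this thread; exact command output varies by release):

```shell
# Restart the compute service so its resource tracker re-reports
# usage for this hypervisor (assumes systemd, CentOS 7)
systemctl restart openstack-nova-compute

# Load admin credentials, then confirm the aggregate usage dropped
source admin-openrc
openstack hypervisor stats show   # inspect memory_mb_used and local_gb_used
```

If the numbers are still stale after a restart, checking the nova-compute log for resource tracker errors is a reasonable next step.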

[Screenshot: Hypervisors panel showing Memory Usage and Local Disk Usage back to zero]


answered 2017-12-26 01:34:16 -0500

yaoshenshen+1

Check the nova database tables in MySQL.
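If this answer means inspecting the nova database directly, a minimal sketch of such a check (assumes the default database name nova and its standard instances table; adjust the credentials for your deployment, and only read these rows, never hand-edit them):

```shell
# List instance records nova still counts as active; rows for properly
# deleted instances should have deleted != 0
mysql -u root -p nova -e \
  "SELECT uuid, vm_state, deleted FROM instances WHERE deleted = 0;"
```

If deleted instances still appear here with deleted = 0, the compute node failed to process the deletion, which would explain the stale usage figures.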



@yaoshenshen+1 what does "check mysql table nova" mean?

liaodalin19903 ( 2017-12-29 03:10:53 -0500 )

How about running this on the controller:

[root@control ~]# source admin-openrc
[root@control ~]# nova list

yaoshenshen+1 ( 2017-12-29 04:14:54 -0500 )

@yaoshenshen+1 see my EDIT-1

liaodalin19903 ( 2018-01-03 20:44:28 -0500 )




Seen: 273 times

Last updated: Jan 16 '18