I created several instances and have since deleted them, but in Ocata the Hypervisors panel still shows Memory Usage and Local Disk Usage.

I installed openstack-ocata on a single physical server (CentOS 7.2); the controller and the compute node both run on it.

I created several instances and have now deleted them all, but the Hypervisors panel still shows Memory Usage and Local Disk Usage.

[screenshot: Hypervisors panel showing non-zero Memory Usage and Local Disk Usage]

Why does this happen? Is it an Ocata bug, or an error in our configuration?
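
For reference, the same figures could be cross-checked from the CLI, to rule out a dashboard-only display problem. A rough sketch, assuming admin credentials and the standard python-openstackclient (<compute-hostname> is just a placeholder for the compute node's hypervisor hostname):

# openstack hypervisor list
# openstack hypervisor show <compute-hostname>
# openstack hypervisor stats show

If I am not mistaken, the fields to watch are memory_mb_used, local_gb_used and running_vms, which should shrink back once all instances are gone.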


If the reported Memory Usage stays this high, as shown above, could creating a new instance fail to find a usable host? (NoValidHost: No valid host was found.)
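
As far as I understand it, the scheduler's RamFilter compares each host's free_ram_mb (derived from these same usage figures, scaled by ram_allocation_ratio) against the flavor's RAM, so stale usage could in principle make every host fail the filter. A quick way to see which filters and ratio are configured, assuming the default /etc/nova/nova.conf path (option names may vary slightly between releases):

# grep -E "enabled_filters|scheduler_default_filters|ram_allocation_ratio" /etc/nova/nova.conf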

I do get this error when creating an instance, so I suspect it is caused by the issue above (Memory Usage is high even though I have deleted all the instances):

NoValidHost: No valid host was found.

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 866, in schedule_and_build_instances request_specs[0].to_legacy_filter_properties_dict()) File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 597, in _schedule_instances hosts = self.scheduler_client.select_destinations(context, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 371, in wrapped return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations return self.queryclient.select_destinations(context, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method return getattr(self.instance, __name)(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 32, in select_destinations return self.scheduler_rpcapi.select_destinations(context, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 129, in select_destinations return cctxt.call(ctxt, 'select_destinations', **msg_args) File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call retry=self.retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 97, in _send timeout=timeout, retry=retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send retry=retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send raise result


EDIT-1

When I run nova list, I get the output below:

# nova list
/usr/lib/python2.7/site-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
  warnings.warn(msg)
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
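
Since nova list is scoped to the current project, I also plan to double-check that no instances exist in any project and then look at what the hypervisor still reports (sketch, assuming admin credentials):

# openstack server list --all-projects
# openstack hypervisor stats show

If the list is empty everywhere but memory_mb_used and local_gb_used stay non-zero, the stale usage would seem to be tracked on the compute side rather than being just a dashboard display issue.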