Deploying OpenStack with a Multi-Hypervisor Environment

After some digging I found how to add another compute node for VMware, using the RDO answer file. Everything shows up in Horizon: storage, CPU, memory, etc. I converted a simple CirrOS image to a VMDK image and uploaded it to Glance, and the hypervisor shows up as "up" in nova hypervisor-list. I then tried to boot an instance on the new VMware hypervisor:

nova boot --nic net-id=cd86a5e8-46a0-4044-8ffe-8ca5cd35d2c6 --image ed9c81a4-97b0-4936-b880-b0ae511f9b47 --flavor m1.nano cirrosvmdk --availability-zone nova:server3
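
For reference, the conversion and upload were along these lines; the file names and the vmware_* property values below are placeholders based on the VMware driver documentation, not an exact transcript of what I ran:

qemu-img convert -f qcow2 -O vmdk cirros-0.3.4-x86_64-disk.img cirros.vmdk
openstack image create --disk-format vmdk --container-format bare \
  --property vmware_disktype=sparse --property vmware_adaptertype=ide \
  --file cirros.vmdk cirrosvmdk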

Message: No valid host was found. There are not enough hosts available.
Code: 500
Details:

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 392, in build_instances
    context, request_spec, filter_properties)
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 436, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result
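
The trace is just the conductor re-raising what the scheduler returned, so the next thing I plan to check is the scheduler log on the controller to see which filter dropped the VMware node, plus which filters are enabled (stock RDO paths assumed):

grep -i filter /var/log/nova/nova-scheduler.log | tail -n 50
grep scheduler_default_filters /etc/nova/nova.conf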

I cannot find anything in the logs so far, and my KVM side is fine. Anything I can check or test would be greatly appreciated.
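
One thing I still want to rule out is whether "server3" in --availability-zone nova:server3 actually matches the host the VMware nova-compute service registered under; with the VMware driver the hypervisor is reported as the vCenter cluster (domain-c48...), so the host part of the zone hint has to be the nova-compute service host from service-list, not an ESXi host name. I intend to compare:

nova service-list
nova hypervisor-list
nova availability-zone-list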

Logs:

[root@vmnode ~]# tail -f /var/log/{nova,neutron}/*.log

==> /var/log/nova/nova-compute.log <==
2016-06-17 03:51:56.646 9197 DEBUG nova.compute.manager [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:5809
2016-06-17 03:52:02.646 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:02.647 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:03.616 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:11.620 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:23.619 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:23.620 9197 DEBUG nova.compute.manager [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:6405
2016-06-17 03:52:37.620 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:40.620 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:40.621 9197 DEBUG nova.compute.manager [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:6775

==> /var/log/nova/nova-manage.log <==
2016-06-17 03:39:37.005 7386 DEBUG nova.servicegroup.drivers.db [req-d69cdf77-dec4-4fbc-a9e2-941ff89fadbf - - - - -] Seems service nova-osapi_compute on host 0.0.0.0 is down. Last heartbeat was 2016-05-24 15:57:03. Elapsed time is 2050954.00566 is_up /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:82
2016-06-17 03:39:37.006 7386 DEBUG nova.servicegroup.drivers.db [req-d69cdf77-dec4-4fbc-a9e2-941ff89fadbf - - - - -] Seems service nova-metadata on host 0.0.0.0 is down. Last heartbeat was 2016-05-24 15:57:03. Elapsed time is 2050954.00645 is_up /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:82
2016-06-17 03:40:02.491 7449 DEBUG nova.servicegroup.drivers.db [req-fcf2f3ce-9b4c-40c0-ae32-02c2694015dd - - - - -] Seems service nova-osapi_compute on host 0.0.0.0 is down. Last heartbeat was 2016-05-24 15:57:03. Elapsed time is 2050979.49176 is_up /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:82
2016-06-17 03:40:02.492 7449 DEBUG nova.servicegroup.drivers.db [req-fcf2f3ce-9b4c-40c0-ae32-02c2694015dd - - - - -] Seems service nova-metadata on host 0.0.0.0 is down. Last heartbeat was 2016-05-24 15:57:03. Elapsed time is 2050979.49249 is_up /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:82
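
Those nova-osapi_compute and nova-metadata records on host 0.0.0.0 look like stale service entries from an earlier install rather than anything on this node; as far as I know they can be listed and, if genuinely dead, removed with something like (the ID below is an example, not a real one):

nova service-list
nova service-delete <service-id>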

==> /var/log/neutron/openvswitch-agent.log <==
2016-06-16 17:30:22.112 7620 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 8.1.0
2016-06-16 17:30:22.217 7620 INFO oslo_rootwrap.client [-] Spawned new rootwrap daemon process with pid=7646
2016-06-16 17:30:22.348 7620 INFO neutron.agent.l2.extensions.manager [req-26945167-8663-4e5a-98bd-b59317fd92f0 - - - - -] Loaded agent extensions: []
2016-06-16 17:30:22.613 7620 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-26945167-8663-4e5a-98bd-b59317fd92f0 - - - - -] Agent initialized successfully, now running...
2016-06-16 17:30:22.624 7620 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-26945167-8663-4e5a-98bd-b59317fd92f0 - - - - -] Configuring tunnel endpoints to other OVS agents
2016-06-17 03:47:38.349 7620 INFO neutron.agent.securitygroups_rpc [req-14c58c3d-75b0-44fc-bd50-95b58313df22 1648b526d6654e49a84e005c332ed2e0 c18e4cfbbda34172b80904eddb8a82ee - - -] Security group member updated [u'9ba4203a-e7ae-4258-ac40-3741744130b3']
2016-06-17 03:47:40.038 7620 INFO neutron.agent.securitygroups_rpc [req-07444027-96a0-4a1c-8413-55dc712e2cd2 a3c4ed20bba444d68c9c937b52b72360 fe8dcc08759e4be595dd16fee82ac622 - - -] Security group member updated [u'9ba4203a-e7ae-4258-ac40-3741744130b3']
2016-06-17 03:47:41.226 7620 INFO neutron.agent.common.ovs_lib [req-26945167-8663-4e5a-98bd-b59317fd92f0 - - - - -] Port 5633d73a-8c89-43e0-94a2-1efc7638f7ff not present in bridge br-int
2016-06-17 03:47:41.227 7620 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-26945167-8663-4e5a-98bd-b59317fd92f0 - - - - -] port_unbound(): net_uuid None not in local_vlan_map
2016-06-17 03:47:41.227 7620 INFO neutron.agent.securitygroups_rpc [req-26945167-8663-4e5a-98bd-b59317fd92f0 - - - - -] Remove device filter for [u'5633d73a-8c89-43e0-94a2-1efc7638f7ff']

==> /var/log/neutron/ovs-cleanup.log <==
2016-06-16 16:49:05.292 3904 INFO oslo_rootwrap.client [-] Spawned new rootwrap daemon process with pid=3913
2016-06-16 16:49:05.320 3904 INFO neutron.cmd.ovs_cleanup [-] Cleaning bridge: br-int
2016-06-16 16:49:05.332 3904 INFO neutron.cmd.ovs_cleanup [-] OVS cleanup completed successfully
2016-06-16 16:49:05.334 3904 INFO oslo_rootwrap.client [-] Stopping rootwrap daemon process with pid=3913
2016-06-16 16:49:35.098 1162 INFO neutron.common.config [-] Logging enabled!
2016-06-16 16:49:35.118 1162 INFO neutron.common.config [-] /usr/bin/neutron-ovs-cleanup version 8.1.0
2016-06-16 16:49:35.627 1162 INFO oslo_rootwrap.client [-] Spawned new rootwrap daemon process with pid=2320
2016-06-16 16:49:35.667 1162 INFO neutron.cmd.ovs_cleanup [-] Cleaning bridge: br-int
2016-06-16 16:49:35.680 1162 INFO neutron.cmd.ovs_cleanup [-] OVS cleanup completed successfully
2016-06-16 16:49:35.681 1162 INFO oslo_rootwrap.client [-] Stopping rootwrap daemon process with pid=2320

==> /var/log/nova/nova-compute.log <==
2016-06-17 03:52:49.635 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:49.662 9197 INFO nova.compute.resource_tracker [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Auditing locally available compute resources for node domain-c48.BA5D3D91-414D-4276-B679-0B2A4920D49B
2016-06-17 03:52:49.662 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:49.678 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:49.708 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:49.709 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:49.722 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:50.196 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:50.212 9197 DEBUG oslo_vmware.api [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Waiting for function oslo_vmware.api._invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:122
2016-06-17 03:52:50.213 9197 DEBUG nova.compute.resource_tracker [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Hypervisor: free VCPUs: 12 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:632
2016-06-17 03:52:50.213 9197 DEBUG nova.compute.resource_tracker [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Hypervisor/Node resource view: name=domain-c48.BA5D3D91-414D-4276-B679-0B2A4920D49B free_ram=32329MB free_disk=1733GB free_vcpus=12 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:654
2016-06-17 03:52:50.213 9197 DEBUG oslo_concurrency.lockutils [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2016-06-17 03:52:50.283 9197 INFO nova.compute.resource_tracker [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Total usable vcpus: 12, total allocated vcpus: 0
2016-06-17 03:52:50.283 9197 INFO nova.compute.resource_tracker [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Final resource view: name=domain-c48.BA5D3D91-414D-4276-B679-0B2A4920D49B phys_ram=67996MB used_ram=512MB phys_disk=2794GB used_disk=0GB total_vcpus=12 used_vcpus=0 pci_stats=[]
2016-06-17 03:52:50.312 9197 INFO nova.compute.resource_tracker [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Compute_service record updated for vmnode.tictoc.local:domain-c48.BA5D3D91-414D-4276-B679-0B2A4920D49B
2016-06-17 03:52:50.312 9197 DEBUG oslo_concurrency.lockutils [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.099s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2016-06-17 03:52:56.298 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:57.621 9197 DEBUG oslo_service.periodic_task [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2016-06-17 03:52:57.621 9197 DEBUG nova.compute.manager [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:5733
2016-06-17 03:52:57.621 9197 DEBUG nova.compute.manager [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:5737
2016-06-17 03:52:57.651 9197 DEBUG nova.compute.manager [req-cbfc8e41-a54b-464b-937f-b2178f9e61f3 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:5809
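
Since the resource tracker clearly sees 12 free vCPUs and plenty of RAM and disk on the cluster, I also want to double-check the image and flavor themselves, in case a disk_format or min_disk/min_ram mismatch with m1.nano is what makes the scheduler reject every host:

glance image-show ed9c81a4-97b0-4936-b880-b0ae511f9b47
nova flavor-show m1.nano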

Everything looks good on the services side:

[root@vmnode ~(keystone_admin)]# systemctl list-unit-files | grep openstack
openstack-ceilometer-compute.service    enabled
openstack-ceilometer-polling.service    disabled
openstack-nova-compute.service          enabled
[root@vmnode ~(keystone_admin)]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service              disabled
neutron-l3-agent.service                disabled
neutron-linuxbridge-cleanup.service     disabled
neutron-metadata-agent.service          disabled
neutron-netns-cleanup.service           disabled
neutron-openvswitch-agent.service       enabled
neutron-ovs-cleanup.service             enabled
neutron-server.service                  disabled
[root@vmnode ~(keystone_admin)]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service             enabled
[root@vmnode ~(keystone_admin)]#
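
The one thing I have not posted yet is the VMware section of nova.conf on vmnode. For anyone comparing, it follows the usual VMwareVCDriver layout; the values below are placeholders rather than my real ones:

[root@vmnode ~(keystone_admin)]# grep -A5 '^\[vmware\]' /etc/nova/nova.conf
[vmware]
host_ip = <vcenter-ip>
host_username = <vcenter-username>
host_password = <vcenter-password>
cluster_name = <vcenter-cluster-name>

and compute_driver = vmwareapi.VMwareVCDriver is set in the [DEFAULT] section.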