
Two identical servers show different resources, xen and kvm

asked 2014-06-06 10:07:18 -0500 by danoreilly

updated 2014-06-09 16:16:45 -0500 by smaffulli

The Xen compute node is CentOS 6.5; the KVM compute node is RHEL 6.5. On the Xen node, I see:

 tm1cldcmpl03 QEMU  32  0   920MB   512MB   2.0GB   0Bytes  0

On the KVM node I see:

 tm1cldcmpl02 QEMU  32  5   31GB    3GB 2.0GB   55.0GB  5

These are identical HP ProLiant DL380 Gen8 systems with 32 cores and 32 GB of memory. I can't launch anything on the Xen system because it doesn't think it has enough memory to do anything.

Here's a nova-compute log excerpt:

 2014-06-09 14:51:15.701 3916 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on tm1cldctll01:5672
 2014-06-09 14:52:03.745 3916 AUDIT nova.compute.resource_tracker [req-f1e977f5-fcb5-4878-8b1f-9c176b825646 None None] Auditing locally available compute resources
 2014-06-09 14:52:04.854 3916 AUDIT nova.compute.resource_tracker [req-f1e977f5-fcb5-4878-8b1f-9c176b825646 None None] Free ram (MB): 408
 2014-06-09 14:52:04.854 3916 AUDIT nova.compute.resource_tracker [req-f1e977f5-fcb5-4878-8b1f-9c176b825646 None None] Free disk (GB): 2
 2014-06-09 14:52:04.855 3916 AUDIT nova.compute.resource_tracker [req-f1e977f5-fcb5-4878-8b1f-9c176b825646 None None] Free VCPUS: 32
 2014-06-09 14:52:04.899 3916 INFO nova.compute.resource_tracker [req-f1e977f5-fcb5-4878-8b1f-9c176b825646 None None] Compute_service record updated for tm1cldcmpl03:tm1cldcmpl03
 2014-06-09 14:52:12.253 3916 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
 2014-06-09 14:52:13.345 3916 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 408
 2014-06-09 14:52:13.345 3916 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 2
 2014-06-09 14:52:13.345 3916 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 32
 2014-06-09 14:52:13.385 3916 INFO nova.compute.resource_tracker [-] Compute_service record updated for tm1cldcmpl03:tm1cldcmpl03

The xl info output:

 # xl info
 host                   : tm1cldcmpl03
 release                : 3.10.34-11.el6.centos.alt.x86_64
 version                : #1 SMP Fri Mar 28 00:57:43 UTC 2014
 machine                : x86_64
 nr_cpus                : 32
 max_cpu_id             : 63
 nr_nodes               : 2
 cores_per_socket       : 8
 threads_per_core       : 2
 cpu_mhz                : 2893
 hw_caps                : bfebfbff:2c100800:00000000:00003f40:17bee3ff:00000000:00000001:00000000
 virt_caps              : hvm hvm_directio
 total_memory           : 32733
 free_memory            : 31280
 sharing_freed_memory   : 0
 sharing_used_memory    : 0
 free_cpus              : 0
 xen_major              : 4
 xen_minor              : 2
 xen_extra              : .4-30.el6
 xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
 xen_scheduler          : credit
 xen_pagesize           : 4096
 platform_params        : virt_start=0xffff800000000000
 xen_changeset          : unavailable
 xen_commandline        : dom0_mem=1024M,max:1024M loglvl=all guest_loglvl=all
 cc_compiler            : gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
 cc_compile_by          : mockbuild
 cc_compile_domain      : centos.org
 cc_compile_date        : Thu Mar 27 23:03:24 UTC 2014
 xend_config_format     : 4

And why would the number of vCPUs in use be different?
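One pattern in the numbers above is worth a sanity check (an observation from the figures, not a confirmed diagnosis): on the Xen node, nova-compute is reporting a memory total that fits inside dom0's 1 GB allocation (dom0_mem=1024M on the Xen command line), not the 32 GB the hypervisor itself reports. The arithmetic, with all figures copied from the output above:

```python
# All figures copied from the question; variable names are my own.
dom0_mem_mb   = 1024    # xen_commandline: dom0_mem=1024M,max:1024M
xl_total_mb   = 32733   # xl info: total_memory
nova_total_mb = 920     # Xen node row: 920MB total memory
nova_free_mb  = 408     # nova-compute log: Free ram (MB): 408

# nova's reported total fits inside dom0's 1 GB allocation...
assert nova_total_mb <= dom0_mem_mb

# ...while the hypervisor has roughly 31 GB that nova never sees.
unseen_mb = xl_total_mb - nova_total_mb
print(f"memory invisible to nova: {unseen_mb} MB")
```

That would suggest nova is auditing dom0 as if it were the whole host, rather than asking the hypervisor.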


Comments

Could you provide the xl info (xm info) output, along with dmesg > dmesg.log, after loading the Xen Dom0?

DanIzack ( 2014-06-06 13:16:43 -0500 )

1 answer


answered 2014-06-09 23:09:38 -0500 by DanIzack

updated 2014-06-09 23:11:14 -0500

The Xen hypervisor clearly reports:

 total_memory           : 32733
 free_memory            : 31280

which means there are about 31 GB available for Xen DomUs. The memory allocated to QEMU is not important here: on a Xen node, VM memory allocation and its further management are handled by Xen itself.
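One thing worth checking in this situation: both hypervisor rows in the question report type "QEMU", which suggests nova's libvirt driver on the Xen node may be connecting to qemu:///system rather than to Xen, and therefore auditing dom0 as if it were the whole host. A sketch of the nova.conf settings to verify on the compute node (option names as of the Icehouse era; older releases used libvirt_type = xen under [DEFAULT], so check against your release's documentation):

```
# /etc/nova/nova.conf on the Xen compute node -- a sketch, not a
# confirmed fix; verify option names against your OpenStack release
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen
```

After changing this, restarting nova-compute and re-checking the resource tracker's "Free ram" audit lines would show whether the node now reports hypervisor-wide memory.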

Comments

The problem is making OpenStack aware that this particular compute node is under Xen management. Nova on the controller is missing the Xen plugin, I believe.

DanIzack ( 2014-06-09 23:27:44 -0500 )

Which specific package is that? My controller is RHEL 6.5 while the compute node is CentOS 6.5, in case that matters.

danoreilly ( 2014-06-10 08:47:16 -0500 )
