# [Havana][Nova] libvirtError: internal error: kvm binary is deprecated on compute node with ceph [closed]

Hello,

the complete error message is this:

libvirtError: internal error: process exited while connecting to monitor: W: kvm binary is deprecated, please use qemu-system-x86_64 instead


At least, that is the part that caught my attention. The trace actually points to several different errors:

1. libvirtError: internal error: process exited while connecting to monitor: W: kvm binary is deprecated, please use qemu-system-x86_64 instead
2. Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
3. open /var/lib/nova/instances/2eb2259d-f44a-402b-8d0d-facf8a3f37de/console.log: Permission denied
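Error 3 usually means the ownership of the instance directory does not match the user qemu runs as. A quick way to check is to list who owns each console.log (a hypothetical helper script, not part of nova; the default path is taken from the error above):

```python
# Hypothetical helper (not part of nova): list the owner of every
# console.log under the instances directory, to spot files the
# qemu/libvirt user cannot write to.
import os
import pwd
import sys

def console_log_owners(instances_dir):
    """Yield (path, owner name) for each console.log below instances_dir."""
    if not os.path.isdir(instances_dir):
        return
    for root, _dirs, files in os.walk(instances_dir):
        if 'console.log' in files:
            path = os.path.join(root, 'console.log')
            owner = pwd.getpwuid(os.stat(path).st_uid).pw_name
            yield path, owner

if __name__ == '__main__':
    target = sys.argv[1] if len(sys.argv) > 1 else '/var/lib/nova/instances'
    for path, owner in console_log_owners(target):
        print(owner, path)
```

Anything here not owned by the expected user would explain the Permission denied.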

Others suggest changing the path in the emulator tag of the XML definition file: http://superuser.com/questions/606923/error-while-creating-a-vm-using-kvm
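The emulator path lives in each instance's libvirt domain XML. A small sketch (hypothetical helper; the sample XML is made up for illustration) that dumps the emulator path and the disk sources from such a file, so a stale /usr/bin/kvm or a wrong rbd source stands out:

```python
# Inspect a libvirt domain XML: print the <emulator> path and every
# <disk><source> attribute set.
import xml.etree.ElementTree as ET

SAMPLE = """\
<domain type='kvm'>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='network' device='disk'>
      <source protocol='rbd' name='volumes/volume-0001'/>
    </disk>
  </devices>
</domain>
"""

def inspect_domain_xml(xml_text):
    """Return (emulator_path, [disk source attribute dicts])."""
    root = ET.fromstring(xml_text)
    emulator = root.findtext('./devices/emulator')
    sources = [d.find('source').attrib
               for d in root.findall('./devices/disk')
               if d.find('source') is not None]
    return emulator, sources

emulator, sources = inspect_domain_xml(SAMPLE)
print('emulator:', emulator)   # /usr/bin/kvm is the deprecated Ubuntu wrapper
for src in sources:
    print('disk source:', src)
```

On Ubuntu, an emulator of /usr/bin/kvm is the deprecated wrapper that prints the warning above; /usr/bin/qemu-system-x86_64 is what it suggests instead.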

The only such file I could dig up in the OpenStack tree was this:

/usr/share/pyshared/nova/compute/fakevirtinstance.xml

And changing it and restarting nova-compute did not solve the issue.

This is a setup with OpenStack deployed on physical servers, including Ceph as block storage. The setup worked until I

1. updated the compute nodes (although nothing KVM- or OpenStack-specific was on the package list)
2. added some more Ceph parameters

The latter was done because I figured I had missed some. As said, it worked before; i.e. I could see the number of objects increasing after I created a volume etc. (but that is rather Cinder's job, isn't it . . . and that part still works).

Anyway, creating a new instance based on a volume failed with complaints about a missing keyring. I fixed that by adding

env CEPH_ARGS="--id volumes"


to /etc/init/nova-compute.conf, but now I end up with the error messages above. I'm not sure whether this is Ceph-related or whether I'm missing some config parameters. Any help is appreciated.
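For reference, the line ended up in the job roughly like this (the surrounding stanzas are a sketch of a stock Ubuntu nova-compute Upstart job, abbreviated, not my exact file):

```
# /etc/init/nova-compute.conf (sketch of a stock Ubuntu Upstart job)
description "OpenStack Compute Worker"

start on runlevel [2345]
stop on runlevel [!2345]

# added: make the cephx identity explicit so librados stops
# falling back to client.admin ("volumes" is our cinder cephx user)
env CEPH_ARGS="--id volumes"

exec start-stop-daemon --start --chuid nova \
    --exec /usr/bin/nova-compute -- --config-file=/etc/nova/nova.conf
```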

nova-compute.conf

http://paste.openstack.org/show/61582/

nova.conf

http://paste.openstack.org/show/61583/

complete trace:

http://paste.openstack.org/show/61585/

The reason I'm not sure this is a Ceph issue: if I remove the CEPH_ARGS line from the Upstart script, the last error message again becomes:

Stderr: "2014-01-21 11:05:53.645059 7fdc0f6f6780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication\n2014-01-21 11:05:53.645065 7fdc0f6f6780  0 librados: client.admin initialization error (2) No such file or directory\nrbd: couldn't connect to the cluster!\n"


But prior to that, the errors are the same: Permission denied on /var/lib/nova/instances/UUID/console.log, and the "kvm binary is deprecated" error.

What is also strange: without the CEPH_ARGS parameter, the connection is attempted as client.admin.

Nevertheless, as user nova I can only map a volume to a block device using sudo. So another question, perhaps for another thread: why does nova default to using client.admin?
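(On the client.admin question: librados falls back to client.admin whenever no user is supplied in the connection info, so the usual fix from the Ceph/OpenStack integration guides is to set the cephx user on the Cinder side instead of via CEPH_ARGS. A sketch, with our pool/user names assumed:)

```
# cinder.conf on the cinder-volume node (sketch; pool/user names assumed)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <uuid-of-the-libvirt-secret>
```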

Yet a major difference seems to be that with CEPH_ARGS the whole creation process at least gets to "Block Device ...


### Closed for the following reason "too localized" by TypoPhil, close date 2014-01-24 09:18:03.214251

In addition, using nova volume-create (from a compute node) does create a volume in the pool. It definitely looks as though it is the mapping of the block device that fails, possibly due to permission errors.

(2014-01-21 06:50:40 -0600)

I'm having a similar issue to this. Can you look inside /var/lib/nova/instances/<instance_id>/libvirt.xml and check whether the disk source is correct?

(2014-01-21 09:54:58 -0600)

The problem is that nova tries to create the instance, retries, and then stops in an error state. The instance information is stored in the database, but there is nothing on the compute node. I'm reinstalling one compute node at the moment to see if things change.

(2014-01-22 01:24:06 -0600)

I'll close this. I reinstalled the second compute node and now it works just fine again. If the first one starts to behave the same way, I'll just copy over the configs, fix them up, and restart the services. Thanks so far.

(2014-01-24 09:17:01 -0600)