
multi-hypervisor deployment

asked 2019-03-04 14:14:43 -0500

arbermejo

updated 2019-03-11 08:47:48 -0500

I have a minimal deployment with 4 nodes (controller, compute (KVM), object storage, block storage) which I made manually following the official OpenStack docs, and I'm adding a new compute node (LXC). I followed the instructions in the Compute Service installation section of the docs and added my new compute node, but when I run nova-status upgrade check I get this output:

Check: Resource Providers
Result: Warning
Details: There are 1 compute resource providers and 2 compute nodes
in the deployment. Ideally the number of compute resource providers
should equal the number of enabled compute nodes; otherwise the
cloud may be underutilized.
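The warning means the new node never registered itself as a resource provider in Placement, so the scheduler cannot place anything on it. The comparison nova-status performs can be sketched like this (the host names and lists below are assumptions standing in for the real output of `openstack resource provider list` and `openstack hypervisor list`):

```python
# Simulated listings; on a real cloud these would come from:
#   openstack resource provider list -f value -c name
#   openstack hypervisor list -f value -c "Hypervisor Hostname"
resource_providers = ["controller"]           # only the KVM node registered
compute_nodes = ["controller", "nova-lxc"]    # both compute nodes exist

# Mirror of the nova-status "Resource Providers" check
if len(resource_providers) != len(compute_nodes):
    missing = set(compute_nodes) - set(resource_providers)
    print(f"Warning: {len(resource_providers)} resource providers vs "
          f"{len(compute_nodes)} compute nodes; not registered: {missing}")
```

If a node shows up in the hypervisor list but not in Placement, its nova-compute log usually explains why the registration failed.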

I am also trying to deploy a new instance on the LXC hypervisor.

To do this I added an image from the official Ubuntu image repository and used this command to load it into my environment:

openstack image create --public --disk-format raw --container-format bare --file bionic-server-cloudimg-amd64.img bionic
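One thing worth double-checking here: the stock bionic-server-cloudimg-amd64.img from the Ubuntu repository is a qcow2 file, so uploading it with --disk-format raw may mislabel it. A qcow2 file can be recognized by its first four magic bytes; the sketch below fabricates a minimal header for demonstration (on a real system you would read the first bytes of the downloaded image instead):

```python
# A qcow2 image starts with the magic bytes 0x51 0x46 0x49 0xfb ("QFI\xfb").
QCOW2_MAGIC = b"QFI\xfb"

# Fabricated sample header, demo only; not a real disk image.
with open("/tmp/sample.img", "wb") as f:
    f.write(QCOW2_MAGIC + b"\x00" * 60)

with open("/tmp/sample.img", "rb") as f:
    header = f.read(4)

detected = "qcow2" if header == QCOW2_MAGIC else "raw (or other)"
print(f"detected disk format: {detected}")
```

If the image really is qcow2, either upload it with --disk-format qcow2 or convert it first with qemu-img convert.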

I also already created the flavor corresponding to the host aggregate I made for LXC:

openstack flavor set 89cedbec-0eaf-4d75-953d-571932428611 --property aggregate_instance_extra_specs:compute-type=lxc
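For that aggregate_instance_extra_specs property to actually steer scheduling, the scheduler needs the AggregateInstanceExtraSpecsFilter enabled, and the aggregate itself must carry the matching compute-type=lxc metadata. A hedged sketch of the relevant nova.conf section on the controller (the exact filter list is an assumption; keep whatever filters your release enables by default):

```ini
[filter_scheduler]
enabled_filters = AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
```

If the filter is missing from the list, the extra spec on the flavor is silently ignored and the instance can land on any host.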

I think the scheduler is working fine and is trying to deploy my instance on the LXC compute node, but I don't know what I'm missing.

I would really appreciate any help, or a use case of a multi-hypervisor deployment.


OK, this is crazy: you were right, I had a problem with the quotas and I fixed it, but now I'm getting another error in /var/log/nova/nova-compute.log

Hi, just now I tried creating an instance with an ephemeral volume and it didn't work either. I already checked the config files on my LXC compute node and my Cinder node, and I can't see any problem there.

I apologize in advance for the length of the question, but I really don't know where the problem in my configuration is. The only odd line in the logs, to me, is this one:

2019-03-08 16:14:58.569 2438 ERROR nova.compute.manager [req-78f6ee93-7554-48c0-9ed9-8457b674347c a16c6ef0319b4643a4ec8e56a1d025cb 59065d8f970b467aa94ef7b35f1edab5 - default default] [instance: e6bd1d65-f77c-482e-bda5-31df539b3e06] Instance failed to spawn: TypeError: mount() takes exactly 4 arguments (3 given)
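The TypeError itself is a plain Python arity mismatch: somewhere in the LXC spawn path, a mount() method is called with one argument fewer than its definition requires, which usually points at a version mismatch between nova-compute and whatever provides the mount helper. The mechanics can be reproduced in isolation (the class below is a hypothetical stand-in, not nova's actual code):

```python
# Hypothetical stand-in for an image-mount helper; nova's real classes differ.
class Mounter:
    def mount(self, device, mountpoint, fstype):  # 4 parameters counting self
        return f"mounted {device} on {mountpoint} ({fstype})"

m = Mounter()
caught = None
try:
    # A caller written against an older signature passes one argument too few,
    # exactly the shape of "mount() takes exactly 4 arguments (3 given)" in
    # the log (Python 2 wording; Python 3 phrases the same error differently).
    m.mount("/dev/vda", "/mnt/instance")
except TypeError as exc:
    caught = exc
print(type(caught).__name__)
```

So the failure is happening inside nova's own LXC image-mounting code rather than in Cinder, which is consistent with the Cinder logs looking clean.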

Just now I tried to create a new instance on the LXC compute node and checked /var/log/nova/nova-compute.log and /var/log/cinder/cinder-volume.log (on the Cinder node), and there doesn't seem to be any problem there. This is the output for the instance (e6bd1d65-f77c-482e-bda5-31df539b3e06) from the compute node. In short, the analysis of both logs shows that Cinder didn't detect any error.

root@nova-lxc /h/team# tail -n 100 /var/log/nova/nova-compute.log | grep e6bd1d65-f77c-482e-bda5-31df539b3e06
2019-03-08 16:13:57.273 2438 INFO [req-78f6ee93-7554-48c0-9ed9-8457b674347c a16c6ef0319b4643a4ec8e56a1d025cb 59065d8f970b467aa94ef7b35f1edab5 - default default] [instance: e6bd1d65-f77c-482e-bda5-31df539b3e06] Attempting claim on node nova-lxc: memory 2048 MB, disk 5 GB, vcpus 1 CPU
2019-03-08 16:13:57.274 2438 INFO [req-78f6ee93-7554-48c0-9ed9-8457b674347c a16c6ef0319b4643a4ec8e56a1d025cb 59065d8f970b467aa94ef7b35f1edab5 - default default] [instance: e6bd1d65-f77c-482e-bda5-31df539b3e06] Total memory: 16039 MB, used: 512.00 MB
2019-03-08 16:13:57 ...



The scheduler uses the Placement service to determine which compute nodes have enough capacity. Your new node was not registered in the Placement service, so it can’t be used to schedule instances. Check its nova-compute log to find out why.

Bernd Bausch ( 2019-03-04 16:23:16 -0500 )

It's giving me this error: 2019-03-05 06:38:39.139 14661 ERROR nova.compute.manager ResourceProviderCreationFailed: Failed to create resource provider nova-lxc

arbermejo ( 2019-03-05 07:17:58 -0500 )

Second update: The first error indicates incorrect Cinder setup, and the other errors should be consequences of not being able to access the volume on the storage server. Fix Cinder and the Nova-Cinder interaction, and if you have an urgent need to launch instances, use ephemeral storage.

Bernd Bausch ( 2019-03-06 19:57:30 -0500 )

This error looks similar to yours; there is a solution, though I'm not sure if it fits your situation.

Bernd Bausch ( 2019-03-07 00:12:59 -0500 )

I believe the mount error is a consequence of the iscsiadm error that you deleted from your question. I would check whether nova.conf on the compute node is set up correctly for managing iSCSI-based volumes, and generally dig deeper into iSCSI. Kernel message buffer, other log files, ...

Bernd Bausch ( 2019-03-08 17:25:19 -0500 )

2 answers


answered 2019-03-11 08:58:51 -0500

updated 2019-03-11 09:01:04 -0500

mountpoint /dev/vda on host None looks quite incorrect. Does the LXC driver support volumes at all?

It looks like it doesn’t, judging by the hypervisor support matrix.



So that means LXC can't attach volumes to instances on OpenStack. Then how do they work? Can I launch an instance without a block volume? PS: I tried with a flavor with an ephemeral volume and 0 disk, and it didn't work either.

arbermejo ( 2019-03-11 09:37:35 -0500 )

According to the support matrix, OpenStack doesn't support attaching virtual network interfaces either, so what is the use of an LXC instance in it?

arbermejo ( 2019-03-11 09:41:16 -0500 )

Attaching virtual NICs means hotplugging additional interfaces. You can run instances without this feature. And you can run instances without volumes. Just use ephemeral storage. I don't know if it is supposed to work with 0 disks, though.

Bernd Bausch ( 2019-03-11 10:02:11 -0500 )

Then attaching block volumes means adding additional volumes? This is what the matrix says: "The attach volume operation provides a means to hotplug additional block storage to a running instance."

arbermejo ( 2019-03-11 10:53:45 -0500 )

Well, that's what it seems to mean, indeed. Unfortunately, I am not wiser than you (or Google) about it.

Instances without volumes are fine. You can even run your cloud without Cinder. However, instances without network connection don't make sense.

Bernd Bausch ( 2019-03-12 00:29:19 -0500 )

answered 2019-03-05 15:37:41 -0500

The new error indicates that you launch the instance with a volume as root disk, and that the volume can’t be created because the user is over quota.

Delete a few unused volumes, or increase the quota, or use ephemeral storage instead of the volume.
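The over-quota condition can also be checked before launching. The sketch below simulates the comparison; the values are assumptions standing in for the real output of `openstack quota show` and `openstack volume list` for your project:

```python
# Simulated project volume-quota state; on a real cloud, read these from:
#   openstack quota show <project>   (the "volumes" field)
#   openstack volume list --project <project>
volume_quota = 10      # maximum number of volumes allowed
volumes_in_use = 10    # volumes currently counted against the quota

can_create_volume = volumes_in_use < volume_quota
print("over quota" if not can_create_volume else "ok")
```

When the count is already at the limit, a boot-from-volume launch fails before the instance ever reaches the compute node.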
