eblock's profile - activity

2019-02-22 08:18:01 -0600 commented answer How to enable audio on my instance?

This thread is rather old, but I'm still struggling with sound on a Windows VM. If I add <sound model='ich6'> <alias name='sound0'/> <address [...]/> </sound> to the XML of a running instance, it works only if openstack is not involved. Is hacking the domain XML still the only way?
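
For reference, this is roughly how I test it outside of openstack (the file name is mine, the alias and address elements are generated by libvirt; nova may discard the device on the next hard reboot):

# sound.xml contains just: <sound model='ich6'/>
virsh attach-device <INSTANCE_NAME> sound.xml --live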

2019-02-22 04:19:00 -0600 commented question SSH key gets Injected to wrong user by cloud-init

Can you share more of your cloud.cfg? Do you attach any user-data file to the instance? I also recommend deleting all state from previous cloud-init runs before uploading the VM as a new base image.
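
By that I mean something like the following inside the VM before shutting it down (cloud-init clean is available in recent cloud-init releases):

cloud-init clean --logs
# or, on older releases, manually:
rm -rf /var/lib/cloud/*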

2019-02-22 04:16:09 -0600 commented question Rocky on CentOS7 - instance creation stuck in BUILD status

What is your storage backend for instances and volumes?

2019-02-22 04:15:51 -0600 commented question Rocky on CentOS7 - instance creation stuck in BUILD status

The "Final resource" message is not really related to a stuck build. If the compute node doesn't show any errors then the nova api doesn't even reach the point to actually build the instance. So you should look at nova-conductor and nova-scheduler logs on the control node, maybe also neutron logs.

2019-02-20 03:32:29 -0600 commented question ceph error during overcloud deploy

Unfortunately, I don't have experience with tripleo so I don't have an answer to that.

2019-02-20 03:28:38 -0600 commented question I have 8 Physical host i want to setup OOO openstack setup with Ceph. Deatils as Below.

For a production environment I would always recommend using dedicated servers; mixing services like openstack and ceph is not a good idea. The performance impact during Ceph recovery can be huge, especially if the compute nodes also run VMs.

2019-02-20 03:22:00 -0600 commented question Instance in compute node cannot get ip from dhcp

Did you enable DHCP in the base image? Have you verified that the subnet has DHCP enabled? openstack subnet show <SUBNET_ID> | grep enable_dhcp
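
If it turns out that DHCP is disabled, enabling it should be as simple as:

openstack subnet set --dhcp <SUBNET_ID>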

2019-02-19 02:45:30 -0600 commented question ceph error during overcloud deploy

Have you tried to execute the ceph-volume command manually? The --prepare option is recognized in my Luminous cluster (12.2.10). But IIRC the 12.2.9 release you seem to be using was a very bad one. Do you have the option to use a later release? 12.2.11 has been released recently.
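
By manual execution I mean something along these lines (the device path is just a placeholder):

ceph-volume lvm prepare --data /dev/sdb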

2019-02-18 02:42:25 -0600 commented question Receiving error NetworkNotFoundClient

What about neutron linuxbridge-agent logs from compute node? Can you check them, too? According to the docs the v3 entries are not required anymore, so I don't see that as a potential problem.

2019-02-18 02:04:22 -0600 commented question instance creation stuck on 'BUILD' state

What about nova-conductor.log? Are all nova and neutron services up and running?

2019-02-14 10:03:30 -0600 commented question How to bring up multiple interfaces up with cloud-init

The user-data can contain all kinds of commands: create network configs, restart services, and so on. For example, we use it to enable salt-minion with a startup config; that minion registers itself at the salt-master and receives further configuration, so everything we need is done via salt.
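
A minimal user-data sketch in that spirit (the service name is just an example from our setup):

#cloud-config
runcmd:
  - systemctl enable --now salt-minion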

2019-02-14 10:01:52 -0600 commented question How to bring up multiple interfaces up with cloud-init

I might be wrong, but my understanding is that cloud-init executes an initial configuration at first boot, just some basic stuff. The rest has to be handled by salt, puppet, or other configuration management tools, or by the admin user. You can add user-data for cloud-init to run after boot.

2019-02-14 08:01:39 -0600 commented question Receiving error NetworkNotFoundClient

Either your neutron config or, more likely, the neutron section in the nova config(s) is probably wrong. Can you verify that the network id actually exists? Run openstack network show 958824f7-cbf7-41b3-9700-d7c24860fa82 and paste the output. Also check the neutron logs for more hints.

2019-02-14 04:04:28 -0600 commented question What is necessary to make Neutron changes?

Can you show output of openstack network agent list? You can edit your question and add the output to it. Also paste the output of systemctl status openstack-neutron-linuxbridge-agent.service and check /var/log/neutron/neutron-linuxbridge-agent.log. Is the device mapping valid on your nodes?

2019-02-14 02:45:55 -0600 commented question Openstack compute node not recognized as hypervisor

So the compute node is in fact discovered. Next, try running an instance on it:

openstack server create --image <IMAGE> --flavor <FLAVOR> --nic net-id=<NET_ID> --availability-zone nova:openstack-2 <VM_NAME>

Monitor nova logs on control and compute node during this to see if the instance fails.

2019-02-13 10:28:50 -0600 commented question Openstack compute node not recognized as hypervisor

Can you also show openstack compute service list? Did you run the nova-manage discover_hosts and related commands? I'm not familiar with devstack, to be honest, so it could be something related to that.
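
For reference, the discovery command I mean (run on the control node):

nova-manage cell_v2 discover_hosts --verbose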

2019-02-13 09:53:54 -0600 commented question Openstack compute node not recognized as hypervisor

Please answer all questions if someone tries to help ;-) Is the nova-compute service up and running on the new compute node? Paste the output of systemctl status nova-compute from the new compute node into your question. Is the nova.conf correct for the second node? Have you checked the IP addresses?

2019-02-13 06:02:07 -0600 commented question Openstack compute node not recognized as hypervisor

Can you rephrase your question? You state that you added a compute node and it's not present in the list, but also that the compute node is present; that doesn't make sense right now. Did you add a second compute node but only the first is visible? Have you checked the nova-compute service?

2019-02-13 05:59:54 -0600 commented question What is necessary to make Neutron changes?

Usually it's sufficient to restart the neutron services, but a reboot will do the trick, too. Do you see errors after the reboot? Did you correct neutron configs and restart neutron services on the compute nodes, too?
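
By restarting the neutron services I mean something like this (the agent unit name is taken from your output; exact names depend on your distribution):

# on the compute nodes
systemctl restart openstack-neutron-linuxbridge-agent.service
# on the control node
systemctl restart neutron-server.service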

2019-02-11 09:37:42 -0600 commented answer attach HDD to /dev/sda?

Sure:

openstack volume set --image-property hw_scsi_model=virtio-scsi --image-property hw_disk_bus=scsi <IMAGE_ID>
2019-02-11 07:09:05 -0600 commented question Extending an in-use cinder volume?

Have you verified that your openstack version installed by packstack is actually Pike or newer? What does the output of nova-manage --version show? Pike version is 16.X.

2019-02-08 08:46:21 -0600 commented answer attach HDD to /dev/sda?

Perfect, that works like a charm! Instead of changing the volume metadata (the properties field in the volume show output) I have to change the volume_image_metadata; that gives me an sda disk. So both described workflows are successful.
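
For the record, changing the volume_image_metadata can be done with the cinder client, something like:

cinder image-metadata <VOLUME_ID> set hw_scsi_model=virtio-scsi hw_disk_bus=scsi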

2019-02-08 05:29:21 -0600 commented answer attach HDD to /dev/sda?

@Bernd are there cinder properties (except bus=scsi) that would make the booted instance (from that volume) use sda instead of vda? I tried a couple of different things; none of them led to an sda device name except uploading the volume as an image and adding the mentioned properties.

2019-02-08 05:27:30 -0600 commented answer attach HDD to /dev/sda?

Please note that in my test the installation worked (ISO as vda, volume as sda) and the instance can be started successfully. But you'll have to upload the volume as an image and add the scsi properties; launching the volume directly also works, but only with a vda device name.

2019-02-08 04:53:01 -0600 commented answer attach HDD to /dev/sda?

@Bernd's approach also works: the ISO is attached as vda, the volume as sda, so the installation on the volume will probably work.

2019-02-07 03:35:40 -0600 commented question ceph block size

I forgot to mention that there is no central bottleneck, since every client talks to its "own" OSD daemons directly. A single request may not be faster, but the cluster handles way more requests in parallel.

2019-02-07 03:30:58 -0600 answered a question attach HDD to /dev/sda?

I just tried the following with Ocata (and Ceph as backend): I added the image properties hw_scsi_model=virtio-scsi and hw_disk_bus=scsi to an ISO image. Launching that ISO without attached disks results in a /dev/sda device name in the partitioner. Then I attached two new (empty) volumes to that instance; they are recognized as sdb and sdc.

EDIT: Here's the command I'm using to create an instance from an ISO with two attached volumes. Please note: this worked for me only with the already mentioned image properties on the ISO. Without them, the build fails because libvirt tries to allocate one volume as vda and the other as sda. This could be a result of the bug report I filed almost three years ago, which is still open.

nova boot --block-device source=image,id=<ISO_IMAGE>,dest=volume,size=5,shutdown=preserve,device=sda,bootindex=0 --block-device source=blank,dest=volume,size=5,shutdown=preserve,device=sdb,bootindex=1 --nic net-id=<NET_ID> --flavor 2 test1

This results in a running instance installing from the ISO, with two volumes attached.

2019-02-06 10:03:48 -0600 commented question HyperV Based Windows VM Migration to OpenStack (Rocky).

There is lots of information on Google ;-) For example, look at this brief description. Basically, you have to check that your current instances have the right disk type etc., then upload them as glance images into your openstack environment.
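
The disk conversion itself can be done with qemu-img before the upload; roughly like this (paths, formats, and names are placeholders):

qemu-img convert -f vhdx -O raw windows-vm.vhdx windows-vm.raw
openstack image create --disk-format raw --container-format bare --file windows-vm.raw <IMAGE_NAME>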

2019-02-06 09:56:32 -0600 commented question ceph block size

It's not really faster because you have the replication, network latency, and some more limitations. But as @Bernd already states, the design is built for reliability so your data is still available even if some of your hard disks fail.

2019-02-06 09:52:20 -0600 commented question Migrate VM from KVM to openstack-KVM

Not sure if I understand, but you can upload the images from your source environment as glance images into your openstack environment (openstack image create --file /path-to-vm/vm.image <IMAGE_NAME>), then launch instances from those glance images via openstack. Is that what you're asking?

2019-02-06 09:47:33 -0600 commented answer attach HDD to /dev/sda?

Or you create that image with these properties: openstack image create --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi ... <IMAGE>

2019-02-06 09:46:53 -0600 commented answer attach HDD to /dev/sda?

You can also set global properties for that image (after upload) with openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <IMAGE>, all instances are then started with /dev/sd* as hard disk.

2019-02-06 02:53:31 -0600 commented answer ceph node recover block

I'm not sure what you mean by "ceph file system". There is CephFS (the Ceph Filesystem), which you can mount to provide POSIX-compliant shared space (like NFS) to clients, but that's optional and probably not what you're asking. Ceph is software-defined storage, so you run Ceph on top of the OS.

2019-02-06 02:51:06 -0600 commented answer ceph node recover block

I think you should familiarize yourself with Ceph, otherwise this will get out of hand. Ceph OSDs (where the data is stored) don't have a filesystem anymore; they used to be on XFS, but Bluestore is the way to go now, although it's still possible to use filestore OSDs.

2019-02-05 04:52:34 -0600 received badge: Notable Question
2019-02-05 02:19:02 -0600 commented question Non admin users unable to create cinder volume backup

I'm running out of ideas. If domain_specific_drivers_enabled is not used but the other commands work, it's probably something else.

2019-02-04 06:07:41 -0600 commented question Non admin users unable to create cinder volume backup

What about the non-admin user? Are the backup commands available for him? Did you try the other commands? Is there something else missing?

2019-02-04 06:05:34 -0600 answered a question ceph node recover block

That's not how Ceph works. A ceph cluster consists of multiple nodes, each with one or more OSDs. Your data (an image) is striped into many objects, typically 4 MB in size; these objects are mapped to Placement Groups (PGs), which belong to a pool and are stored on different OSDs for resiliency. In case of a replicated pool with size 3, each PG is stored on three different OSDs. A client (e.g. glance) only has to read from the primary OSD, while a write has to be persisted on all three OSDs, so reads are faster than write operations. Just to give you an example (there's only one image in this pool):

# glance image
control:~ # openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 4578b1bd-fe9a-4547-9bc5-97372f0a5721 | Cirros | active |
+--------------------------------------+--------+--------+

# list pool content
ceph-2:~ # rbd -p glance8 ls
4578b1bd-fe9a-4547-9bc5-97372f0a5721

ceph-2:~ # rados df | grep glance8
POOL_NAME               USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS      RD WR_OPS      WR
glance8               79 MiB      16      0     48                  0       0        0 294616 468 MiB   5264 162 MiB

# show PG placement
ceph-2:~ # ceph pg ls-by-pool glance8 | tr -s ' ' | cut -d " " --fields=1,2,7,13 | column -t
PG    OBJECTS  LOG  ACTING
19.0  1        54   [7,3,0]p7
19.1  1        2    [6,1,7]p6
19.2  0        0    [3,0,7]p3
19.3  1        55   [2,0,3]p2
19.4  2        83   [6,7,1]p6
19.5  2        123  [7,6,1]p7
19.6  2        98   [1,7,6]p1
19.7  2        53   [6,1,7]p6
19.8  0        84   [6,2,1]p6
19.9  1        98   [3,6,7]p3
19.a  2        4    [2,6,3]p2
19.b  2        51   [1,7,0]p1

So as you can see, the single image is divided into 16 objects, placed in different PGs and on different OSDs. If one object/PG/OSD gets corrupted somehow, Ceph will try to recover from the remaining healthy copies. One piece of advice regarding the replication size: avoid using only 2 copies (except for tests), it will get you into trouble sooner or later.
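
If you need to adjust that, the replication size can be changed per pool at runtime, e.g.:

ceph osd pool set <POOL_NAME> size 3
ceph osd pool set <POOL_NAME> min_size 2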

2019-02-01 09:04:06 -0600 commented answer nova can't create instances using ceph

Here is a more detailed answer.

2019-02-01 09:03:42 -0600 commented answer nova can't create instances using ceph

Even if you use Ceph as backend, it can still result in a local base image on the compute node(s). You have to disable the image cache by setting cache_images = none in /etc/nova/nova.conf on your compute nodes, then delete the images in /var/lib/nova/instances/_base to clear the cache.
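
The relevant snippet would look roughly like this (I assume the option lives in the [DEFAULT] section); restart the compute services afterwards:

# /etc/nova/nova.conf on the compute nodes
[DEFAULT]
cache_images = none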

2019-02-01 09:00:08 -0600 edited answer nova ceph copy-on-write images doesn't work even if it's configured to

The ceph docs state:

show_multiple_locations = True
show_image_direct_url = True

This only seems to be relevant for Mitaka; for other releases only show_image_direct_url = true is necessary. But I'm running Ocata, and just recently I changed show_multiple_locations to false, since the config description reports it as deprecated. But then I got error messages from nova during (live) snapshot creation:

ImageNotAuthorized: Not authorized for <IMAGE>

and had to switch that option back to true. Now the snapshots are working again. I'm not sure if this is a bug in Ocata or if I missed something during the upgrades from Mitaka to Ocata.

EDIT: I found the problem in my environment. The solution is here. Setting cache_images = none on all compute nodes (/etc/nova/nova.conf) disables the local image cache in the filesystem of the compute nodes (/var/lib/nova/instances/_base).

I believe my issue was the following: I had launched a couple of instances with a wrong base image property (not raw). So the compute node(s) created a base file locally and uploaded the flat image back to Ceph. After fixing the raw format issue, nothing changed for the compute nodes; for some reason they still needed a local copy, maybe because of some metadata somewhere, I couldn't really find out. Anyway, after I found the mentioned thread (actually I'm wondering why it took me so long, it seems pretty obvious now) I disabled the image cache and restarted all compute services. Launching a new instance from the same base image no longer results in a flat image; it's a copy-on-write clone now, just like it's supposed to be.

2019-02-01 08:37:45 -0600 commented answer remove image caching on nova-compute?

Just to update this topic: setting cache_images = none in /etc/nova/nova.conf disables the local image cache on the compute node(s). It saved my day!

2019-02-01 08:35:27 -0600 commented answer Prefetched and cached images in Glance

This solved my problem!

2019-02-01 08:34:56 -0600 answered a question Image caching on compute nodes

I had an image cache issue with the Ceph backend: because of a previously incorrect image property, images were cached locally on the compute nodes (despite shared storage), which negated the advantages of fast copy-on-write clones. Fixing the image's property and deleting the cache (/var/lib/nova/instances/_base) didn't solve anything. To let others find it faster than I did: this answer solved my problem. Setting cache_images = none in /etc/nova/nova.conf on all compute nodes did the trick.

2019-02-01 05:33:20 -0600 commented question Non admin users unable to create cinder volume backup

Can you verify that the other openstack commands work as expected? I would try all the main services (openstack server | image | network | compute etc.) to see if anything fails. It's just a guess, but I wanted to share it because the behavior was very similar.

2019-02-01 05:31:18 -0600 commented question Non admin users unable to create cinder volume backup

While that option was missing I couldn't run openstack domain list; I only got a list of other commands, just like in your message. So enabling this option "revealed" that command; I'm guessing something like this could be responsible in your case.