Asked questions

utdream's profile - activity

2020-02-17 19:53:40 -0500 received badge  Popular Question (source)
2020-02-17 19:53:40 -0500 received badge  Famous Question (source)
2020-02-17 19:53:40 -0500 received badge  Notable Question (source)
2019-09-05 05:08:31 -0500 received badge  Famous Question (source)
2019-07-22 03:52:55 -0500 received badge  Famous Question (source)
2019-07-03 09:34:50 -0500 received badge  Popular Question (source)
2019-07-03 09:34:50 -0500 received badge  Notable Question (source)
2019-06-12 14:33:51 -0500 commented question What activities create queues in RabbitMQ of openstack?

OVH did a pretty interesting talk about message queuing at scale at the Denver Open Infrastructure Summit. Understanding the problems they had with it can really add to understanding what it does: https://www.openstack.org/videos/summ...

2019-06-12 12:59:13 -0500 commented question cant insert password into fedora VM

Can you create the VM successfully if you create it manually? Using a tool like virt-manager or virt-install?

2019-06-12 12:09:00 -0500 answered question [Glance] unauthorized (HTTP 401) on block1 but not controller?

After several days of hacking away at this, I finally figured it out. This turned out to be a timing issue. The system time on my "block1" (glance) server was totally different from the system time on the "controller" (keystone) server, and this was borking the glance auth process.

The crazy thing is that I had followed the documentation setting up Chrony, but never verified it was working properly. I am building this in a lab environment with carefully controlled networking and never allowed connections to time servers. When I was testing the working debug logs against the non-working debug logs, I noticed the difference in the log times, and having taken the time to set up Chrony in the first place, that pointed me in the right direction.

Hope this post helps someone in the future with a similar issue.
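For anyone hitting the same thing, a quick sketch of how to verify Chrony is actually synchronizing (standard chrony commands; "block1" is the hostname from my lab, so substitute your own):

```shell
# Check whether chronyd on this node is actually synchronized.
# "Leap status : Normal" plus a small "System time" offset means sync is
# working; "Not synchronised" means the node never reached a time source.
chronyc tracking

# List the configured time sources; '*' marks the currently selected one,
# and '?' means the source is unreachable (my lab blocked NTP entirely).
chronyc sources -v

# Quick cross-node sanity check: compare the two clocks directly.
date -u; ssh block1 date -u
```

If the clocks disagree by more than the token's tolerance, Keystone token validation on the skewed node will fail exactly like this.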

2019-06-10 11:31:18 -0500 commented question [Glance] unauthorized (HTTP 401) on block1 but not controller?

Thanks Bernd. I posted a snippet from the Glance API log in debug mode. Unfortunately there is little more than the 401 response in the log. Do you mean logs from Keystone maybe?

2019-06-10 11:29:29 -0500 received badge  Notable Question (source)
2019-06-08 01:39:57 -0500 received badge  Popular Question (source)
2019-06-06 21:32:54 -0500 commented question [Glance] unauthorized (HTTP 401) on block1 but not controller?

Nice. That is a handy little script. Thank you. I still just get a 401 HTML response that way, but I will remember that script for later.

Both controller and block1 have identical [keystone_authtoken] configs. The JSON confirms the ID for each user is the same. Anywhere else I can look?

2019-06-06 13:34:16 -0500 asked question [Glance] unauthorized (HTTP 401) on block1 but not controller?

I have Glance installed and working just fine on the controller. However, I'd prefer to have images stored on a separate server (or several separate servers connected to a SAN), but whenever I try to set up Glance on a server OTHER than the controller, I can't get authentication to work. Here's the rundown:

[root@reedfish ~]# openstack endpoint list --service image -c ID -c Region -c Enabled -c Interface -c URL
+----------------------------------+----------+---------+-----------+------------------------+
| ID                               | Region   | Enabled | Interface | URL                    |
+----------------------------------+----------+---------+-----------+------------------------+
| 0c8e133c511f44f7920c7949c2c6076d | VivioLab | True    | internal  | http://block1:9292     |
| 345b1fe10a404fe69fbba3eced54dcb5 | VivioLab | False   | internal  | http://controller:9292 |
| a02dc0e640b74a41a0a6735871f1dfd0 | VivioLab | False   | public    | http://controller:9292 |
| a0bda4de9d844f54a66c55a9a7037f87 | VivioLab | False   | admin     | http://controller:9292 |
| afb0973d92f84ac1914a486b8489bca4 | VivioLab | True    | admin     | http://block1:9292     |
| e8eaf47dec9842898ffce433e6bba9c1 | VivioLab | True    | public    | http://block1:9292     |
+----------------------------------+----------+---------+-----------+------------------------+

Here I have endpoints enabled on only one of the image servers. Here is what happens when I attempt to get an image list from that image server:

[root@reedfish ~]# openstack image list
Unauthorized (HTTP 401)

The /var/log/glance/api.log only shows the auth failing (it doesn't say why, or how it was checked):

...
2019-06-06 11:30:18.869 15817 DEBUG glance.api.middleware.version_negotiation [-] new path /v2/images process_request /usr/lib/python2.7/site-packages/glance/api/middleware/version_negotiation.py:70
2019-06-06 11:30:20.918 15817 INFO eventlet.wsgi.server [-] 192.168.253.20 - - [06/Jun/2019 11:30:20] "GET /v2/images HTTP/1.1" 401 566 2.050240

However, when I enable the endpoints on the controller, the same authentication settings work just fine, like so:

[root@reedfish ~]# openstack endpoint set --enable 345b1fe10a404fe69fbba3eced54dcb5
[root@reedfish ~]# openstack endpoint set --enable a02dc0e640b74a41a0a6735871f1dfd0
[root@reedfish ~]# openstack endpoint set --enable a0bda4de9d844f54a66c55a9a7037f87
[root@reedfish ~]# openstack image list

[root@reedfish ~]#

... I don't have any images at the moment, but the command runs just fine when the controller endpoints are enabled.

Looking at the (really old) posts from others having similar issues, I have checked a few things, like ensuring the regions are all the same. I'm thinking this is probably something similar, but I don't know where to look.

Any direction that anyone can provide on where to look to figure out why the authentication is failing from a remote server would be greatly appreciated!
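One way to narrow this down is to take the CLI out of the picture and replay the same request against both Glance endpoints with an identical token (plain curl plus the standard openstack client; "controller" and "block1" are my lab hostnames):

```shell
# Get a scoped token from Keystone (prints just the token ID).
TOKEN=$(openstack token issue -f value -c id)

# Hit each Glance endpoint with the same token. If one returns 200 and the
# other 401, the token itself is fine and the problem is local to the
# failing glance-api node: its [keystone_authtoken] config, its network
# path to Keystone, or clock skew invalidating the token on that host.
curl -s -o /dev/null -w "controller: %{http_code}\n" \
  -H "X-Auth-Token: $TOKEN" http://controller:9292/v2/images
curl -s -o /dev/null -w "block1: %{http_code}\n" \
  -H "X-Auth-Token: $TOKEN" http://block1:9292/v2/images
```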

UPDATE =============

In comparing the --debug output of a working command with that of a non-working command, it looks like the non-working command is not instantiating the image API. I'm not sure why, as I installed and configured both instances the same way, but here is the output:

BROKEN OUTPUT:

Making authentication request to http://controller:5000/v3/auth/tokens
http://controller:5000 "POST /v3/auth/tokens HTTP/1.1" 201 3371
{JSON SNIP FOR BREVITY}
http://block1:9292 "GET /v2/images HTTP/1.1" 401 358
RESP: [401] Connection: keep-alive Content-Length: 358 Content-Type: text/html; charset=UTF-8 Date: Thu, 06 Jun 2019 22:17:52 GMT Www-Authenticate: Keystone uri="http://controller:5000"
RESP BODY: Omitted, Content-Type is set to text/html; charset=UTF-8. Only application/json responses have their bodies logged.
Request returned failure status ...
2019-06-06 13:01:57 -0500 received badge  Fan
2019-06-03 17:11:43 -0500 received badge  Self-Learner (source)
2019-06-03 17:11:43 -0500 received badge  Teacher (source)
2019-06-03 16:53:14 -0500 commented question Flavor with name already exists

Could it be that your flavor exists in a different project that your current user doesn't have permission to see? Maybe check the DB directly for that flavor to see what project it belongs to?
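A rough sketch of that DB check, assuming a deployment where flavors live in the nova_api database (true for releases since Newton; table and column names can vary by release, and 'YOUR_FLAVOR_NAME' is a placeholder):

```shell
# Look the flavor up directly in the API database. flavor_projects maps
# non-public flavors to the projects allowed to see them; an empty
# project_id on a non-public flavor explains why another user can't see it.
mysql -u root -p nova_api -e \
  "SELECT f.name, f.is_public, fp.project_id
     FROM flavors f
     LEFT JOIN flavor_projects fp ON fp.flavor_id = f.id
    WHERE f.name = 'YOUR_FLAVOR_NAME';"
```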

2019-06-03 16:53:13 -0500 answered question Error 16: Inconsistent filesystem structure

Alright, I've figured this out. When I created the flavor, it looks like I misunderstood the RAM value: I assumed it was in GB instead of MB, and assigned a value of 2 (thinking that meant 2GB) to the flavor - which turned out to be only 2MB.

As soon as I corrected the flavor and gave it a workable amount of RAM to use (512 instead of 2), the system booted just fine.

Makes me wonder how many of the unanswered "Error 16: Inconsistent filesystem structure" threads I found on various forums were really just memory issues and not storage issues.

Anyway, that was a real BEAR to track down. Hope this post helps someone in the future.
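For reference, the standard client takes RAM in MiB and disk in GiB, so a workable lab flavor looks something like this (flavor name and sizes here are just my lab's choices):

```shell
# RAM is specified in MiB, disk in GiB. A value of 2 therefore means 2 MiB
# of RAM - far too little for GRUB to even load a kernel, which is what
# produced the misleading "Inconsistent filesystem structure" error.
openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.cirros

# Confirm the values actually stored:
openstack flavor show m1.cirros -c ram -c disk -c vcpus
```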

2019-06-01 00:35:48 -0500 commented question Error 16: Inconsistent filesystem structure

That was my first thought too. Re-downloaded the image, re-added to openstack, same thing. Tried your suggestion on both the cirros img file and the /var/lib/nova file and both came back okay. ("corrupt: false"). Might try a different OS...

2019-05-31 17:03:38 -0500 received badge  Editor (source)
2019-05-31 16:32:27 -0500 asked question Error 16: Inconsistent filesystem structure

Hi Folks,

I am busily learning about OpenStack in a 4-node dev environment and I've got most of the core services up and running well (keystone, neutron, glance, nova, horizon), but now I'm having an issue whenever I try to launch the Cirros image per the "Launch an Instance" Nova documentation. I can create an instance of Cirros, and it appears to build fine, but when I console into it all I get is an "Error 16: Inconsistent filesystem structure" error.

In researching this error, it appears to come from GRUB when it can't work out how to read the local file system. I don't get any errors (that I can find) in the nova-compute log. I can't find any errors in the libvirtd logs, or anywhere else honestly - just this error from GRUB.

I used the command shown here to create the image: https://docs.openstack.org/glance/rocky/install/verify.html

specifically: "--disk_format qcow2 --container-format bare", and I can confirm it with "openstack image show".

Is there some way that I can test the image settings perhaps?

I'll keep poking at it, but any direction anyone can provide would be greatly appreciated! ;)
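One way to sanity-check the image settings is to compare the real on-disk format with what Glance has recorded (standard qemu-img and openstack commands; the filename is the stock Cirros download name, so adjust for your copy):

```shell
# Report the actual format of the image file. If "file format" says raw
# (or anything other than qcow2) while the Glance record claims qcow2,
# that mismatch alone can leave the guest unbootable.
qemu-img info cirros-0.4.0-x86_64-disk.img

# Compare with what Glance thinks it has:
openstack image show cirros -c disk_format -c container_format
```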

=========

Additional notes:

I can open both the cirros img file and the VM's disk image using guestfish:

# guestfish -a /var/lib/nova/instances/5eb355e7-3b47-41ec-bd0f-2feff2f8822c/disk
><fs> run
><fs> list-filesystems
/dev/sda1: ext3
/dev/sda15: vfat
><fs> fsck ext3 /dev/sda1
0
2019-05-31 16:32:27 -0500 commented answer Nova "Conflicting resource provider name"

haha. They're little 1U's so the name is appropriate. ;) I have not reviewed the placement stuff yet and admittedly I'm still an openstack newb. Thank you for your help! Will try this and report back.

2019-05-31 13:26:37 -0500 received badge  Scholar (source)
2019-05-31 00:18:47 -0500 asked question Nova "Conflicting resource provider name"

Hi folks,

I've got a lab cluster set up that I'm testing OpenStack on and I'm running into the following error:

2019-05-30 18:53:01.203 23258 ERROR nova.scheduler.client.report [req-82e72f3e-4896-435b-9388-70c22247ca83 - - - - -] [req-69c5c0a6-d8bd-4f8c-bf6d-cfa18e13e487] Failed to create resource provider record in placement API for UUID 124e7c9f-7edb-4574-9675-48f0d686e968. Got 409: {"errors": [{"status": 409, "request_id": "req-69c5c0a6-d8bd-4f8c-bf6d-cfa18e13e487", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: anchovy.lab.local already exists.  ", "title": "Conflict"}]}.

... but I'm not sure how to address this. I've tried removing the server using "openstack compute service delete", but it didn't appear to have any effect.

Found this thread: https://ask.openstack.org/en/question/97941/how-to-remove-compute-host-in-controller-unfortunately-i-removed-that-host-but-name-still-available/

... but after re-adding the instance I ran into the same issue. I'm at a loss.

Any direction on this would be greatly appreciated.

Thank you in advance!

THE SOLUTION ==============================================================

I had to install the EPEL repo to get pip, which then allowed me to install 'osc-placement', which in turn provided the placement resource provider commands for the 'openstack' CLI.

Fix based on Bernd's excellent advice below:

# yum -y install epel-release
# yum -y install python-pip
# pip install osc-placement
# . admin-openrc
# openstack resource provider list
+--------------------------------------+-----------------------+------------+
| uuid                                 | name                  | generation |
+--------------------------------------+-----------------------+------------+
| 1824de38-b376-4d05-8c42-3c5a65adb6cc | anchovy.lab.local     |         11 |
+--------------------------------------+-----------------------+------------+
# openstack resource provider delete 1824de38-b376-4d05-8c42-3c5a65adb6cc
# ssh 192.168.253.40
# systemctl restart libvirtd.service openstack-nova-compute.service

... and things seem to be working just fine now. Thank you Bernd!

2019-05-30 21:08:48 -0500 received badge  Supporter (source)
2019-05-21 16:43:00 -0500 commented question new compute node does not show up

did you ever figure this out?