[Glance] unauthorized (HTTP 401) on block1 but not controller? [closed]

asked 2019-06-06 13:34:16 -0500 by utdream

updated 2019-06-06 17:25:58 -0500

I have Glance installed and working just fine on the controller. However, I'd prefer to have images stored on a separate server (or several separate servers connected to a SAN), but whenever I try to set up Glance on a server OTHER than the controller, I can't get authentication to work. Here's the rundown:

[root@reedfish ~]# openstack endpoint list --service image -c ID -c Region -c Enabled -c Interface -c URL
+----------------------------------+----------+---------+-----------+------------------------+
| ID                               | Region   | Enabled | Interface | URL                    |
+----------------------------------+----------+---------+-----------+------------------------+
| 0c8e133c511f44f7920c7949c2c6076d | VivioLab | True    | internal  | http://block1:9292     |
| 345b1fe10a404fe69fbba3eced54dcb5 | VivioLab | False   | internal  | http://controller:9292 |
| a02dc0e640b74a41a0a6735871f1dfd0 | VivioLab | False   | public    | http://controller:9292 |
| a0bda4de9d844f54a66c55a9a7037f87 | VivioLab | False   | admin     | http://controller:9292 |
| afb0973d92f84ac1914a486b8489bca4 | VivioLab | True    | admin     | http://block1:9292     |
| e8eaf47dec9842898ffce433e6bba9c1 | VivioLab | True    | public    | http://block1:9292     |
+----------------------------------+----------+---------+-----------+------------------------+
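
(For reference, the block1 endpoints above were created with the usual endpoint-create commands, roughly like this; the exact invocations are reconstructed and may differ from what I actually typed:)

openstack endpoint create --region VivioLab image public http://block1:9292
openstack endpoint create --region VivioLab image internal http://block1:9292
openstack endpoint create --region VivioLab image admin http://block1:9292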

Here I only have the endpoints for one of the image servers (block1) enabled. Here is what happens when I attempt to get an image list from that image server:

[root@reedfish ~]# openstack image list
Unauthorized (HTTP 401)

The /var/log/glance/api.log only shows the authentication failing (it doesn't say why, or how it was being checked):

...
2019-06-06 11:30:18.869 15817 DEBUG glance.api.middleware.version_negotiation [-] new path /v2/images process_request /usr/lib/python2.7/site-packages/glance/api/middleware/version_negotiation.py:70
2019-06-06 11:30:20.918 15817 INFO eventlet.wsgi.server [-] 192.168.253.20 - - [06/Jun/2019 11:30:20] "GET /v2/images HTTP/1.1" 401 566 2.050240
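
(The DEBUG lines above come from turning on debug logging in glance-api.conf; roughly like this, assuming the CentOS/RDO service name and that crudini is available:)

crudini --set /etc/glance/glance-api.conf DEFAULT debug True   # crudini may need installing first
systemctl restart openstack-glance-api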

However, when I enable the endpoints on the controller, the same authentication settings work just fine, like so:

[root@reedfish ~]# openstack endpoint set --enable 345b1fe10a404fe69fbba3eced54dcb5
[root@reedfish ~]# openstack endpoint set --enable a02dc0e640b74a41a0a6735871f1dfd0
[root@reedfish ~]# openstack endpoint set --enable a0bda4de9d844f54a66c55a9a7037f87
[root@reedfish ~]# openstack image list

[root@reedfish ~]#

... I don't have any images at the moment, but the command runs just fine when the controller endpoints are enabled.

Looking at the (really old) posts from others having similar issues, I have checked a few things, like ensuring the regions are all the same. I'm thinking this is probably something similar, but I don't know where to look.

Any direction that anyone can provide on where to look to figure out why the authentication is failing from a remote server would be greatly appreciated!
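
(The --debug traces compared in the UPDATE below were gathered roughly like this; the log file names are just for illustration:)

openstack --debug image list 2> block1-broken.log        # with only the block1 endpoints enabled
openstack --debug image list 2> controller-working.log   # after enabling the controller endpoints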

UPDATE =============

In comparing the --debug output of a working command with a non-working one, it looks like the non-working command is not instantiating the image API. I'm not sure why, as I installed and configured both instances the same way, but here is the output:

BROKEN OUTPUT:

Making authentication request to http://controller:5000/v3/auth/tokens
http://controller:5000 "POST /v3/auth/tokens HTTP/1.1" 201 3371
{JSON SNIP FOR BREVITY}
http://block1:9292 "GET /v2/images HTTP/1.1" 401 358
RESP: [401] Connection: keep-alive Content-Length: 358 Content-Type: text/html; charset=UTF-8 Date: Thu, 06 Jun 2019 22:17:52 GMT Www-Authenticate: Keystone uri="http://controller:5000"
RESP BODY: Omitted, Content-Type is set to text/html; charset=UTF-8. Only application/json responses have their bodies logged.
Request returned failure status ...

Closed for the following reason: the question is answered, right answer was accepted by utdream (close date 2019-06-12 14:24:05)

Comments

For more insight, submit API requests without the client. E.g.:

T=$(openstack token issue -c id -f value)
curl -H "x-auth-token: $T" http://block1:9292/v2/images

But I think the block1 Glance doesn't cooperate with the controller Keystone. Check [keystone_authtoken] in glance-api.conf on block1.

Bernd Bausch (2019-06-06 19:08:43 -0500)
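
(A quick way to compare that section on the two hosts; the option names follow the standard install guide and the values in the comments are only placeholders:)

grep -A10 '^\[keystone_authtoken\]' /etc/glance/glance-api.conf
# on both hosts this should point at the same Keystone and the same service credentials, e.g.:
#   www_authenticate_uri = http://controller:5000
#   auth_url = http://controller:5000
#   auth_type = password
#   project_name = service
#   username = glance
#   password = GLANCE_PASS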

Nice. That is a handy little script. Thank you. I still just get a 401 HTML response that way, but I will remember that script for later.

Both controller and block1 have identical [keystone_authtoken] configs. The JSON confirms the ID for each user is the same. Anywhere else I can look?

utdream (2019-06-06 21:32:54 -0500)

The logs. That's all I can think of right now. I hope the glance-api log on block1 contains clues as to why Glance over there has problems with your authentication.

Bernd Bausch (2019-06-08 02:20:36 -0500)

Thanks Bernd. I posted a snippet from the Glance API log in debug mode. Unfortunately there is little more than the 401 response in the log. Do you mean logs from Keystone maybe?

utdream (2019-06-10 11:31:18 -0500)

I did mean Glance logs, hoping they show how Glance contacts Keystone. The Keystone logs can also help, since they should show how Glance contacts it.

If the logs are inconclusive, I would take out the big guns and trace the traffic between the Glance and Keystone hosts: Wireshark or tcpdump -xX.

Bernd Bausch (2019-06-11 19:38:47 -0500)
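
(For the record, a capture along the lines Bernd suggests could look like this; the interface and filter are assumptions for this particular lab:)

# run on block1 and watch the token-validation traffic toward Keystone
tcpdump -i any -nn -xX host controller and port 5000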

1 answer


answered 2019-06-12 12:09:00 -0500 by utdream

After several days of hacking away at this, I finally figured it out. This turned out to be a timing issue. The system time on my "block1" (Glance) server was totally different from the system time on the "controller" (Keystone) server, and that was borking the Glance auth process (Keystone tokens carry timestamps, so a large clock skew makes the service reject otherwise valid tokens).

The crazy thing is that I had followed the documentation for setting up Chrony, but never verified it was working properly. I am building this in a lab environment with carefully controlled networking and had never allowed connections out to time servers. When I was comparing the working debug logs against the non-working ones, I noticed the difference in the log timestamps, and having taken the time to set up Chrony in the first place, that pointed me in the right direction.
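
A quick way to check for this kind of skew (a sketch; chrony is assumed, as in the install guide):

# compare the clocks on the two hosts
date; ssh controller date
# then check whether chrony actually has a reachable source
chronyc sources -v
chronyc tracking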

Hope this post helps someone in the future with a similar issue.

