
RDO: Keystone dies after 1 - 3 days, can't login into horizon dashboard

asked 2014-03-20 12:52:22 -0500

cloudssky

updated 2014-03-20 13:38:36 -0500

On an RDO installation (one controller and 2 compute nodes) I had the following problem: I couldn't log in to the dashboard.

The message at login was:

An error occurred authenticating. Please try again later.

The immediate fix was to start keystone:

[root@csky03 ~]# /etc/init.d/openstack-keystone status

keystone dead but pid file exists

[root@csky03 ~]# /etc/init.d/openstack-keystone start

Starting keystone: [ OK ]

[root@csky03 ~]# /etc/init.d/openstack-keystone status

keystone (pid 18754) is running...

But the problem remains, because keystone dies again after 1 - 3 days.

Is this specific to RDO with a single controller and 2 compute nodes, or does this also happen on other distros?

I found some similar questions here that suggested setting SELinux to permissive or disabling it entirely, but in my case that's already done.

Is anybody else facing a similar problem?

Any ideas why keystone dies after 1 - 3 days?

Note: a funny observation after some weeks: if I work with the system the whole day, this never happens, but it seems that if I sleep, keystone likes to sleep too :-) Sometimes, though, it takes more than 2 days to go into sleep mode.

As a workaround, I'll now create a cron job to start keystone and will report back whether it fixes the problem after 1 - 3 days:

[root@controller ~]# crontab -e

0 3 * * * /etc/init.d/openstack-keystone start
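A slightly smarter variant would be a small watchdog script that cron runs every few minutes and that only restarts keystone when the status check fails (a minimal sketch; the path /usr/local/bin/keystone-watchdog.sh is an arbitrary choice):

[root@controller ~]# cat /usr/local/bin/keystone-watchdog.sh

#!/bin/sh
# Restart keystone only when the init script reports it is not running.
if ! /etc/init.d/openstack-keystone status >/dev/null 2>&1; then
    logger -t keystone-watchdog "keystone is down, restarting it"
    /etc/init.d/openstack-keystone restart
fi

[root@controller ~]# chmod +x /usr/local/bin/keystone-watchdog.sh

[root@controller ~]# crontab -e

*/5 * * * * /usr/local/bin/keystone-watchdog.sh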


Comments

What sort of errors do you see in the keystone log file after it has died?

larsks ( 2014-03-20 13:15:39 -0500 )

Hi Lars, there are no errors, only these INFO messages:

2014-03-17 22:07:45.380 2964 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000

2014-03-20 18:21:46.511 18754 INFO keystone.common.environment [-] Environment configured as: eventlet

cloudssky ( 2014-03-20 13:25:36 -0500 )

Which openstack-keystone version is that? Please add debug = True to keystone.conf before the next restart; hopefully that will capture more information. Or run it from a shell with keystone-all -d and capture the terminal output, e.g. using script(1).

Alan Pevec ( 2014-03-21 05:27:43 -0500 )
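For reference, following that advice means adding the flag to the [DEFAULT] section of /etc/keystone/keystone.conf:

[DEFAULT]
debug = True

A foreground run captured with script(1) could then look like this (a minimal sketch; stop the service first so port 5000 is free):

[root@controller ~]# /etc/init.d/openstack-keystone stop

[root@controller ~]# script -c '/usr/bin/keystone-all -d' /tmp/keystone-debug.log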

RIP Keystone.

punalpatel ( 2014-03-23 03:43:40 -0500 )

Have you looked in the system logs to see if the process was killed because of out of memory issues?

dstanek ( 2014-03-27 08:16:54 -0500 )
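On RHEL/CentOS, checking for that could look like this (a quick sketch of what dstanek suggests):

[root@controller ~]# grep -i 'out of memory' /var/log/messages

[root@controller ~]# dmesg | grep -i 'killed process'

If the OOM killer took keystone down, the kernel log should name the killed keystone-all process.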

3 answers


answered 2014-03-27 15:08:20 -0500

cloudssky

updated 2014-03-27 15:09:45 -0500

The simplest workaround for me was to set up the cron job as:

0 */1 * * * /etc/init.d/openstack-keystone restart

It has been working for a whole week now.

Thanks to tim-bell and SunSunich for their kind feedback.

I'll try SunSunich's advice and let you know.


answered 2014-03-26 02:39:27 -0500

SunSunich

Maybe this problem occurs because the keystone database has too many old token records. I use this in cron:

0 * * * * keystone-manage token_flush
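To confirm that expired tokens are really the cause before adding the job, you can check how large the token table has grown and flush it once by hand (a sketch, assuming the default SQL token backend in a MySQL database named keystone):

[root@controller ~]# mysql -u root -p keystone -e 'SELECT COUNT(*) FROM token;'

[root@controller ~]# keystone-manage token_flush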


answered 2014-03-23 09:46:18 -0500

tim-bell

There are some socket issues with keystone in the original Havana release. The bug chain is at https://bugs.launchpad.net/python-nov... . A fix is now being backported to stable Havana, so it should appear in RDO when the next stable release is out.
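Until that release lands, you can at least see which build you are running and watch for the update (a sketch; the exact fixed version number comes from the bug chain above):

[root@controller ~]# rpm -q openstack-keystone

[root@controller ~]# yum list updates openstack-keystone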

