
Multi-node nova VNC setup (error code 1006)

asked 2018-03-20 10:21:57 -0500

vercetty92

updated 2018-03-20 11:00:31 -0500

Hi Folks,

I'm going crazy trying to set up the [vnc] section of nova.conf.

I have 3 nodes, all of them both controller and compute host:

- node1
- node2
- node3
- HA VIP managed by keepalived

I tried several different configurations for the [vnc] section of nova.conf, but I always get "error code 1006" when I try to access an instance console via VNC. I think it's only a VNC proxy issue, because I can get a working console if I connect with a VNC client directly to the IP of the node running the instance (on port 5900).

The 3 nodes use several memcached servers (each node runs a memcached server).

I found several tutorials, but in most of them the controller and compute roles are not on the same node.

What should the [vnc] section contain on each node? My last attempt looked like this:


    [vnc]
    enabled = True
    vncserver_listen =
    vncserver_proxyclient_address = $my_ip
    novncproxy_base_url =

In the nova logs I can see that this message appears on a different node every time I try to access the console:

2018-03-20 16:58:17.242 1430 INFO nova.consoleauth.manager [req-01c5432d-1a74-4f2f-996c-7d76b3a1159c 6c332937dab249af916428758e416821 73e4d78698ea42afb8f0a48b9bd8bcc9 - default default] Received Token: 1f150a60-3109-4ed7-b44e-9002fb21e408, {'instance_uuid': u'f48cb7b9-91f4-4c42-9187-af081baf8292', 'access_url': u'', 'token': u'1f150a60-3109-4ed7-b44e-9002fb21e408', 'last_activity_at': 1521561497.239911, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'', 'port': u'5900'}

and this on another node (also random):

2018-03-20 16:58:22.011 1397 INFO nova.consoleauth.manager [req-fe3a012b-eb3c-4866-a1d2-3963726ea93c - - - - -] Checking Token: 1f150a60-3109-4ed7-b44e-9002fb21e408, False

Even when both messages are generated on the same node, I have the same problem.

Thanks a lot in advance.



nova-consoleauth can’t run in an active/active configuration. I guess the various consoleauth services don’t share token information. Could that be your problem?

Bernd Bausch ( 2018-03-20 18:56:42 -0500 )
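(For context: when multiple nova-consoleauth workers are kept running, their tokens have to live in a shared cache. A hedged nova.conf sketch of one way to do that, assuming a release whose consoleauth honours the oslo.cache [cache] section; the hostnames are placeholders:)

```ini
# Shared token cache for nova-consoleauth (hostnames are placeholders)
[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = node1:11211,node2:11211,node3:11211
```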

Hi Bernd, even if I stop nova-consoleauth on 2 nodes, same issue; but in that case the two messages I mentioned above are displayed on the same node (where consoleauth is still running).

vercetty92 ( 2018-03-21 03:46:34 -0500 )

Is anyone running a multi-node cluster with both controller and compute roles on each node and a working VNC console? Thanks.

vercetty92 ( 2018-03-23 04:58:05 -0500 )

2 answers


answered 2018-04-06 03:41:49 -0500

vercetty92

OK, so I recreated the cluster from scratch and it now works. Here is the setup on each node:

novncproxy_base_url= (this is the HA vip)

And indeed the openstack-nova-consoleauth service needs to run on only one controller, so I created a pacemaker group that contains the HA VIP and a consoleauth systemd resource.
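A minimal sketch of how such a group could be created with pcs (assuming a pacemaker cluster managed with pcs; the resource names and VIP address below are placeholders, not taken from the original setup):

```shell
# Hypothetical names and address; adjust to your cluster
pcs resource create console-vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
pcs resource create consoleauth systemd:openstack-nova-consoleauth
# Grouping colocates consoleauth with the VIP, so only one copy runs
pcs resource group add console-group console-vip consoleauth
```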




Bernd Bausch ( 2018-04-06 04:48:33 -0500 )

answered 2018-03-21 03:31:09 -0500

Deepa

Ensure the console is running with the command below:

openstack console log show --lines 100 “instance id”

Make sure nova-consoleauth and nova-novncproxy services are running.

And make sure nova.conf looks like this:

    vnc_enabled = True
    novnc_enabled = True
    vnc_keymap = en-us
    vncserver_listen =
    vncserver_proxyclient_address = $yourip
    novncproxy_base_url = $yoururl:6080/vnc_auto.html

If you change the configuration, make sure to restart the nova-consoleauth, nova-novncproxy and nova-compute services.
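On a systemd-based install, the restart could look like this (the unit names assume RDO/CentOS-style packaging and may differ on other distributions):

```shell
# Restart the console-related services after editing nova.conf
systemctl restart openstack-nova-consoleauth \
                  openstack-nova-novncproxy \
                  openstack-nova-compute
```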

Click on the console of the VM (from Horizon), open it in a new tab, refresh it 2-4 times and see.




Hi Deepa, the command you provided gives me no output; is that normal? But I know I can connect to the console without using the VNC proxy by targeting port 5900 of the host that owns the instance. I made the changes you mentioned (novnc_enabled and vnc_keymap were not present), but same thing. Thanks.

vercetty92 ( 2018-03-21 04:05:07 -0500 )

It should give you the last 100 lines of the instance's console log. I have the same configuration as above and it works fine for me. Also, did you refresh the console (4+ times) in the new tab?

Deepa ( 2018-03-21 04:21:05 -0500 )

Yes, I refreshed the console tab many times. In your setup, are all your hosts both controller and compute? And are all the OpenStack services running on each node (at least nova-api, nova-compute, nova-conductor, nova-consoleauth, nova-novncproxy, nova-scheduler)? Thanks.

vercetty92 ( 2018-03-21 04:55:40 -0500 )

No, mine is a different setup with separate controllers and computes; they are on different physical machines.

Deepa ( 2018-03-21 05:56:56 -0500 )

OK, so that matches all my findings: most people keep the controller and compute roles separate. My goal is to have all the roles on all the nodes, but it seems that's really not standard.

vercetty92 ( 2018-03-22 04:39:54 -0500 )




Asked: 2018-03-20 10:21:57 -0500

Seen: 706 times

Last updated: Apr 06 '18