Multi-node nova VNC setup (error code 1006)
Hi Folks,
I'm going crazy trying to set up the [vnc] section of nova.conf.
I have 3 nodes, each acting as both controller and compute host:
10.106.1.90 --> node1
10.106.1.99 --> node2
10.106.1.100 --> node3
10.106.1.101 --> HA VIP managed by keepalived
I have tried several different configurations for the [vnc] section of nova.conf, but I always get an "error code 1006" when I try to access an instance console via VNC. I think it's only a VNC proxy issue, because I do get a working console if I connect with a VNC client directly to the IP of the node running the instance (on port 5900).
The 3 nodes use several memcached servers (each node runs its own memcached instance).
I found several tutorials, but most of the time the controller and compute roles are not on the same node.
What should the [vnc] section contain on each node? My last attempt looked like this:
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://10.106.1.101:6080/vnc_auto.html
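(For clarity: `$my_ip` expands per node, so the proxyclient address differs on each host. On node2, for example, assuming `my_ip` resolves to 10.106.1.99, the effective section would be:)

```ini
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.106.1.99
novncproxy_base_url = http://10.106.1.101:6080/vnc_auto.html
```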
In the nova logs I can see that this message appears on a different node every time I try to access the console:
2018-03-20 16:58:17.242 1430 INFO nova.consoleauth.manager [req-01c5432d-1a74-4f2f-996c-7d76b3a1159c 6c332937dab249af916428758e416821 73e4d78698ea42afb8f0a48b9bd8bcc9 - default default] Received Token: 1f150a60-3109-4ed7-b44e-9002fb21e408, {'instance_uuid': u'f48cb7b9-91f4-4c42-9187-af081baf8292', 'access_url': u'http://10.106.1.101:6080/vnc_auto.html?token=1f150a60-3109-4ed7-b44e-9002fb21e408', 'token': u'1f150a60-3109-4ed7-b44e-9002fb21e408', 'last_activity_at': 1521561497.239911, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'10.106.1.99', 'port': u'5900'}
and this on another node (also random):
2018-03-20 16:58:22.011 1397 INFO nova.consoleauth.manager [req-fe3a012b-eb3c-4866-a1d2-3963726ea93c - - - - -] Checking Token: 1f150a60-3109-4ed7-b44e-9002fb21e408, False
Even when both messages are generated on the same node, I still have the problem.
Thanks a lot in advance.
nova-consoleauth can't run in an active/active configuration. I guess that the various consoleauth instances don't share token information. Could that be your problem?
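If that is the cause, one common approach is to point all nova-consoleauth instances at the same memcached pool via the [cache] section of nova.conf, so a token issued on one node can be validated on any other. A sketch, assuming memcached listens on the default port 11211 on each of your three nodes:

```ini
# nova.conf on every node -- shared token cache so all consoleauth
# instances see the same tokens (port 11211 is an assumption)
[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = 10.106.1.90:11211,10.106.1.99:11211,10.106.1.100:11211
```

After changing this you would need to restart nova-consoleauth and nova-novncproxy on all nodes.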
Hi Bernd, even if I stop nova-consoleauth on 2 of the nodes, I get the same issue, but in this case the 2 messages I mentioned above are displayed on the same node (the one where consoleauth is still running).
Is anyone running a multi-node cluster with both controller and compute roles on each node and a working VNC console? Thanks.