
randall.crook's profile - activity

2019-01-23 03:03:02 -0600 received badge  Popular Question (source)
2019-01-23 03:03:02 -0600 received badge  Famous Question (source)
2019-01-23 03:03:02 -0600 received badge  Notable Question (source)
2018-08-11 01:09:13 -0600 asked a question Anyone else had Firefox issues with noVNC?

Having issues with Firefox 61.0.2 where it appears Firefox is not supplying the token when starting up the noVNC element.

Below are the logs generated when I select the console for an instance using Firefox and get the error:

Failed to connect to server (code: 1006)

But I get no issues if I use Google Chrome 68.0.3440.106.

/var/log/nova-consoleauth.log

2018-08-11 16:00:12.799 1554 INFO nova.consoleauth.manager [req-ebb823bd-6ef0-4cfc-91ae-32339b872ac9 fe59f9c226074e4594a66540bc578804 b56955e177e348dfaf0a86e28ffdbf4c - default default] Received Token: 77cd29a7-5625-4b0b-a79b-13174129a222, {'instance_uuid': u'1e48c584-49e5-465c-9dbb-dd71c25be879', 'access_url': u'http://10.1.1.17:6080/vnc_auto.html?token=77cd29a7-5625-4b0b-a79b-13174129a222', 'token': u'77cd29a7-5625-4b0b-a79b-13174129a222', 'last_activity_at': 1533967212.799178, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'10.1.1.17', 'port': u'5900'}
2018-08-11 16:00:13.591 1554 INFO nova.consoleauth.manager [req-e399a99c-dfc6-4f15-8e22-b4e14009f60d - - - - -] Checking Token: , False

/var/log/nova-novncproxy.log

2018-08-11 16:00:13.554 22507 INFO nova.console.websocketproxy [-] 10.1.1.68 - - [11/Aug/2018 16:00:13] 10.1.1.68: Plain non-SSL (ws://) WebSocket connection
2018-08-11 16:00:13.555 22507 INFO nova.console.websocketproxy [-] 10.1.1.68 - - [11/Aug/2018 16:00:13] 10.1.1.68: Version hybi-13, base64: 'False'
2018-08-11 16:00:13.555 22507 INFO nova.console.websocketproxy [-] 10.1.1.68 - - [11/Aug/2018 16:00:13] 10.1.1.68: Path: '/websockify'
2018-08-11 16:00:13.593 22507 INFO nova.console.websocketproxy [req-e399a99c-dfc6-4f15-8e22-b4e14009f60d - - - - -] handler exception: The token '' is invalid or has expired
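
The consoleauth log above shows a valid token being issued, yet the proxy sees an empty token on the '/websockify' path. As a rough check (a hedged sketch: it assumes the websocat tool is available and that nova-novncproxy will read the token from the websocket query string, neither of which is confirmed here), the websocket can be opened directly with the token taken from the consoleauth log:

    # Open the proxy's websocket by hand, passing the token explicitly.
    # Host, port and token are copied from the nova-consoleauth log above.
    websocat "ws://10.1.1.17:6080/websockify?token=77cd29a7-5625-4b0b-a79b-13174129a222"

If that connection is accepted, the problem would seem to be on the browser side (Firefox not forwarding the token) rather than in the proxy.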
2017-11-26 18:30:37 -0600 received badge  Famous Question (source)
2017-05-30 09:12:34 -0600 received badge  Notable Question (source)
2017-05-30 09:12:34 -0600 received badge  Popular Question (source)
2017-01-06 23:54:08 -0600 asked a question Can't get volume_clear option in cinder.conf to work.

I have an RDO packstack allinone Newton install running well on an old server, but I cannot convince Cinder not to wipe volumes on delete. It issues a dd from /dev/zero to clear the whole volume no matter what I set the volume_clear and volume_clear_size options to in both the cinder.conf and nova.conf files, even after multiple restarts and reboots.

I understand why it does this, but it becomes an issue when I am waiting a fair amount of time for the dd to completely wipe 160 GB volumes.

It's a simple home-use server with one disk; I don't need the security the wipe provides, yet Cinder keeps wiping the volumes before deleting them.
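
For reference, a minimal sketch of what I am trying to set (assuming the LVM backend section created by packstack is named [lvm]; the section name has to match whatever enabled_backends points at, since volume_clear is a per-backend option and a value set only under [DEFAULT] may be ignored when backends are enabled):

    # /etc/cinder/cinder.conf
    [lvm]
    volume_clear = none    # accepted values: none, zero, shred
    # volume_clear_size is only consulted when volume_clear is zero or shred

followed by restarting the volume service with: systemctl restart openstack-cinder-volume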

Any help would be appreciated.

Randall.

2016-11-12 11:20:49 -0600 received badge  Famous Question (source)
2016-10-25 11:43:01 -0600 received badge  Notable Question (source)
2016-10-24 15:18:17 -0600 received badge  Supporter (source)
2016-10-24 15:17:57 -0600 received badge  Scholar (source)
2016-10-24 15:16:16 -0600 commented answer How is local_gb set?

Worked like a charm... Thanks very much for the quick response.

Regards,

Randall.

2016-10-24 15:15:28 -0600 received badge  Popular Question (source)
2016-10-24 13:15:50 -0600 received badge  Student (source)
2016-10-24 03:14:47 -0600 received badge  Notable Question (source)
2016-10-24 03:14:47 -0600 received badge  Famous Question (source)
2016-10-24 03:14:47 -0600 received badge  Popular Question (source)
2016-10-24 03:12:12 -0600 asked a question How is local_gb set?

Just finished installing Newton on CentOS 7.2 using the current RDO packstack.

I made sure there was plenty of space by setting up a cinder-volumes VG, and all seemed to be OK until I tried to launch a third instance and got the following error:

"Error: Failed to perform requested operation on instance "dii", the instance has an error status: Please try again later [Error: No valid host was found. There are not enough hosts available.]".

The messages I'm getting in the log are:

2016-10-23 23:32:50.290 1507 WARNING nova.scheduler.host_manager [req-7fff9103-75ac-4578-8c11-0957545b7295 f68c4ddce4b24061ababda71362b7324 4752b726f6e84141b386e8f0adacae13 - - -] Host anesidora has more disk space than database expected (130 GB > 19 GB)
2016-10-23 23:32:50.291 1507 INFO nova.filters [req-7fff9103-75ac-4578-8c11-0957545b7295 f68c4ddce4b24061ababda71362b7324 4752b726f6e84141b386e8f0adacae13 - - -] Filter DiskFilter returned 0 hosts
2016-10-23 23:32:50.292 1507 INFO nova.filters [req-7fff9103-75ac-4578-8c11-0957545b7295 f68c4ddce4b24061ababda71362b7324 4752b726f6e84141b386e8f0adacae13 - - -] Filtering removed all hosts for the request with instance ID '9d0e6a3e-1401-4f14-bdd1-219dd60315be'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 0)']

Of course there is plenty of disk space on my cinder-volumes VG, where the volumes are actually being created:

[root@anesidora nova]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  centos           1   3   0 wz--n- 246.00g  15.86g
  cinder-volumes   1   2   0 wz--n- 684.82g 504.82g

And the volumes are being created on the cinder-volumes VG:

  LV                                          VG             Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00                                      centos         twi-aotz-- 140.00g               10.73  5.67
  root                                        centos         Vwi-aotz-- 140.00g pool00        10.73
  swap                                        centos         -wi-ao----  90.00g
  volume-75aa56f5-03e9-4d04-b7b7-147e44bb5abe cinder-volumes -wi-ao----  80.00g
  volume-eab6405f-3824-456d-bad4-aabfcca87d0d cinder-volumes -wi-ao---- 100.00g

From what I'm seeing, the hypervisor's local_gb has been set to the size of pool00 and not to the size of the cinder-volumes VG. I gather that as I add instances, local_gb_used is increased based on the instance volume size, so it runs out fast because local_gb is wrong for this install.
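
The figures the scheduler is working from can be checked with the Newton-era nova client (a hedged sketch; "anesidora" is the hypervisor host name from the log above, and some client versions may want the hypervisor ID instead of the name):

    # Show the disk accounting the DiskFilter compares against for this host.
    nova hypervisor-show anesidora | grep -E 'local_gb|local_gb_used'
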

Can anyone help with how I can reset the local_gb to the actual size of the cinder-volumes VG?

Thanks in advance,

Randall.

2016-06-10 09:40:54 -0600 asked a question What creates the /dev/disk/by-path links for the Mitaka LVM/iSCSI cinder volumes?

I am losing the /dev/disk/by-path links for the instances' Cinder volumes when I reboot the server (using a packstack --allinone setup). I can see the LVM LVs in /dev/disk/by-id, and I can see what the instances are asking for when I go to start them. I can manually create the links as a workaround, but whatever process is supposed to create them automatically is not doing so.

what should be there:

lrwxrwxrwx. 1 root root 10 Jun 10 21:33 ip-10.1.1.29:3260-iscsi-iqn.2010-10.org.openstack:volume-c8bc2d31-6473-45d7-97f8-17a129c6e936-lun-0 -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Jun 10 21:35 ip-10.1.1.29:3260-iscsi-iqn.2010-10.org.openstack:volume-ea0d69f8-9441-49b1-9947-d3136c83e92d-lun-0 -> ../../dm-3
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 ip-10.1.254.1:3260-iscsi-iqn.2001-04.com.crookware:storage.disk1.sys1.a-lun-0 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 ip-10.1.254.1:3260-iscsi-iqn.2001-04.com.crookware:storage.disk1.sys1.a-lun-0-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 ip-10.1.254.1:3260-iscsi-iqn.2001-04.com.crookware:storage.disk1.sys1.a-lun-1 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 ip-10.1.254.3:3260-iscsi-iqn.2005-10.org.freenas.ctl:cinder-lun-0 -> ../../sdd
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 pci-0000:00:1f.1-ata-1.0 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0-part3 -> ../../sda3

what ends up there after a reboot:

lrwxrwxrwx. 1 root root  9 Jun 10 20:49 ip-10.1.254.1:3260-iscsi-iqn.2001-04.com.crookware:storage.disk1.sys1.a-lun-0 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 ip-10.1.254.1:3260-iscsi-iqn.2001-04.com.crookware:storage.disk1.sys1.a-lun-0-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 ip-10.1.254.1:3260-iscsi-iqn.2001-04.com.crookware:storage.disk1.sys1.a-lun-1 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 ip-10.1.254.3:3260-iscsi-iqn.2005-10.org.freenas.ctl:cinder-lun-0 -> ../../sdd
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 pci-0000:00:1f.1-ata-1.0 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Jun 10 20:49 pci-0000:02:00.0-scsi-0:0:0:0-part3 -> ../../sda3

Using CentOS 7 as the server OS.
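
From what I can tell, those ip-*-lun-0 links are normally created by udev's persistent-storage rules when the iSCSI session for the volume is logged in, so a hedged sketch of one way to get them recreated by hand (target IQN and portal copied from the listing above) would be to re-login to the local cinder target and let udev make the links:

    # Re-discover and log in to the local cinder iSCSI target so udev
    # recreates the /dev/disk/by-path links for the volume.
    iscsiadm -m discovery -t sendtargets -p 10.1.1.29:3260
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-c8bc2d31-6473-45d7-97f8-17a129c6e936 -p 10.1.1.29:3260 --login
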

If anyone can point me in the direction of how these links are created it would be much appreciated. (I have checked udev rules but nothing in them seems close to what is ... (more)