
bertly's profile - activity

2018-11-27 08:49:59 -0600 received badge  Student (source)
2018-06-16 05:32:01 -0600 received badge  Popular Question (source)
2018-03-26 14:39:59 -0600 received badge  Notable Question (source)
2018-03-26 14:39:59 -0600 received badge  Popular Question (source)
2018-03-26 14:39:23 -0600 received badge  Famous Question (source)
2018-03-26 14:39:23 -0600 received badge  Popular Question (source)
2018-03-26 14:39:23 -0600 received badge  Notable Question (source)
2018-03-24 15:33:06 -0600 received badge  Famous Question (source)
2018-03-24 15:33:06 -0600 received badge  Notable Question (source)
2017-12-19 07:27:57 -0600 received badge  Famous Question (source)
2017-08-28 07:07:24 -0600 received badge  Famous Question (source)
2017-08-09 11:18:08 -0600 received badge  Notable Question (source)
2017-06-28 10:55:50 -0600 received badge  Famous Question (source)
2017-05-24 07:37:43 -0600 received badge  Notable Question (source)
2017-05-16 11:27:24 -0600 received badge  Notable Question (source)
2017-05-10 06:19:37 -0600 received badge  Popular Question (source)
2017-05-09 11:24:02 -0600 received badge  Popular Question (source)
2017-05-09 04:39:55 -0600 answered a question How to remove dashboard ubuntu theme?

I edited /var/lib/dpkg/info/openstack-dashboard-ubuntu-theme.postrm, removed everything except the exit 0, removed the package, and rebuilt openstack-dashboard. All good now.
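
Roughly these steps (a sketch; whether the last step should be a reinstall or a dpkg-reconfigure depends on the setup):

# neutralize the broken post-removal script so dpkg can remove the package
printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/openstack-dashboard-ubuntu-theme.postrm
# remove the theme, then reinstall the dashboard
apt-get remove openstack-dashboard-ubuntu-theme
apt-get install --reinstall openstack-dashboard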

2017-05-04 08:21:54 -0600 asked a question Disable "Create New Volume" from Horizon

Hi, it seems that the new version of Horizon in Newton has "Create New Volume" enabled by default. I tried to disable it within local_settings.py:

LAUNCH_INSTANCE_DEFAULTS = {
#    'config_drive': False,
#    'enable_scheduler_hints': True
    'disable_image': True,
#    'disable_instance_snapshot': False,
    'disable_volume': True,
#    'disable_volume_snapshot': False,
}

Is that the right place to look?
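
Or does it need a different key? Something like this is what I would try next (assuming a create_volume key exists in this release; I have not verified the name):

LAUNCH_INSTANCE_DEFAULTS = {
    # assumption: this key defaults the "Create New Volume" toggle to No
    'create_volume': False,
}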

Example:

http://i.imgur.com/4snRupO.png

Thanks

bert

2017-05-04 06:51:52 -0600 received badge  Popular Question (source)
2017-05-04 02:26:41 -0600 commented answer VolumeNotCreated: Volume did not finish being created even after we waited 197 seconds or 61 attempts

Yes, that's it. Thank you.

2017-05-04 02:26:27 -0600 received badge  Scholar (source)
2017-05-03 10:59:07 -0600 asked a question VolumeNotCreated: Volume did not finish being created even after we waited 197 seconds or 61 attempts

Hi, I am currently running into the following error while creating a new server. Cinder is very slow (which is okay for now) and nova-compute gives up waiting for the volume to be created.

Where can I configure these timeouts? I can't find them in the OpenStack Newton documentation.
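
From the numbers in the error (61 attempts, roughly 3 seconds apart), my guess is the nova-compute retry options below; a sketch, with example values:

/etc/nova/nova.conf (on the compute node)

[DEFAULT]
# number of times nova-compute polls cinder before giving up (default 60)
block_device_allocate_retries = 180
# seconds between the polls (default 3)
block_device_allocate_retries_interval = 3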

2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [req-cf72b121-a4f8-40a5-b235-38cabc8a508f 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - - -] [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f] Instance failed block device setup
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f] Traceback (most recent call last):
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1586, in _prep_block_device
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     wait_func=self._await_block_device_map_created)
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 514, in attach_block_devices
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     map(_log_and_attach, block_device_mapping)
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 512, in _log_and_attach
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     bdm.attach(*attach_args, **attach_kwargs)
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 404, in attach
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     self._call_wait_func(context, wait_func, volume_api, vol['id'])
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 363, in _call_wait_func
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     {'volume_id': volume_id, 'exc': exc})
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     self.force_reraise()
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     six.reraise(self.type_, self.value, self.tb)
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 353, in _call_wait_func
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     wait_func(context, volume_id)
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1258, in _await_block_device_map_created
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f]     volume_status=volume_status)
2017-05-03 17:55:18.229 30849 ERROR nova.compute.manager [instance: 0e4680eb-fe6a-4573-bbc4-84bdf27e879f] VolumeNotCreated: Volume 159efd05-b301-4994-b76d-4fd92695e876 did not finish being created even after we waited 197 seconds or 61 attempts. And its status is creating.
2017-05-03 17:55:18.229 30849 ...
2017-05-02 07:48:26 -0600 asked a question Invalid image identifier or unable to access requested image.

Hi, I did an upgrade from Mitaka to Newton and am running into a problem starting a virtual machine. As storage we are using Ceph. It seems that cinder cannot access glance, but the glance API responds with HTTP 200, and I see a different error in the nova-compute logs.

UPDATE: When I enable the glance v1 API in glance-api, it works again. Any idea why version 2 is not working?

glance-api.conf

...
enable_v1_api=True
enable_v2_api=True
...
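
One suspicion (unverified): Newton's cinder may still default to the glance v1 API, so with v1 disabled the image lookup fails. If that is the cause, the fix would be in cinder.conf, in [DEFAULT] rather than in a backend section:

/etc/cinder/cinder.conf

[DEFAULT]
# assumption: cinder defaults glance_api_version to 1 in this release;
# 2 makes it use the v2 API so v1 can stay disabled
glance_api_version = 2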

cinder-api.log

2017-05-02 14:35:52.845 3769 INFO cinder.api.openstack.wsgi [req-ce751821-c230-49d6-a5f3-4e003043607a 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] POST http://controller:8776/v2/3723cf7355e946efbf40f4f29ff845f6/volumes
2017-05-02 14:35:52.847 3769 INFO cinder.api.v2.volumes [req-ce751821-c230-49d6-a5f3-4e003043607a 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] Create volume of 1 GB
2017-05-02 14:35:52.865 3769 INFO cinder.api.openstack.wsgi [req-ce751821-c230-49d6-a5f3-4e003043607a 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] HTTP exception thrown: Invalid image identifier or unable to access requested image. <-------------------
2017-05-02 14:35:52.867 3769 INFO cinder.api.openstack.wsgi [req-ce751821-c230-49d6-a5f3-4e003043607a 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] http://controller:8776/v2/3723cf7355e946efbf40f4f29ff845f6/volumes returned with HTTP 400
2017-05-02 14:35:52.869 3769 INFO eventlet.wsgi.server [req-ce751821-c230-49d6-a5f3-4e003043607a 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 "POST /v2/3723cf7355e946efbf40f4f29ff845f6/volumes HTTP/1.1" status: 400  len: 392 time: 0.0301850

glance-api.log

2017-05-02 14:35:50.490 3678 INFO eventlet.wsgi.server [req-701563e7-c5cc-4c24-afeb-48696ee01d03 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 - - [02/May/2017 14:35:50] "GET /v2/images/d5dcdfa0-1638-4c53-9d2f-27781ff5c76b HTTP/1.1" 200 907 0.026464
2017-05-02 14:35:50.500 3678 INFO eventlet.wsgi.server [req-e5b3420a-741c-44e7-afb2-6e88c11957f9 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 - - [02/May/2017 14:35:50] "GET /v2/schemas/image HTTP/1.1" 200 4364 0.006064
2017-05-02 14:35:51.004 3679 INFO eventlet.wsgi.server [-] 172.16.102.99 - - [02/May/2017 14:35:51] "OPTIONS / HTTP/1.0" 200 94 0.002375
2017-05-02 14:35:51.017 3679 INFO eventlet.wsgi.server [-] 172.16.102.99 - - [02/May/2017 14:35:51] "OPTIONS / HTTP/1.0" 200 94 0.001645
2017-05-02 14:35:51.062 3679 INFO eventlet.wsgi.server [req-4cb01749-2138-4e09-a15b-3c25628b919b 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 - - [02/May/2017 14:35:51] "GET /v2/images/d5dcdfa0-1638-4c53-9d2f-27781ff5c76b HTTP/1.1" 200 907 0.043746
2017-05-02 14:35:51.076 3679 INFO eventlet.wsgi.server [req-87b96c28-b3ba-4dd6-a507-16a7c5bd13ea 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 - - [02/May/2017 14:35:51] "GET /v2/schemas/image HTTP/1.1" 200 4364 0.007997
2017-05-02 14:35:51.945 3680 INFO eventlet.wsgi.server [req-e108571a-2775-4c74-9399-f0141ba61c86 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 - - [02/May/2017 14:35:51] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 2661 0.056409
2017-05-02 14:35:51.958 3680 INFO eventlet.wsgi.server [req-52ed314c-921d-427f-bdcd-9e002e9ee6f2 217ce8c543b3495fa23b48a99cf9e4c5 3723cf7355e946efbf40f4f29ff845f6 - default default] 172.16.102.99 - - [02/May/2017 14:35:51] "GET /v2/schemas/image HTTP/1.1" 200 4364 0.007294

nova-compute.log

2017-05-02 14:35:56.575 30849 ERROR nova.compute ...
2017-01-06 05:14:53 -0600 received badge  Notable Question (source)
2017-01-06 04:08:40 -0600 commented question cinder-volume AttributeError

Removed all configuration, but I am still getting the same error.

2017-01-06 03:52:51 -0600 received badge  Supporter (source)
2017-01-06 03:52:44 -0600 commented answer Cinder gives ServiceTooOld: One of the services is in Liberty version after upgrade.

Yes, deleting the services that had None as RPC version and object version worked.
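
For reference, one way to inspect and remove such entries (a sketch; assuming this cinder-manage version has the "service remove" subcommand, and <old-host> is a placeholder):

# list all registered services with their RPC and object versions
cinder-manage service list
# remove a stale row; binary and host must match the listing
cinder-manage service remove cinder-volume <old-host>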

2017-01-06 02:36:44 -0600 commented answer Cinder gives ServiceTooOld: One of the services is in Liberty version after upgrade.

Is it safe to remove the cinder services? Will they be recreated?

2017-01-05 06:46:14 -0600 commented question cinder-volume AttributeError

I already enabled verbose logging. The folder /usr/share/cinder does not exist. I also pasted my config.

2017-01-05 06:23:03 -0600 received badge  Popular Question (source)
2017-01-05 05:03:47 -0600 commented question cinder-volume AttributeError

Updated all packages (dist-upgrade) and upgraded the OpenStack packages to Newton.

2017-01-05 04:16:30 -0600 answered a question Cinder gives ServiceTooOld: One of the services is in Liberty version after upgrade.

Did you find a way to solve the problem? I am running into the same one now.

Thanks

bert

2017-01-05 03:40:23 -0600 asked a question cinder-volume AttributeError

Hi, I just upgraded from Mitaka to Newton and Ubuntu 16.04 and am now getting a Python error from cinder-volume.

# /etc/init.d/cinder-volume systemd-start
Traceback (most recent call last):
  File "/usr/bin/cinder-volume", line 6, in <module>
    from cinder.cmd.volume import main
  File "/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 39, in <module>
    from oslo_privsep import priv_context
  File "/usr/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 27, in <module>
    from oslo_privsep import daemon
  File "/usr/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 213, in <module>
    def _fd_logger(level=logging.WARN):
AttributeError: 'module' object has no attribute 'WARN'

Any idea why this is happening?
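
My only guess so far: something is shadowing the standard library logging module (logging.WARN has been there forever), e.g. a stray pip-installed "logging" package. This one-liner shows which file actually gets imported:

python -c 'import logging; print(logging.__file__); print(hasattr(logging, "WARN"))'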

Thanks

bert

UPDATE:

# dpkg -l cinder-volume
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                                                      Version                                   Architecture                              Description
+++-=========================================================================-=========================================-=========================================-========================================================================================================================================================
ii  cinder-volume                                                             2:9.0.0-0ubuntu1~cloud0                   all                                       Cinder storage service - Volume server

/etc/cinder/cinder.conf

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
auth_strategy = keystone
my_ip = 10.10.102.6
enabled_backends = ceph
glance_api_servers = http://controller:9292
transport_url = rabbit://openstack:SECRETPW@controller

[database]
connection = mysql+pymysql://cinder:SECRETPW@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = SECRETPW

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 41d2271d-fb04-416b-8844-90929b45d783
volume_backend_name = ceph

Do you see anything suspicious? Unfortunately I don't have any other log, which I find a little weird (or it's the systemd way...).
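
With systemd the service output may only end up in the journal; if so, something like this should show it (assuming the unit is called cinder-volume):

# show the tail of the cinder-volume unit's journal
journalctl -u cinder-volume -e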

UPDATE2:

Removed all cinder packages and the oslo Python packages. Now I am getting this:

Setting up cinder-common (2:9.0.0-0ubuntu1~cloud0) ...
Traceback (most recent call last):
  File "/usr/bin/cinder-manage", line 6, in <module>
    from cinder.cmd.manage import main
  File "/usr/lib/python2.7/dist-packages/cinder/cmd/manage.py", line 76, in <module>
    from cinder import context
  File "/usr/lib/python2.7/dist-packages/cinder/context.py", line 29, in <module>
    from cinder import policy
  File "/usr/lib/python2.7/dist-packages/cinder/policy.py", line 21, in <module>
    from oslo_policy import policy
  File "/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py", line 215, in <module>
    from oslo_policy.openstack.common import fileutils
  File "/usr/local/lib/python2.7/dist-packages/oslo_policy/openstack/common/fileutils.py", line 23, in <module>
    from oslo.utils import excutils
ImportError: No module named utils
dpkg: error processing package cinder-common (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of cinder-volume:
 cinder-volume depends on cinder-common (= 2:9.0.0-0ubuntu1~cloud0); however:
  Package cinder-common is not configured yet.

dpkg: error processing package cinder-volume (--configure):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.23-0ubuntu5) ...
No apport report written because the error message indicates its a ...
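
Note how the traceback mixes /usr/lib/... (the Ubuntu packages) with /usr/local/lib/... (pip installs); old pip copies of oslo are probably shadowing the packaged ones. A quick check (a sketch):

# anything oslo-related under /usr/local is a pip leftover shadowing the .deb copies
ls /usr/local/lib/python2.7/dist-packages/ | grep -i oslo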
2017-01-02 09:31:26 -0600 answered a question Neutron releases 3 IPs after minor update in Mitaka

Okay, it was caused by AppArmor. I removed it, but libvirtd was not restarted. Since I can't restart them at the moment, I installed AppArmor again and it's working. Very weird; it would be interesting to know why this caused three different MAC addresses for one VM.

Any insights?

2017-01-02 08:42:19 -0600 asked a question Neutron releases 3 IPs after minor update in Mitaka

Hi, I just did a minor update of Mitaka to "8.3.0-0ubuntu1.1~cloud0" on Ubuntu 14.04. Nothing special happened during the upgrade. Only RabbitMQ was also upgraded, to 3.6.6, but that is working fine.

The problem I have now is that new VMs get three IPs assigned when booting. The VM has only one interface and one MAC address; that MAC gets an IP address and the VM is reachable through it. The two other IPs belong to different MAC addresses, but they are still assigned to the VM.

E.g.:

root@gc-controller01:/var/lib/neutron/dhcp/14594365-3c61-4228-a70f-68cc37b67f84# grep 10.10.105.149 *
addn_hosts:10.10.105.149    host-10-10-105-149.openstacklocal host-10-10-105-149
host:fa:16:3e:59:14:ac,host-10-10-105-149.openstacklocal,10.10.105.149  <-------- this is the real MAC
root@gc-controller01:/var/lib/neutron/dhcp/14594365-3c61-4228-a70f-68cc37b67f84# grep 10.10.105.148 *
addn_hosts:10.10.105.148    host-10-10-105-148.openstacklocal host-10-10-105-148
host:fa:16:3e:76:91:13,host-10-10-105-148.openstacklocal,10.10.105.148
root@gc-controller01:/var/lib/neutron/dhcp/14594365-3c61-4228-a70f-68cc37b67f84# grep 10.10.105.147 *
addn_hosts:10.10.105.147    host-10-10-105-147.openstacklocal host-10-10-105-147
host:fa:16:3e:0e:8b:5e,host-10-10-105-147.openstacklocal,10.10.105.147
root@gc-controller01:/var/lib/neutron/dhcp/14594365-3c61-4228-a70f-68cc37b67f84#

Within the VM:

Jan  2 14:07:22 monero-1 dhclient[954]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0xf6ca6917)
Jan  2 14:07:22 monero-1 dhclient[954]: DHCPREQUEST of 10.10.105.149 on eth0 to 255.255.255.255 port 67 (xid=0x1769caf6)
Jan  2 14:07:22 monero-1 sh[939]: DHCPREQUEST of 10.10.105.149 on eth0 to 255.255.255.255 port 67 (xid=0x1769caf6)
Jan  2 14:07:22 monero-1 sh[939]: DHCPOFFER of 10.10.105.149 from 10.10.104.11
Jan  2 14:07:22 monero-1 dhclient[954]: DHCPOFFER of 10.10.105.149 from 10.10.104.11
Jan  2 14:07:22 monero-1 dhclient[954]: DHCPACK of 10.10.105.149 from 10.10.104.11
Jan  2 14:07:22 monero-1 sh[939]: DHCPACK of 10.10.105.149 from 10.10.104.11
Jan  2 14:07:22 monero-1 dhclient[954]: bound to 10.10.105.149 -- renewal in 155961 seconds.
Jan  2 14:07:22 monero-1 sh[939]: bound to 10.10.105.149 -- renewal in 155961 seconds.
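
To see which ports Neutron actually created for the instance (a sketch; <vm-uuid> is a placeholder, and the double dash passes device_id through as a list filter):

# list all ports bound to this instance; each row is one MAC/IP pair
neutron port-list -- --device_id=<vm-uuid>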

Any idea why this is happening? RabbitMQ looks good, and the VM is running and reachable. Nothing suspicious in the logs except this, but I doubt it has anything to do with the problem:

2017-01-02 14:55:58.244 12136 ERROR nova.scheduler.utils [req-bc03ee37-35b0-45f1-97c8-972ef8a36b20 c38bf6f97e134c26aba5041afcd17a68 b33954142af1438eadbd159b3801db7f - - -] [instance: 93cb46fa-4585-4090-9565-cb41f65a5937] Error from last host: gc-hv04.example.com (node gc-hv04.example.com): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance ...