binooetomo's profile - activity

2019-02-19 19:25:34 -0600 asked a question redis sentinel

Dear All.

Does the Redis client library/module used by the OpenStack services/components work with redis-sentinel?

Sincerely -binjo-
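It depends on the library each service uses. Services that reach Redis through the tooz coordination library can use Sentinel: tooz's Redis driver accepts a `sentinel` query parameter (and `sentinel_fallback` for additional sentinels) in the backend URL. A sketch, with placeholder hostnames and the monitored-master name `mymaster` assumed:

```ini
[coordination]
# Placeholder hosts; "mymaster" is the master name monitored by Sentinel.
backend_url = redis://sentinel1:26379?sentinel=mymaster&sentinel_fallback=sentinel2:26379
```

Services that embed a plain Redis client directly would need checking case by case.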

2019-02-19 11:48:38 -0600 received badge  Notable Question (source)
2019-02-17 23:07:57 -0600 received badge  Popular Question (source)
2019-02-16 04:50:55 -0600 asked a question instance creation stuck on 'BUILD' state

I have an all-in-one OpenStack box.

When I try to create an instance, it is 'created', but its state is stuck at 'BUILD'.

[root@ceph-node4 ~]# date
Sab Peb 16 17:29:16 WIB 2019
[root@ceph-node4 ~]# openstack server show 90c793bf-937d-43f0-858f-a44cd2a93773 -f json
{
  "OS-EXT-STS:task_state": "scheduling", 
  "addresses": "", 
  "image": "cirros (a84594f4-c281-48fd-a21a-acf25e4fb9ae)", 
  "OS-EXT-STS:vm_state": "building", 
  "OS-EXT-SRV-ATTR:instance_name": "instance-00000025", 
  "OS-SRV-USG:launched_at": null, 
  "flavor": "cilikan (d3fc4063-f239-4ede-975c-24e92ab18eaf)", 
  "id": "90c793bf-937d-43f0-858f-a44cd2a93773", 
  "volumes_attached": "", 
  "user_id": "2fe8b8d1cf68434eaace73007cc8b5cb", 
  "OS-DCF:diskConfig": "AUTO", 
  "accessIPv4": "", 
  "accessIPv6": "", 
  "progress": 0, 
  "OS-EXT-STS:power_state": "NOSTATE", 
  "OS-EXT-AZ:availability_zone": "nova", 
  "config_drive": "", 
  "status": "BUILD", 
  "updated": "2019-02-16T10:29:23Z", 
  "hostId": "", 
  "OS-EXT-SRV-ATTR:host": null, 
  "OS-SRV-USG:terminated_at": null, 
  "key_name": "mb00g", 
  "properties": "", 
  "project_id": "d4daffb2e58d4cc198d097a753c2511f", 
  "OS-EXT-SRV-ATTR:hypervisor_hostname": null, 
  "name": "gombal-01", 
  "created": "2019-02-16T10:14:15Z"
}

I'm tailing all the logs under /var/log/neutron/ and /var/log/nova/, and could not find any error.

I also listed /var/log/libvirt/qemu/ just to check whether there is anything about my instance, but there is nothing.

Please tell me what to look at to troubleshoot this problem.

Sincerely

-bino-
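A sketch of the usual checks for an instance stuck in the 'scheduling' task state, assuming the all-in-one layout above (run from an admin-sourced shell; these commands need a live deployment):

```shell
# Verify the nova services and placement are up and heartbeating.
openstack compute service list
nova-status upgrade check

# The scheduler hands off to nova-conductor; scheduling failures often
# land there rather than in nova-scheduler or nova-compute logs.
grep -i 'ERROR\|NoValidHost' /var/log/nova/nova-conductor.log

# A request stuck in 'scheduling' can also mean a lost RPC message;
# look for queues that keep growing.
rabbitmqctl list_queues | sort -rnk2 | head
```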

Just for reference,

Last 200 lines of nova-scheduler log

[root@ceph-node4 ~]# tail -n 200 /var/log/nova/nova-scheduler.log 
2019-02-16 17:38:19.574 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.574 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] scheduler.query_placement_for_availability_zone = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.575 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] scheduler.workers              = 4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.576 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.577 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.auth_type               = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.577 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.578 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.catalog_info            = volumev3:cinderv3:publicURL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.579 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.579 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.580 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.581 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.582 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.582 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3032
2019-02-16 17:38:19.583 70793 DEBUG oslo_service.service [req-d850efa6-0a3f-4f7c-b3ac-c7c7feee7639 - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py ...
(more)
2019-02-11 00:43:35 -0600 received badge  Popular Question (source)
2019-02-11 00:14:34 -0600 asked a question Conflicting resource provider name

Dear all ...

I have a controller node with hostname 'ceph-node1'. Currently, it also functions as a compute node.

[root@ceph-node1 ~]# openstack hypervisor list -f json
[
  {
    "Hypervisor Hostname": "ceph-node1", 
    "Host IP": "192.168.5.81", 
    "State": "up", 
    "ID": 3, 
    "Hypervisor Type": "QEMU"
  }
]

I also have a host with hostname 'nova-blade0101'; this host will function only as a compute node. When I tried to start openstack-nova-compute.service for the first time, I got these error messages in the log.

2019-02-11 12:35:16.336 17965 ERROR nova.scheduler.client.report [req-d88d0b70-ec32-4e0d-8bfe-3b920d37d5d0 - - - - -] [req-ed36775b-1715-4a70-b709-83095a846f6c] Failed to create resource provider record in placement API for UUID 44c8d87b-50bd-47ae-8672-5f19d8be16f7. Got 409: {"errors": [{"status": 409, "request_id": "req-ed36775b-1715-4a70-b709-83095a846f6c", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: nova-blade0101 already exists.  ", "title": "Conflict"}]}.
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager [req-d88d0b70-ec32-4e0d-8bfe-3b920d37d5d0 - - - - -] Error updating resources for node nova-blade0101.: ResourceProviderCreationFailed: Failed to create resource provider nova-blade0101


2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager Traceback (most recent call last):
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7778, in _update_available_resource_for_node
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 721, in update_available_resource
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 798, in _update_available_resource
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     self._update(context, cn)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2019-02-11 12:35:16.337 17965 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line ...
(more)
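A hedged sketch of one common way out of this 409: the name nova-blade0101 may already exist in placement under a different UUID (for example from an earlier start with different configuration). With the osc-placement CLI plugin installed, the stale provider can be inspected and, if it holds no usage, deleted so the compute service can re-register (`<uuid>` below is a placeholder for whatever the list command returns):

```shell
# Find the UUID placement already has for this provider name.
openstack resource provider list --name nova-blade0101

# Confirm the stale provider has no usage, then delete it so
# nova-compute can re-create it with its own UUID.
openstack resource provider usage show <uuid>
openstack resource provider delete <uuid>
```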
2019-02-08 23:50:16 -0600 commented question NeutronAdminCredentialConfigurationInvalid raised when 'Launch instance'

Solved: I added project_domain_name and created the server from the CLI without defining a security group.

My fault here.

I really appreciate your help.
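For reference, a sketch of the [neutron] section in nova.conf that this fix produces, following the shape of the Rocky install guide (the controller host, domain names, and password are placeholders for this deployment's actual values):

```ini
[neutron]
auth_type = password
auth_url = http://controller:5000
# The two *_domain_name options are the ones described as missing above.
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
region_name = RegionOne
```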

2019-02-08 23:19:58 -0600 commented question NeutronAdminCredentialConfigurationInvalid raised when 'Launch instance'

Dear Bernd Bausch. Hopefully it is now edited.

Yes, I'm trying to understand the installation guide you referred to.

But no luck yet.

2019-02-08 23:19:58 -0600 received badge  Commentator
2019-02-08 18:58:58 -0600 commented question NeutronAdminCredentialConfigurationInvalid raised when 'Launch instance'

Dear Bernd Bausch. Thank you for your response.

Yes, I checked that before I made this post. I have added the [neutron] part of my nova.conf to the question.

All I found that looks like credentials are 'username' and 'password'. Are those what I have to verify? If so, they have been there from the start.

2019-02-08 03:07:43 -0600 asked a question NeutronAdminCredentialConfigurationInvalid raised when 'Launch instance'

Dear All ...

I got a nova error about 'NeutronAdminCredentialConfigurationInvalid'.

What I do is log in to Horizon as admin and try to create an instance.

Here is /var/log/nova/nova-api.log right after I click the 'Launch Instance' button.

2019-02-08 15:56:16.138 19148 INFO nova.osapi_compute.wsgi.server [req-914666f9-5a2d-4323-9f92-515276f75565 69a638fb9e5d4788b6f23d67180f85da 2568b941b3a54c13b57c930fe3805d3c - default default] 127.0.0.1 "GET /v2.1 HTTP/1.0" status: 302 len: 300 time: 3.1299040



2019-02-08 15:56:19.252 19148 INFO nova.osapi_compute.wsgi.server [req-a53933c1-f157-4c24-ab87-5f5b14839e84 69a638fb9e5d4788b6f23d67180f85da 2568b941b3a54c13b57c930fe3805d3c - default default] 127.0.0.1 "GET /v2.1/ HTTP/1.0" status: 200 len: 771 time: 3.1098912





2019-02-08 15:56:39.069 19147 ERROR nova.network.neutronv2.api [req-bc73b885-bab4-4116-8497-96bbe782ea4d 69a638fb9e5d4788b6f23d67180f85da 2568b941b3a54c13b57c930fe3805d3c - default default] Neutron client was not able to generate a valid admin token, please verify Neutron admin credential located in nova.conf: Unauthorized: 401-{u'error': {u'message': u'The request you have made requires authentication.', u'code': 401, u'title': u'Unauthorized'}}
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi [req-bc73b885-bab4-4116-8497-96bbe782ea4d 69a638fb9e5d4788b6f23d67180f85da 2568b941b3a54c13b57c930fe3805d3c - default default] Unexpected exception in API method: NeutronAdminCredentialConfigurationInvalid: Networking client is experiencing an unauthorized exception.
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi Traceback (most recent call last):
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 801, in wrapped
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return f(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2019-02-08 15:56:39.070 19147 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2019-02-08 15:56:39.070 19147 ERROR ...
(more)
2019-02-08 01:45:40 -0600 received badge  Enthusiast
2019-02-07 08:35:32 -0600 received badge  Popular Question (source)
2019-02-07 03:04:28 -0600 asked a question neutron server: number of workers

Dear All.

Currently, I have [DEFAULT] api_worker=6 in my /etc/neutron/neutron.conf.

But when I check systemctl status neutron-server.service, I get 39 workers, and there are 31 PostgreSQL connections for them. How do I reduce the number of workers? Actually, I want to bring the pgsql connections down to, say, 10.

My system: OpenStack Rocky, CentOS 7, 24 CPUs.

Sincerely

-bino-
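Two things worth noting, hedged against the Rocky defaults: the option name is plural (api_workers), RPC workers are sized separately and default to scaling with CPU count, and the PostgreSQL connection count is governed by the per-process pool options under [database]. A sketch of the relevant neutron.conf knobs (values are illustrative):

```ini
[DEFAULT]
# Plural spelling; options left unset default to scaling with CPU count.
api_workers = 6
rpc_workers = 2

[database]
# oslo.db connection pool, applied per worker process.
max_pool_size = 5
max_overflow = 10
```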

2019-02-07 02:47:03 -0600 commented question how keystone using gunicorn

Hmmm, one last check: my suspicion now falls on my pgsql. When no other service runs, keystone runs well. But once I turn on all the services (so there are 101 db connections), keystone gets stuck on every 'POST' request.

2019-02-06 18:47:19 -0600 commented question how keystone using gunicorn

(continued) As a second step, I set gunicorn to run with 25 workers and 25 threads (I have 12 cores).

And magically, every request now runs seamlessly.

Until now, I do not know where the glitch/problem is.
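The worker count the comment above landed on matches gunicorn's documented rule of thumb of (2 × cores) + 1 as a starting point; a quick check:

```python
# Rule-of-thumb gunicorn worker count from its docs: (2 x cores) + 1.
import multiprocessing


def suggested_workers(cores: int) -> int:
    """Starting-point worker count for a given number of CPU cores."""
    return 2 * cores + 1


print(suggested_workers(multiprocessing.cpu_count()))
```

For the 12-core box above this yields 25, the value that made the requests run seamlessly.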

2019-02-06 18:45:00 -0600 commented question how keystone using gunicorn

Sir. As a first step, I changed the value of wsgi.debug_middleware in keystone.conf to True. I found that the openstack CLI first sends a 'GET' request to keystone, followed by a 'POST' request. The first request runs well. The second request is received by keystone, but keystone never sends a response. (continued)

2019-02-05 18:54:00 -0600 commented question how keystone using gunicorn

But with curl, I got:

{"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://127.0.0.1:7001/v3/", "rel": "self"}]}}
2019-02-05 18:54:00 -0600 commented question how keystone using gunicorn

I tried to re-edit my post, but got a red pop-up box with the text 'null value in column "ip_addr" violates not-null constraint ...'

2019-02-05 18:21:12 -0600 received badge  Editor (source)
2019-02-05 05:24:57 -0600 commented question how keystone using gunicorn

Edited.

Please check whether the edit works.

2019-02-05 05:24:57 -0600 commented question how keystone using gunicorn

I tried to 'edit', but it didn't save.

I'll try once more.

2019-02-05 00:19:11 -0600 asked a question how keystone using gunicorn

Dear All. I'm trying to run keystone with gunicorn and nginx, but first I run it without nginx, just as a test.

Here are my results. Running keystone in debug mode:

[root@ceph-node1 ostackwsgi]# gunicorn --bind 127.0.0.1:7001 --log-level DEBUG kspublic:application
[2019-02-05 16:54:32 +0000] [875334] [DEBUG] Current configuration:
  proxy_protocol: False
  worker_connections: 1000
  statsd_host: None
  max_requests_jitter: 0
  post_fork: <function post_fork at 0x7f3d52c24230>
  errorlog: -
  enable_stdio_inheritance: False
  worker_class: sync
  ssl_version: 2
  suppress_ragged_eofs: True
  syslog: False
  syslog_facility: user
  when_ready: <function when_ready at 0x7f3d52c1bed8>
  pre_fork: <function pre_fork at 0x7f3d52c240c8>
  cert_reqs: 0
  preload_app: False
  keepalive: 2
  accesslog: None
  group: 0
  graceful_timeout: 30
  do_handshake_on_connect: False
  spew: False
  workers: 1
  proc_name: None
  sendfile: None
  pidfile: None
  umask: 0
  on_reload: <function on_reload at 0x7f3d52c1bd70>
  pre_exec: <function pre_exec at 0x7f3d52c247d0>
  worker_tmp_dir: None
  limit_request_fields: 100
  pythonpath: None
  on_exit: <function on_exit at 0x7f3d52c28050>
  config: None
  logconfig: None
  check_config: False
  statsd_prefix: 
  secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
  reload_engine: auto
  proxy_allow_ips: ['127.0.0.1']
  pre_request: <function pre_request at 0x7f3d52c24938>
  post_request: <function post_request at 0x7f3d52c24a28>
  forwarded_allow_ips: ['127.0.0.1']
  worker_int: <function worker_int at 0x7f3d52c24500>
  raw_paste_global_conf: []
  threads: 1
  max_requests: 0
  chdir: /opt/ostackwsgi
  daemon: False
  user: 0
  limit_request_line: 4094
  access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
  certfile: None
  on_starting: <function on_starting at 0x7f3d52c1bc08>
  post_worker_init: <function post_worker_init at 0x7f3d52c24398>
  child_exit: <function child_exit at 0x7f3d52c24b90>
  worker_exit: <function worker_exit at 0x7f3d52c24cf8>
  paste: None
  default_proc_name: kspublic:application
  syslog_addr: udp://localhost:514
  syslog_prefix: None
  ciphers: TLSv1
  worker_abort: <function worker_abort at 0x7f3d52c24668>
  loglevel: DEBUG
  bind: ['127.0.0.1:7001']
  raw_env: []
  initgroups: False
  capture_output: False
  reload: False
  limit_request_field_size: 8190
  nworkers_changed: <function nworkers_changed at 0x7f3d52c24e60>
  timeout: 30
  keyfile: None
  ca_certs: None
  tmp_upload_dir: None
  backlog: 2048
  logger_class: gunicorn.glogging.Logger
[2019-02-05 16:54:32 +0000] [875334] [INFO] Starting gunicorn 19.7.1
[2019-02-05 16:54:32 +0000] [875334] [DEBUG] Arbiter booted
[2019-02-05 16:54:32 +0000] [875334] [INFO] Listening at: http://127.0.0.1:7001 (875334)
[2019-02-05 16:54:32 +0000] [875334] [INFO] Using worker: sync
[2019-02-05 16:54:32 +0000] [875343] [INFO] Booting worker with pid: 875343
[2019-02-05 16:54:32 +0000] [875334] [DEBUG] 1 workers
[2019-02-05 16:55:20 +0000] [875343] [DEBUG] POST /v3/auth/tokens
[2019-02-05 16:56:24 +0000] [875334] [CRITICAL] WORKER TIMEOUT (pid:875975)
[2019-02-05 16:56:25 +0000] [876227] [INFO] Booting worker with pid: 876227
[2019-02-05 16:56:26 +0000] [876227] [DEBUG] GET /v3
[2019-02-05 16:56:26 +0000] [876227] [DEBUG] POST /v3/auth/tokens
[2019-02-05 16:56:57 +0000] [875334] [CRITICAL] WORKER TIMEOUT (pid:876227)
[2019-02-05 16:56:58 +0000] [876505] [INFO] Booting worker with pid: 876505
[2019-02-05 16:56:59 +0000] [876505] [DEBUG] POST /v3/auth/tokens
[2019-02-05 16:57:30 +0000] [875334] [CRITICAL] WORKER TIMEOUT (pid:876505)
[2019-02-05 16:57:31 +0000] [876772] [INFO] Booting worker with pid: 876772
[2019-02-05 16:57:33 +0000] [876772] [DEBUG] POST /v3/auth/tokens

Here is the openstack CLI run in verbose mode:

[root@ceph-node1 ~]# openstack user list -vv -d
START with options: [u'user', u'list', u'-vv ...
(more)
2019-02-05 00:19:11 -0600 commented question Horizon : alway back to login page

Well, my first fault was that I used the Django internal session engine with multiple processes. I changed to the db backend, and it 'fixed' the problem. But I think I've got another problem: keystone. I'll post that in another thread.
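The fix described (db-backed sessions) amounts to one line in local_settings.py; the sessions table must also exist in the Horizon database, e.g. created via manage.py migrate:

```python
# Store Horizon sessions in the database instead of per-process local
# memory, so every WSGI worker sees the same session.
SESSION_ENGINE = "django.contrib.sessions.backends.db"
```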

2019-02-01 08:27:50 -0600 asked a question Horizon : alway back to login page

Dear All

Opening http://mycloud/dashboard, I get redirected to http://mycloud/dashboard/auth/login/?next=/dashboard/

After authentication succeeds, I get http://mycloud/dashboard/project/

When I try to click 'API Access', I'm thrown back to the login page at http://mycloud/dashboard/auth/login/?next=/dashboard/project/api_access/

Wherever I click, I always end up back at the login page.

Please give me some clue to fix this problem.

Sincerely -bino-

Below is my openstack_dashboard/local/local_settings.py

import os
from django.utils.translation import ugettext_lazy as _
from horizon.utils import secret_key
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = True
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'horizon',
        'USER': 'packstack',
        'PASSWORD': 'L3tM31n',
        'HOST': '192.168.5.80',
        'PORT': '5432',
    }
}
COMPRESS_OFFLINE = True
SESSION_TIMEOUT = 1800
WEBROOT = '/dashboard/'
LOGIN_URL = WEBROOT + 'auth/login/'
LOGOUT_URL = WEBROOT + 'auth/logout/'
LOGIN_REDIRECT_URL = WEBROOT
ALLOWED_HOSTS = ['*']
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SECRET_KEY = secret_key.generate_or_read_from_file(
    os.path.join(LOCAL_PATH, '.secret_key_store'))
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
CACHES = {
    'default':{
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:7001/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
    'enable_quotas': True
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': True,
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "UTC"
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            'format': '%(levelname)s %(name)s %(message)s'
        },
        'operation': {
            'format': '%(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {

            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneauth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'oslo_policy': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },


        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'urllib3': {
            'handlers': ['null'],
            'propagate': False,
        },
        'chardet.charsetprober': {
            'handlers': ['null'],
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25 ...
(more)