
hamed's profile - activity

2020-04-05 03:25:46 -0500 asked a question: tripleo overcloud failed in step 5: debug container start step

Hi, I want to install OpenStack Stein on CentOS 7 with 3 controllers.
I run the following deploy command:

openstack overcloud deploy \
  --control-flavor control --compute-flavor compute \
  --templates /home/stack/generated-openstack-tripleo-heat-templates \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e environment.yaml \
  -e /home/stack/generated-openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/generated-openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /home/stack/templates/network-environment-overrides.yaml \
  --timeout 600 --ntp-server 0.asia.pool.ntp.org --verbose

In step 5 I get the following error:

TASK [Start containers for step 5] *********************************************
task path: /var/lib/mistral/overcloud/common_deploy_steps_tasks.yaml:568
ok: [overcloud-novacompute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
Friday 03 April 2020  18:01:25 +0430 (0:11:19.419)       1:54:53.976 **********

TASK [Debug output for task: Start containers for step 5] **********************
task path: /var/lib/mistral/overcloud/common_deploy_steps_tasks.yaml:585
fatal: [overcloud-novacompute-0]: FAILED! => {
    "failed_when_result": true,
    "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
        "stdout: c73edcb8b695302d6bf7e0789d3ccac269110697517e54990789f6707b627b5b",
        "",
        "stderr: ",
        "Created symlink from /etc/systemd/system/multi-user.target.wants/tripleo_nova_compute.service to /etc/systemd/system/tripleo_nova_compute.service.",
        "Created symlink from /etc/systemd/system/timers.target.wants/tripleo_nova_compute_healthcheck.timer to /etc/systemd/system/tripleo_nova_compute_healthcheck.timer.",
        "Created symlink from /etc/systemd/system/tripleo_nova_compute.service.requires/tripleo_nova_compute_healthcheck.timer to /etc/systemd/system/tripleo_nova_compute_healthcheck.timer.",
        "Error running ['podman', 'run', '--name', 'nova_wait_for_compute_service',
         '--label', 'config_id=tripleo_step5',
         '--label', 'container_name=nova_wait_for_compute_service',
         '--label', 'managed_by=tripleo-Compute',
         '--label', 'config_data={\"start_order\": 4,
           \"image\": \"172.20.128.1:8787/tripleostein/centos-binary-nova-compute:current-tripleo\",
           \"environment\": [\"__OS_DEBUG=false\", \"TRIPLEO_CONFIG_HASH=ba8c4fb3937f19883243859e5a49a26f-ba8c4fb3937f19883243859e5a49a26f\"],
           \"command\": \"/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py\",
           \"user\": \"nova\",
           \"volumes\": [\"/etc/hosts:/etc/hosts:ro\",
             \"/etc/localtime:/etc/localtime:ro\",
             \"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\",
             \"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro\",
             \"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\",
             \"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\",
             \"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\",
             \"/dev/log:/dev/log\",
             \"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\",
             \"/etc/puppet:/etc/puppet:ro\",
             \"/var/lib/config-data/nova_libvirt/etc/my.cnf.d/:/etc/my.cnf.d/:ro\",
             \"/var/lib/config-data/nova_libvirt/etc/nova/:/etc/nova/:ro\",
             \"/var/log/containers/nova:/var/log/nova\",
             \"/var/lib/container-config-scripts/:/container-config-scripts/\"],
           \"net\": \"host\", \"detach\": false}',
         '--conmon-pidfile=/var/run/nova_wait_for_compute_service.pid',
         '--log-driver', 'k8s-file',
         '--log-opt', 'path=/var/log/containers/stdouts/nova_wait_for_compute_service.log',
         '--env=__OS_DEBUG=false',
         '--env=TRIPLEO_CONFIG_HASH=ba8c4fb3937f19883243859e5a49a26f-ba8c4fb3937f19883243859e5a49a26f',
         '--net=host', '--user=nova',
         '--volume=/etc/hosts:/etc/hosts:ro',
         '--volume=/etc/localtime:/etc/localtime:ro',
         '--volume=/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro',
         '--volume=/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro',
         '--volume=/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
         '--volume=/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro',
         '--volume=/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro',
         '--volume=/dev/log:/dev/log',
         '--volume=/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro',
         '--volume ...
(more)
2019-09-23 08:10:51 -0500 received badge ● Famous Question (source)
2019-09-15 21:10:24 -0500 received badge ● Notable Question (source)
2019-08-09 06:06:28 -0500 received badge ● Popular Question (source)
2019-07-26 10:08:08 -0500 answered a question: Add metric to gnocchi

Problem solved. I had to update gnocchi_client.py and /publisher/data/gnocchi_resources.yaml in the /usr/lib/python2.7/dist-packages/ceilometer directory. The links below helped us: (https://docs.openstack.org/ceilometer...)
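For anyone hitting the same warnings: in a Queens-era Ceilometer, a meter is only forwarded to Gnocchi when it appears under some resource definition in gnocchi_resources.yaml. A minimal sketch of the shape of such an entry (the resource_type name and metric list here are illustrative, not the exact values I used):

```yaml
resources:
  - resource_type: loadbalancer        # illustrative; must match a resource type known to Gnocchi
    metrics:
      - 'network.services.lb.incoming.bytes'
      - 'network.services.lb.outgoing.bytes'
      - 'network.services.lb.active.connections'
```

The matching resource type also has to exist on the Gnocchi side, which is why gnocchi_client.py needed editing as well as the YAML file.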

2019-07-24 11:37:20 -0500 asked a question: Add metric to gnocchi

I enabled LBaaS v2 on Queens.
I need to get the LBaaS v2 metrics from Ceilometer.
I added the needed meters to the pollster.yaml file like below:

---
sources:
    - name: some_pollsters
      interval: 10
      meters:
        - cpu
        - cpu_l3_cache
        - memory.usage
        - network.incoming.bytes
        - network.incoming.packets
        - network.outgoing.bytes
        - network.outgoing.packets
        - disk.device.read.bytes
        - disk.device.read.requests
        - disk.device.write.bytes
        - disk.device.write.requests
        - hardware.cpu.util
        - hardware.memory.used
        - hardware.memory.total
        - hardware.memory.buffer
        - hardware.memory.cached
        - hardware.memory.swap.avail
        - hardware.memory.swap.total
        - hardware.system_stats.io.outgoing.blocks
        - hardware.system_stats.io.incoming.blocks
        - hardware.network.ip.incoming.datagrams
    - name: lb_pollsters
      interval: 1
      meters:
        - network.services.lb.outgoing.bytes
        - network.services.lb.incoming.bytes
        - network.services.lb.pool
        - network.services.lb.listener
        - network.services.lb.member
        - network.services.lb.health_monitor
        - network.services.lb.loadbalancer
        - network.services.lb.total.connections
        - network.services.lb.active.connections

In the ceilometer log:

2019-07-24 20:52:05.877 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.loadbalancer is not handled by Gnocchi
2019-07-24 20:52:08.302 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.health_monitor is not handled by Gnocchi
2019-07-24 20:52:09.095 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.total.connections is not handled by Gnocchi
2019-07-24 20:52:09.883 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.listener is not handled by Gnocchi
2019-07-24 20:52:09.899 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.incoming.bytes is not handled by Gnocchi
2019-07-24 20:52:10.831 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.pool is not handled by Gnocchi
2019-07-24 20:52:10.834 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.active.connections is not handled by Gnocchi
2019-07-24 20:52:10.834 15035 WARNING ceilometer.publisher.gnocchi [-] metric network.services.lb.outgoing.bytes is not handled by Gnocchi
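These "is not handled by Gnocchi" warnings come from the publisher's dispatch step: a meter is forwarded only if some resource definition in gnocchi_resources.yaml lists it. An illustrative sketch of that lookup (not ceilometer's actual code; the definitions list is a trimmed, hypothetical copy of the file's shape):

```python
# Hypothetical, trimmed copy of the gnocchi_resources.yaml shape.
RESOURCE_DEFINITIONS = [
    {"resource_type": "instance",
     "metrics": ["cpu", "memory.usage", "network.incoming.bytes"]},
]


def handling_resource_type(metric_name, definitions=RESOURCE_DEFINITIONS):
    """Return the resource_type whose definition lists metric_name, or None."""
    for definition in definitions:
        if metric_name in definition["metrics"]:
            return definition["resource_type"]
    # Falling through to None is what produces
    # "metric ... is not handled by Gnocchi" in the log.
    return None


print(handling_resource_type("cpu"))                       # instance
print(handling_resource_type("network.services.lb.pool"))  # None
```

So adding the network.services.lb.* meters to pollster.yaml alone is not enough; they also have to appear in a resource definition on the Gnocchi-publisher side.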

In the gnocchi log:

2019-07-24 20:51:57.738559   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 250, in wrapper
2019-07-24 20:51:57.738566   File "/usr/lib/python2.7/dist-packages/tooz/coordination.py", line 186, in _beat_forever_until_stopped
2019-07-24 20:51:57.738573 <type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'StopWatch'
2019-07-24 20:51:57.829859 Exception in thread Thread-1 (most likely raised during interpreter shutdown):
2019-07-24 20:51:57.829895 Traceback (most recent call last):
2019-07-24 20:51:57.829906   File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
2019-07-24 20:51:57.829914   File "/usr/lib/python2.7/threading.py", line 754, in run
2019-07-24 20:51:57.829919   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 250, in wrapper
2019-07-24 20:51:57.829927   File "/usr/lib/python2.7/dist-packages/tooz/coordination.py", line 186, in _beat_forever_until_stopped
2019-07-24 20:51:57.829938 <type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'StopWatch'

How can I fix this problem and get the LBaaS v2 metrics from Gnocchi?
Thanks.

2019-05-28 10:17:23 -0500 received badge ● Famous Question (source)
2019-05-02 04:25:33 -0500 received badge ● Famous Question (source)
2019-04-16 02:32:40 -0500 asked a question: use other nova scheduler

I have an OpenStack deployment with one controller and 9 computes; the version is Newton. I want 8 computes to keep using Newton nova, and the newest compute node to use the Rocky scheduler. For this, I created a VM on my controller with KVM and installed the Rocky nova packages on it. Now I want the newest node to use this new scheduler while the other services (keystone, glance, and so on) keep using the existing controller.
Do you have any ideas?
I want the compute node to see the Rocky nova as its nova scheduler. Thanks.

2019-04-16 02:24:04 -0500 received badge ● Famous Question (source)
2019-03-26 08:08:05 -0500 received badge ● Popular Question (source)
2019-03-26 08:08:05 -0500 received badge ● Notable Question (source)
2019-01-27 11:40:54 -0500 received badge ● Good Question (source)
2019-01-26 07:08:25 -0500 received badge ● Famous Question (source)
2019-01-26 05:11:00 -0500 asked a question: gnocchi grafana

I installed Gnocchi and want to use Grafana to visualize Gnocchi's data. I installed Grafana and added the Gnocchi plugin to it. In keystone.conf and gnocchi.conf, I set the [cors] section like below:

[cors]
allowed_origin = http://controller:3000
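For reference, the oslo.middleware [cors] section accepts more than allowed_origin; a fuller sketch (all values below are illustrative and should match your Grafana origin and headers):

```ini
[cors]
allowed_origin = http://controller:3000
allow_credentials = true
allow_methods = GET,POST,PUT,DELETE,OPTIONS
allow_headers = Content-Type,X-Auth-Token
expose_headers = X-Auth-Token,X-Subject-Token
```

After editing, the keystone and gnocchi API services have to be restarted for the [cors] change to take effect.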

I get the error below when I try to connect Grafana to the data source (Gnocchi):

Gnocchi error: No response status code, is CORS correctly configured ? (detail: [object Object])

I use OpenStack Queens on Ubuntu 16.04. I don't have any logs.
Please help me.


2018-12-26 09:55:44 -0500 received badge ● Famous Question (source)
2018-12-20 02:40:48 -0500 received badge ● Notable Question (source)
2018-12-02 07:19:20 -0500 received badge ● Famous Question (source)
2018-12-02 07:17:45 -0500 commented on a question: Add Magnum dashboard to current horizon dashboard
2018-12-01 08:35:10 -0500 received badge ● Student (source)
2018-11-25 10:21:10 -0500 received badge ● Popular Question (source)
2018-11-19 07:29:43 -0500 asked a question: ceilometer-notification-agent 500 internal error

Hello,
I use the command below:

openstack alarm create --name cpu_hi \
  --type gnocchi_resources_threshold \
  --description 'CPU High Average' \
  --metric cpu_util \
  --threshold 70.0 \
  --comparison-operator gt \
  --aggregation-method mean \
  --granularity 300 \
  --evaluation-periods 1 \
  --resource-type instance \
  --resource-id 97671a6c-5b88-46e5-ac6a-3f9a3b166eb8

The output of openstack alarm list is:

6211374f-2b9b-4224-8822-12262c143545 | gnocchi_resources_threshold                | cpu_hi                                 | insufficient data | low      | True

The alarm state stays at "insufficient data" and doesn't change. This is the tail of the ceilometer notification agent log (tail -f /var/log/ceilometer/ceilometer-notification-agent.log):

2018-11-19 16:55:21.540 3602 ERROR ceilometer.publisher.gnocchi [-] <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at 
 [no address given] to inform them of the time this error occurred,
 and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at controller Port 8041</address>
</body></html>
 (HTTP 500): ClientException: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
2018-11-17 07:23:04 -0500 received badge ● Taxonomist
2018-11-14 02:47:33 -0500 commented on an answer: gnocchi 500 internal server error

Thanks. I had the same problem, and used your answer and the earlier answer to solve it.

2018-11-14 02:45:12 -0500 received badge ● Notable Question (source)
2018-11-13 01:08:04 -0500 received badge ● Popular Question (source)
2018-11-12 12:49:27 -0500 commented on an answer: gnocchi 500 internal server error

Thank you very much. After your solution, I ran the command below:

apt install python-gnocchi

and added the export lines below to the admin-openrc file, for the 401 unauthenticated error:

export GNOCCHI_ENDPOINT=http://controller:8041
export OS_AUTH_TYPE=password
2018-11-12 12:08:14 -0500 received badge ● Scholar (source)
2018-11-12 01:06:54 -0500 asked a question: gnocchi 500 internal server error

Hi, I installed OpenStack Queens and installed Gnocchi.
When I run the gnocchi status command (or other gnocchi commands) I get the error below:

 root@Controller:~# gnocchi status
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at 
 [no address given] to inform them of the time this error occurred,
 and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at localhost Port 8041</address>
</body></html>
 (HTTP 500)

vim /etc/gnocchi/gnocchi.conf:

[DEFAULT]
log_dir = /var/log/gnocchi
coordination_url = redis://controller:6379

[api]
auth_mode = keystone

[database]
backend = sqlalchemy

# MariaDB connection info
[indexer]
url = mysql+pymysql://gnocchi:hamed@controller/gnocchi

[storage]
driver = file
file_basepath = /var/lib/gnocchi
#coordination_url = redis://controller:6379


# Keystone auth info
[keystone_authtoken]
service_token_roles_required = true
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = hamed
interface = internalURL
region_name = RegionOne

openstack version: queens
OS: ubuntu16.04

2018-09-22 07:34:25 -0500 received badge ● Notable Question (source)
2018-08-18 01:16:12 -0500 received badge ● Popular Question (source)
2018-08-15 03:12:44 -0500 asked a question: error when launch a magnum cluster

Hi
I use OpenStack Newton and installed Magnum and the other dependencies. I want to deploy a cluster with the k8s_fedora_atomic driver. I downloaded the fedora-atomic Newton image, and my controller OS is Ubuntu 16.04.
I created a cluster template, and when I deploy a cluster from that template, the cluster status goes to CREATE_FAILED. Below is my magnum-conductor log:

2018-08-14 14:26:28.999 26824 INFO magnum.conductor.handlers.cluster_conductor [req-bbf07a4c-a600-4797-8a40-7f3d6af64d6c 0f48db6ef8ee4d0c86ffe747579fe78c 7f5b52b3f61a4f71a00fc3f6c6d9232c default - -] Cluster has been deleted, stack_id: 53319edd-0a2e-4c3d-bca5-e30ef0544386
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall [req-7c3a7669-aa16-4b9a-b8f2-bd5570909c40 0f48db6ef8ee4d0c86ffe747579fe78c 7f5b52b3f61a4f71a00fc3f6c6d9232c default - -] Fixed interval looping call 'magnum.conductor.handlers.cluster_conductor.HeatPoller.poll_and_check' failed: ValueError: Field `master_addresses[0]' cannot be None
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 136, in _run_loop
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/conductor/handlers/cluster_conductor.py", line 330, in poll_and_check
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     self._sync_cluster_and_template_status(stack)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/conductor/handlers/cluster_conductor.py", line 399, in _sync_cluster_and_template_status
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     self.cluster)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/drivers/common/template_def.py", line 325, in update_outputs
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     output.set_output(stack, cluster_template, cluster)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/drivers/common/k8s_fedora_template_def.py", line 39, in set_output
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     self).set_output(stack, cluster_template, cluster)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/drivers/common/template_def.py", line 122, in set_output
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     setattr(cluster, self.cluster_attr, output_value)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 72, in setter
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     field_value = field.coerce(self, name, value)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 195, in coerce
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     return self._type.coerce(obj, attr, value)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 637, in coerce
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall     coerced_list.extend(value)
2018-08-14 15:17:23.846 26824 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 1190, in extend
2018-08-14 15:17:23.846 26824 ERROR oslo.service ...
(more)
2018-08-11 03:27:51 -0500 received badge ● Notable Question (source)
2018-08-09 16:40:34 -0500 received badge ● Popular Question (source)
2018-08-07 06:26:41 -0500 commented on a question: magnum coreos cluster error

Controller OS = Ubuntu 16.04.
I cloned magnum from git (with the -b newton-eol tag) and copied the magnum directory to /usr/lib/python2.7/dist-packages/ with the command below:
cp -r magnum/magnum/ /usr/lib/python2.7/dist-packages/

2018-08-07 06:21:04 -0500 asked a question: magnum coreos cluster error

Hi
I use OpenStack Newton and installed Magnum and the other dependencies. I want to deploy a cluster with CoreOS. I downloaded a CoreOS image and set its os_distro attribute to coreos.
I created a cluster template, and when I deploy a cluster from that template, the cluster status goes to CREATE_FAILED.
Below is my magnum-conductor log:

{"explanation": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "code": 400, "error": {"message": "The Parameter (kube_dashboard_enabled) was not defined in template.", "traceback": null, "type": "UnknownUserParameter"}, "title": "Bad Request"}
 log_http_response /usr/lib/python2.7/dist-packages/heatclient/common/http.py:155
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server [req-d96ad159-6a15-4fa0-a673-2f7298610326 - - - - -] Exception during message handling: InvalidParameterValue: ERROR: The Parameter (kube_dashboard_enabled) was not defined in template.
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/magnum/conductor/handlers/cluster_conductor.py", line 176, in cluster_create
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server     raise e
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server InvalidParameterValue: ERROR: The Parameter (kube_dashboard_enabled) was not defined in template.
2018-08-07 15:30:11.881 23932 ERROR oslo_messaging.rpc.server
2018-06-22 16:19:04 -0500 received badge ● Famous Question (source)
2018-06-13 03:36:07 -0500 received badge ● Notable Question (source)
2018-06-04 00:35:43 -0500 received badge ● Notable Question (source)
2018-06-02 06:21:00 -0500 received badge ● Popular Question (source)
2018-06-01 03:45:20 -0500 received badge ● Famous Question (source)
2018-05-30 03:10:24 -0500 received badge ● Notable Question (source)