
Mutty Putty's profile - activity

2018-11-24 15:06:15 -0600 received badge  Student (source)
2017-03-01 15:44:31 -0600 received badge  Famous Question (source)
2017-02-25 15:08:11 -0600 received badge  Famous Question (source)
2017-02-16 05:08:03 -0600 received badge  Famous Question (source)
2017-01-10 23:10:45 -0600 commented answer Swarm cluster-create fails

I think the same: the agent is unable to send the signal back to Heat that the stack was successfully created. Could you please describe the troubleshooting procedure in detail? I have noticed that, out of the 5 stacks of the cluster, 2 stacks complete successfully and quickly, and after a long time the rest fail.
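For anyone hitting the same timeout, a rough way to confirm that theory is to log in to the swarm master VM (if it boots at all) and check whether the signalling step in the user data ever ran. A minimal sketch, assuming the fedora-atomic image's default user and standard cloud-init log paths (both are assumptions, not taken from this thread):

    # check whether the boot-time scripts finished and whether the VM can reach Heat at all
    ssh fedora@<master-ip>                                      # <master-ip> is a placeholder
    sudo tail -n 100 /var/log/cloud-init-output.log             # did the user-data scripts complete?
    sudo journalctl -u cloud-init --no-pager | grep -iE 'error|signal'
    # the wait-condition signal is an HTTP call back to the Heat API on the controller (port 8004)
    curl -s -o /dev/null -w '%{http_code}\n' http://controller:8004/

If that last call cannot reach the controller, the notification never arrives and the stack times out exactly as described, no matter how healthy the master itself is.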

2017-01-10 23:10:45 -0600 received badge  Commentator
2017-01-10 23:07:47 -0600 commented answer Swarm cluster-create fails

@Zaneb, thank you so much for your reply. Just to make sure we are on the same page: which agent are you talking about (the agent inside the swarm-cluster master)? I am using the fedora-atomic-latest image, following the RDO CentOS 7 OpenStack Newton guide.

2017-01-08 10:27:23 -0600 received badge  Notable Question (source)
2017-01-05 23:31:20 -0600 received badge  Famous Question (source)
2017-01-04 10:58:36 -0600 received badge  Popular Question (source)
2017-01-04 01:50:10 -0600 answered a question OpenStack newton http://controller/dashboard is unreachable CentOS 7

It was the firewall, which wasted a whole day of mine. I disabled the firewall and it works perfectly. Could anyone please help me open the firewall for the dashboard service only?
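If the goal is to keep the firewall running and only open what the dashboard needs, something along these lines should work on CentOS 7 with firewalld (a sketch; add the https service only if Horizon is served over TLS):

    # open the web ports instead of disabling firewalld entirely
    sudo firewall-cmd --permanent --add-service=http
    sudo firewall-cmd --permanent --add-service=https
    sudo firewall-cmd --reload
    sudo firewall-cmd --list-services    # verify that http (and https) now appear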

2017-01-03 11:17:48 -0600 asked a question OpenStack newton http://controller/dashboard is unreachable CentOS 7

Hi everyone,

I am trying to install OpenStack Newton on a CentOS 7 server. Everything is working fine, but I am unable to connect to the dashboard.

Here are all my observations.

    [root@controller hpchost1]# service httpd status -l
Redirecting to /bin/systemctl status  -l httpd.service
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/httpd.service.d
           └─openstack-dashboard.conf
   Active: active (running) since Wed 2017-01-04 05:46:13 UTC; 20s ago
     Docs: man:httpd(8)
           man:apachectl(8)
  Process: 4113 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
  Process: 4157 ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py compress --force (code=exited, status=0/SUCCESS)
  Process: 4133 ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py collectstatic --noinput --clear (code=exited, status=0/SUCCESS)
 Main PID: 4180 (httpd)
   Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"
   CGroup: /system.slice/httpd.service
           ├─4180 /usr/sbin/httpd -DFOREGROUND
           ├─4182 /usr/sbin/httpd -DFOREGROUND
           ├─4183 (wsgi:keystone- -DFOREGROUND
           ├─4184 (wsgi:keystone- -DFOREGROUND
           ├─4185 (wsgi:keystone- -DFOREGROUND
           ├─4186 (wsgi:keystone- -DFOREGROUND
           ├─4187 (wsgi:keystone- -DFOREGROUND
           ├─4188 (wsgi:keystone- -DFOREGROUND
           ├─4189 (wsgi:keystone- -DFOREGROUND
           ├─4190 (wsgi:keystone- -DFOREGROUND
           ├─4191 (wsgi:keystone- -DFOREGROUND
           ├─4192 (wsgi:keystone- -DFOREGROUND
           ├─4193 /usr/sbin/httpd -DFOREGROUND
           ├─4194 /usr/sbin/httpd -DFOREGROUND
           ├─4195 /usr/sbin/httpd -DFOREGROUND
           ├─4196 /usr/sbin/httpd -DFOREGROUND
           └─4197 /usr/sbin/httpd -DFOREGROUND

Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/actions-delete-selected.template.html'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/action-list.scss'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/split-button.html'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/actions.controller.js'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/actions.directive.js'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/actions.detail.mock.html'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/actions.custom.mock.html'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/action-list.module.js'
Jan 04 05:45:59 controller python[4133]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/action-list/button-tooltip.directive.js'
Jan 04 05:46:13 controller systemd[1]: Started The Apache HTTP Server.

My httpd error log:

[Wed Jan 04 05:52:03.403879 2017] [core:notice] [pid 4601] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
[Wed Jan 04 05:52:03.405399 2017] [suexec:notice] [pid 4601] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jan 04 05:52 ...
2016-12-13 16:45:28 -0600 received badge  Notable Question (source)
2016-12-05 03:35:42 -0600 received badge  Notable Question (source)
2016-12-05 03:34:07 -0600 commented answer Swarm cluster-create fails

@ashish, did you get any answer?

2016-12-05 03:33:37 -0600 received badge  Notable Question (source)
2016-12-01 02:51:50 -0600 received badge  Popular Question (source)
2016-11-25 04:10:50 -0600 received badge  Popular Question (source)
2016-11-24 07:39:30 -0600 asked a question Swarm cluster-create fails

Hi Everyone,

I have installed and configured the Container Infrastructure Management service (code-named Magnum) of OpenStack Newton on CentOS 7, but I am unable to create a Docker Swarm cluster.

Here are all my observations.

 [hpchost1@controller ~]$ magnum cluster-show swarm-cluster_test
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property            | Value                                                                                                                                                                                                                      |
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| status              | CREATE_FAILED                                                                                                                                                                                                              |
| cluster_template_id | 15bdcb55-a334-4699-ad86-f98ffa338589                                                                                                                                                                                       |
| uuid                | 3e6c1f12-e9e9-4fbd-8262-d32d89aa7c3e                                                                                                                                                                                       |
| stack_id            | 89003fb7-a094-4ed8-b744-00121e8cd577                                                                                                                                                                                       |
| status_reason       | Timed out                                                                                                                                                                                                                  |
| created_at          | 2017-01-10T10:48:11+00:00                                                                                                                                                                                                  |
| name                | swarm-cluster_test                                                                                                                                                                                                         |
| updated_at          | 2017-01-10T11:48:19+00:00                                                                                                                                                                                                  |
| discovery_url       | https://discovery.etcd.io/77b79dca8dbece069155914d61f77d9b                                                                                                                                                                 |
| faults              | {'swarm_masters': 'CREATE aborted (Task create from ResourceGroup "swarm_masters" Stack "swarm-cluster_test-nftqb3b77za6" [89003fb7-a094-4ed8-b744-00121e8cd577] Timed out)', '0': 'resources[0]: Stack CREATE cancelled'} |
| api_address         | -                                                                                                                                                                                                                          |
| coe_version         | -                                                                                                                                                                                                                          |
| master_addresses    | []                                                                                                                                                                                                                         |
| create_timeout      | 60                                                                                                                                                                                                                         |
| node_addresses      | []                                                                                                                                                                                                                         |
| master_count        | 1                                                                                                                                                                                                                          |
| container_version   | -                                                                                                                                                                                                                          |
| node_count          | 1                                                                                                                                                                                                                          |
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

next...

 [hpchost1@controller ~]$ heat stack-list -n | grep swarm-cluster_test
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
| 89003fb7-a094-4ed8-b744-00121e8cd577 | swarm-cluster_test-nftqb3b77za6                                                                            | CREATE_FAILED   | 2017-01-10T10:48:09Z | None                 | None                                 |
| 10655a90-845e-4b7b-8fce-2f7e7a97ae15 | swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws                                                 | CREATE_FAILED   | 2017-01-10T10:48:29Z | None                 | 89003fb7-a094-4ed8-b744-00121e8cd577 |
| eeebab48-bfe8-449c-a39f-341d99e88074 | swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws-0-v2vaqnbd3h52                                  | CREATE_FAILED   | 2017-01-10T10:48:38Z | None                 | 10655a90-845e-4b7b-8fce-2f7e7a97ae15 |
| dd19704c-9261-4a6c-8ad2-6f667c78697e | swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws-0-v2vaqnbd3h52-etcd_address_switch-hzykf3thzdha | CREATE_COMPLETE | 2017-01-10T10:48:44Z | None                 | eeebab48-bfe8-449c-a39f-341d99e88074 |
| 03c01386-71e2-41c6-ad28-6d63c66ee110 | swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws-0-v2vaqnbd3h52-api_address_switch-lprrlk3rh6p6  | CREATE_COMPLETE | 2017-01-10T10:48:48Z | None                 | eeebab48-bfe8-449c-a39f-341d99e88074 |

next...

[hpchost1@controller ~]$ heat resource-list swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws-0-v2vaqnbd3h52 | grep "FAILED"
    WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
    | master_wait_condition               |                                      | OS::Heat::WaitCondition                      | CREATE_FAILED   | 2017-01-10T10:48:39Z |

    [hpchost1@controller ~]$ heat resource-list swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws | grep "FAILED"
    WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
    | 0             | eeebab48-bfe8-449c-a39f-341d99e88074 | file:///usr/lib/python2.7/site-packages/magnum/drivers/swarm_fedora_atomic_v1/templates/swarmmaster.yaml | CREATE_FAILED   | 2017-01-10T10:48:29Z |

    [hpchost1@controller ~]$ heat resource-list swarm-cluster_test-nftqb3b77za6 | grep "FAILED"
    WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
    | swarm_masters       | 10655a90-845e-4b7b-8fce-2f7e7a97ae15                                                | OS::Heat::ResourceGroup                         | CREATE_FAILED   | 2017-01-10T10:48:11Z |
    [hpchost1@controller ~]$

next...

 [hpchost1@controller ~]$ heat resource-show swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws-0-v2vaqnbd3h52 OS::Heat::WaitCondition
WARNING (shell) "heat resource-show" is deprecated, please use "openstack stack resource show" instead
Stack or resource not found: swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws-0-v2vaqnbd3h52 OS::Heat::WaitCondition
[hpchost1@controller ~]$ heat resource-show swarm-cluster_test-nftqb3b77za6 OS::Heat::ResourceGroup
WARNING (shell) "heat resource-show" is deprecated, please use "openstack stack resource show" instead
Stack or resource not found: swarm-cluster_test-nftqb3b77za6 OS::Heat::ResourceGroup
[hpchost1@controller ~]$

[hpchost1@controller ~]$ heat resource-show swarm-cluster_test-nftqb3b77za6 swarm_masters
WARNING (shell) "heat resource-show" is deprecated, please use "openstack stack resource show" instead
+------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property               | Value                                                                                                                                                                      |
+------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attributes             | {                                                                                                                                                                          |
|                        |   "attributes": null,                                                                                                                                                      |
|                        |   "refs": null,                                                                                                                                                            |
|                        |   "refs_map": null,                                                                                                                                                        |
|                        |   "removed_rsrc_list": []                                                                                                                                                  |
|                        | }                                                                                                                                                                          |
| creation_time          | 2017-01-10T10:48:11Z                                                                                                                                                       |
| description            |                                                                                                                                                                            |
| links                  | http://controller:8004/v1/282838c02c784f7ab8c89dd37ccfa87c/stacks/swarm-cluster_test-nftqb3b77za6/89003fb7-a094-4ed8-b744-00121e8cd577/resources/swarm_masters (self)      |
|                        | http://controller:8004/v1/282838c02c784f7ab8c89dd37ccfa87c/stacks/swarm-cluster_test-nftqb3b77za6/89003fb7-a094-4ed8-b744-00121e8cd577 (stack)                             |
|                        | http://controller:8004/v1/282838c02c784f7ab8c89dd37ccfa87c/stacks/swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws/10655a90-845e-4b7b-8fce-2f7e7a97ae15 (nested) |
| logical_resource_id    | swarm_masters                                                                                                                                                              |
| physical_resource_id   | 10655a90-845e-4b7b-8fce-2f7e7a97ae15                                                                                                                                       |
| required_by            | etcd_address_switch                                                                                                                                                        |
|                        | api_address_switch                                                                                                                                                         |
| resource_name          | swarm_masters                                                                                                                                                              |
| resource_status        | CREATE_FAILED                                                                                                                                                              |
| resource_status_reason | CREATE aborted (Task create from ResourceGroup "swarm_masters" Stack "swarm-cluster_test-nftqb3b77za6" [89003fb7-a094-4ed8-b744-00121e8cd577] Timed out)                   |
| resource_type          | OS::Heat::ResourceGroup                                                                                                                                                    |
| updated_time           | 2017-01-10T10:48:11Z                                                                                                                                                       |
+------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

heat-engine.log on the controller node:

2017-01-10 11:48:09.256 4105 INFO heat.engine.scheduler [req-d44de78e-2835-43ad-8669-bfeaa96ccb20 - - - - -] Task create from ResourceGroup "swarm_masters" Stack "swarm-cluster_test-nftqb3b77za6" [89003fb7-a094-4ed8-b744-00121e8cd577] timed out
2017-01-10 11:48:09.286 4102 INFO heat.engine.service [req-d0e8b866-3e6e-44f0-a9f3-b2be5c40a981 641b42e296d544fdbf9fe7e8bfc65c57 282838c02c784f7ab8c89dd37ccfa87c - - -] Starting cancel of updating stack swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws
2017-01-10 11:48:09.336 4102 INFO heat.engine.stack [req-d0e8b866-3e6e-44f0-a9f3-b2be5c40a981 641b42e296d544fdbf9fe7e8bfc65c57 282838c02c784f7ab8c89dd37ccfa87c - - -] Stack CREATE FAILED (swarm-cluster_test-nftqb3b77za6-swarm_masters-ukvpffafljws): Stack CREATE cancelled
2017-01-10 11:48:09.383 4105 INFO heat.engine.stack [req-d44de78e-2835-43ad-8669-bfeaa96ccb20 - - - - -] Stack CREATE FAILED ...
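Two follow-up checks that may help narrow this down (a sketch; the "openstack stack failures list" subcommand and the magnum --timeout flag are assumptions based on the Newton-era clients, so check --help first):

    # dump the full error text of every failed resource across the nested stacks
    openstack stack failures list swarm-cluster_test-nftqb3b77za6 --long
    # retry the cluster with a larger create timeout (minutes), since the masters hit the 60-minute limit shown above
    magnum cluster-create --name swarm-cluster_test2 \
        --cluster-template <template-name-or-id> --timeout 120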
2016-11-24 05:30:24 -0600 received badge  Popular Question (source)
2016-11-23 10:35:54 -0600 answered a question Cinder-volume going down from up
I set the RabbitMQ durable queues option to true. That solved my problem.
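For anyone else hitting this, the change boils down to a single oslo.messaging option in cinder.conf on the node running cinder-volume (a sketch, assuming the option in question is amqp_durable_queues and that crudini is installed; editing the file by hand works just as well):

    # enable durable AMQP queues for cinder and restart the volume service
    sudo crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit amqp_durable_queues True
    sudo systemctl restart cinder-volume

Note that if the queues already exist as non-durable on the RabbitMQ side, they may need to be deleted before the durable declaration is accepted.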
2016-11-23 10:31:32 -0600 asked a question OpenStack newton baremetal service, ironic installation doubt

Hi everyone,

I am new to the OpenStack bare metal service, Ironic. I am trying to set up the Ironic service of OpenStack Newton on Ubuntu 16.04 using (http://docs.openstack.org/project-ins...).

According to the guide, the components (ironic-api, ironic-conductor and python-ironicclient) run on one node, typically the Compute service's compute node. But since I have installed the database service on the controller node, I have to create the Ironic database on the controller node, along with the ironic user, the identity service endpoints, and the associated configuration. Please correct me if I am wrong.
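For what it is worth, the controller-side part of that usually looks roughly like the following (a sketch following the generic install-guide pattern; IRONIC_DBPASS and the prompted user password are placeholders):

    # database for ironic, created on the controller where MariaDB runs
    mysql -u root -p -e "CREATE DATABASE ironic;"
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' IDENTIFIED BY 'IRONIC_DBPASS';"
    # identity user, role and endpoints for the bare metal service (the API listens on port 6385)
    openstack user create --domain default --password-prompt ironic
    openstack role add --project service --user ironic admin
    openstack service create --name ironic --description "Bare Metal" baremetal
    openstack endpoint create --region RegionOne baremetal public http://controller:6385
    openstack endpoint create --region RegionOne baremetal internal http://controller:6385
    openstack endpoint create --region RegionOne baremetal admin http://controller:6385

The ironic-api and ironic-conductor packages themselves can then live on whichever node the guide suggests; only the database and keystone objects have to be created where those services already run.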

My queries are:

  1. According to the guide, does it mean that I need to install ironic-api, ironic-conductor and python-ironicclient on only one compute node of my cloud environment?
  2. Will it be a problem if I install those components on the controller node?
  3. Is the Object Storage service (Swift) mandatory for the Ironic service?
  4. Can the same compute node be used to provision both bare metal and VM instances?

Thanks in advance.

Any kind of help will be appreciated.

2016-11-23 01:22:36 -0600 commented question Cinder-volume going down from up

Thanks, @Bernd Bausch. I set the RabbitMQ durable queues option to true. That solved my problem.

2016-11-21 09:56:23 -0600 commented question Cinder-volume going down from up

Thank you so much, @Bernd Bausch. I had seen the blog you mentioned earlier as well. I did not know about durable queues in RabbitMQ, so I thought that changing the RabbitMQ configuration would impact my overall environment; that is why I was afraid to change it. Anyway, I will give it a try.

2016-11-20 07:45:02 -0600 asked a question Cinder-volume going down from up

I am deploying OpenStack Newton on Ubuntu 16.04.

My cinder-volume service suddenly goes from the up state to down.

If I restart the services, it shows up for a few seconds and then goes down again. I have checked the time service and the message queue service, but I could not find the reason.

Time synchronization output:

root@hpc-cluster-host3:/home/hpchost3# chronyc sources

210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample

^? controller                    0  10     0   10y     +0ns[   +0ns] +/-    0ns

root@controller:/home/hpchost1# rabbitmqctl status

Status of node rabbit@controller ...
[{pid,2366},

 {running_applications,[{rabbit,"RabbitMQ","3.5.7"},

                        {mnesia,"MNESIA  CXC 138 12","4.13.3"},
                        {xmerl,"XML parser","1.3.10"},
                        {os_mon,"CPO  CXC 138 46","2.4"},
                        {sasl,"SASL  CXC 138 11","2.7"},
                        {stdlib,"ERTS  CXC 138 10","2.8"},
                        {kernel,"ERTS  CXC 138 10","4.2"}]},
 {os,{unix,linux}},

 {erlang_version,"Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:32:32] [async-threads:64] [kernel-poll:true]\n"},
 {memory,[{total,690022024},

          {connection_readers,2326320},
          {connection_writers,621008},
          {connection_channels,2070648},
          {connection_other,7142160},
          {queue_procs,5266552},
          {queue_slave_procs,0},
          {plugins,0},
          {other_proc,15665616},
          {mnesia,947344},
          {mgmt_db,0},
          {msg_index,202288},
          {other_ets,1163008},
          {binary,607076944},
          {code,16972695},
          {atom,654217},
          {other_system,29913224}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,40444759244},
 {disk_free_limit,50000000},
 {disk_free,833303617536},
 {file_descriptors,[{total_limit,65436},
                    {total_used,246},
                    {sockets_limit,58890},
                    {sockets_used,244}]},
 {processes,[{limit,1048576},{used,3414}]},
 {run_queue,0},
 {uptime,536692}]

root@controller:/home/hpchost1# ps -ef | grep 2366

rabbitmq  2366  1597  5 Nov14 ?        07:28:21 /usr/lib/erlang/erts-7.3/bin/beam.smp -W w -A 64 -P 1048576 -K true -B i -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.5.7/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@controller -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@controller.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@controller-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.5.7/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@controller-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@controller" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
rabbitmq  2639  2366  0 Nov14 ?        00:00:06 inet_gethost 4
root     18126 19972  0 18:30 pts/0    00:00:00 grep --color=auto 2366

root@hpc-cluster-host3:/home/hpchost3# lsof -i :5672|grep hpc-cluster-host3

cinder-vo 5872 cinder    9u  IPv4  23304      0t0  TCP hpc-cluster-host3:36232->controller:amqp (ESTABLISHED)

Cinder service list on the controller node:

hpchost1@controller:~$ cinder service-list

+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller            | nova | enabled | up    | 2016-11-20T11:09:54.000000 | -               |
| cinder-volume    | hpc-cluster-host3@lvm | nova | enabled | up    | 2016-11-20T11:09:27.000000 | -               |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+

hpchost1@controller:~$ cinder service-list

+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller            | nova | enabled | up    | 2016-11-20T11:11:15.000000 | -               |
| cinder-volume    | hpc-cluster-host3@lvm | nova | enabled | down  | 2016-11-20T11:09:27.000000 | -               |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+

cinder-volume log file:

2016-11-20 16:39:27.724 5859 WARNING oslo_reports.guru_meditation_report [-] Guru ...
2016-11-20 04:21:06 -0600 received badge  Enthusiast
2016-11-19 10:44:32 -0600 commented answer Cinder Volume service is down on Controller Node

When I restart the cinder-related services on both the controller and the storage node, cinder-volume initially shows up for a few seconds; after that it goes down again. I think it is related to thread startup. Please help me, I am stuck.

@bcollins, thanks.

2016-11-19 10:30:37 -0600 commented answer Cinder Volume service is down on Controller Node

root@hpc-cluster-host3:/home/hpchost3# pvdisplay

--- Physical volume ---
PV Name               /dev/sda4
VG Name               cinder-volumes
PV Size               725.70 GiB / not usable 4.57 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              185777

2016-11-19 10:28:49 -0600 commented answer Cinder Volume service is down on Controller Node

It's not a time issue; here is the output:

chronyc sources

210 Number of sources = 1

MS Name/IP address Stratum Poll Reach LastRx Last sample

^? controller 0 7 0 10y +0ns[ +0ns] +/-

2016-11-18 08:44:45 -0600 answered a question Cinder Volume service is down on Controller Node

Hi everyone,

I got the same error, "cinder volume is down". If I restart the service, it shows up for a few seconds and then goes back to the down state.

Here is my cinder-volume.log:

2016-11-18 19:00:26.629 2245 CRITICAL cinder [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] AttributeError: 'NoneType' object has no attribute 'cleanup'

2016-11-18 19:00:26.629 2245 ERROR cinder Traceback (most recent call last):

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/bin/cinder-volume", line 10, in <module>

2016-11-18 19:00:26.629 2245 ERROR cinder sys.exit(main())

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 109, in main

2016-11-18 19:00:26.629 2245 ERROR cinder launcher.launch_service(server)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 520, in launch_service

2016-11-18 19:00:26.629 2245 ERROR cinder self._start_child(wrap)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 494, in _start_child

2016-11-18 19:00:26.629 2245 ERROR cinder self.launcher.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 236, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.services.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 687, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder service.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 419, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.rpcserver.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 268, in wrapper

2016-11-18 19:00:26.629 2245 ERROR cinder log_after, timeout_timer)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 188, in run_once

2016-11-18 19:00:26.629 2245 ERROR cinder post_fn = fn()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 267, in <lambda>

2016-11-18 19:00:26.629 2245 ERROR cinder states[state].run_once(lambda: fn(self, *args, **kwargs),

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 452, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.listener.cleanup()

2016-11-18 19:00:26.629 2245 ERROR cinder AttributeError: 'NoneType' object has no attribute 'cleanup'

2016-11-18 19:00:26.629 2245 ERROR cinder

2016-11-18 19:00:26.636 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Caught SIGTERM, stopping children

2016-11-18 19:00:26.642 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Waiting on 1 children to exit

2016-11-18 19:00:26.651 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Child 2245 killed by signal 15

2016-11-18 19:00:28.207 3310 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for ...

2016-11-18 08:34:47 -0600 answered a question why is the cinder-volume service down ?

Hi everyone,

I got the same error, "cinder volume is down". If I restart the service, it shows up for a few seconds and then goes back to the down state.

Here is my cinder-volume.log:

2016-11-18 19:00:26.629 2245 CRITICAL cinder [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] AttributeError: 'NoneType' object has no attribute 'cleanup'

2016-11-18 19:00:26.629 2245 ERROR cinder Traceback (most recent call last):

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/bin/cinder-volume", line 10, in <module>

2016-11-18 19:00:26.629 2245 ERROR cinder sys.exit(main())

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 109, in main

2016-11-18 19:00:26.629 2245 ERROR cinder launcher.launch_service(server)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 520, in launch_service

2016-11-18 19:00:26.629 2245 ERROR cinder self._start_child(wrap)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 494, in _start_child

2016-11-18 19:00:26.629 2245 ERROR cinder self.launcher.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 236, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.services.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 687, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder service.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 419, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.rpcserver.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 268, in wrapper

2016-11-18 19:00:26.629 2245 ERROR cinder log_after, timeout_timer)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 188, in run_once

2016-11-18 19:00:26.629 2245 ERROR cinder post_fn = fn()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 267, in <lambda>

2016-11-18 19:00:26.629 2245 ERROR cinder states[state].run_once(lambda: fn(self, *args, **kwargs),

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 452, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.listener.cleanup()

2016-11-18 19:00:26.629 2245 ERROR cinder AttributeError: 'NoneType' object has no attribute 'cleanup'

2016-11-18 19:00:26.629 2245 ERROR cinder

2016-11-18 19:00:26.636 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Caught SIGTERM, stopping children

2016-11-18 19:00:26.642 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Waiting on 1 children to exit

2016-11-18 19:00:26.651 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Child 2245 killed by signal 15

2016-11-18 19:00:28.207 3310 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for ...

2016-11-18 08:32:08 -0600 received badge  Editor (source)
2016-11-18 08:28:28 -0600 answered a question volume service is down

Hi everyone,

I got the same error, "cinder volume is down". If I restart the service, it shows up for a few seconds and then goes back to the down state.

Here is my cinder-volume.log:

2016-11-18 19:00:26.629 2245 CRITICAL cinder [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] AttributeError: 'NoneType' object has no attribute 'cleanup'

2016-11-18 19:00:26.629 2245 ERROR cinder Traceback (most recent call last):

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/bin/cinder-volume", line 10, in <module>

2016-11-18 19:00:26.629 2245 ERROR cinder sys.exit(main())

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 109, in main

2016-11-18 19:00:26.629 2245 ERROR cinder launcher.launch_service(server)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 520, in launch_service

2016-11-18 19:00:26.629 2245 ERROR cinder self._start_child(wrap)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 494, in _start_child

2016-11-18 19:00:26.629 2245 ERROR cinder self.launcher.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 236, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.services.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 687, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder service.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 419, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.rpcserver.wait()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 268, in wrapper

2016-11-18 19:00:26.629 2245 ERROR cinder log_after, timeout_timer)

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 188, in run_once

2016-11-18 19:00:26.629 2245 ERROR cinder post_fn = fn()

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 267, in <lambda>

2016-11-18 19:00:26.629 2245 ERROR cinder states[state].run_once(lambda: fn(self, *args, **kwargs),

2016-11-18 19:00:26.629 2245 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 452, in wait

2016-11-18 19:00:26.629 2245 ERROR cinder self.listener.cleanup()

2016-11-18 19:00:26.629 2245 ERROR cinder AttributeError: 'NoneType' object has no attribute 'cleanup'

2016-11-18 19:00:26.629 2245 ERROR cinder

2016-11-18 19:00:26.636 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Caught SIGTERM, stopping children

2016-11-18 19:00:26.642 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Waiting on 1 children to exit

2016-11-18 19:00:26.651 2230 INFO oslo_service.service [req-8c979f13-7cc1-4e02-81ec-0ae8dc16e097 - - - - -] Child 2245 killed by signal 15

2016-11-18 19:00:28.207 3310 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for ...

2016-11-14 07:48:56 -0600 commented answer HTTPNotFound (HTTP 404) error when run ceilometer meter-list

Thanks, I solved the problem; app.wsgi is in a different directory.
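In case someone else on Ubuntu hits the same thing, the file can be located through the package manager rather than by guessing the path (a sketch; the package layout may differ between releases):

    # ask dpkg which installed package ships app.wsgi, and at what path
    dpkg -S app.wsgi
    # or search the likely directories directly
    find /usr/lib/python2.7 /usr/share -name app.wsgi 2>/dev/null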

2016-11-14 07:24:29 -0600 commented answer HTTPNotFound (HTTP 404) error when run ceilometer meter-list

Hi, I encountered the same problem when I run ceilometer meter-list. I tried to follow your solution, but I could not find the /usr/lib/python2.7/site-packages/ceilometer/api/app.wsgi file. I am using Ubuntu Xenial with OpenStack Newton.

Any kind of help will be appreciated. Thanks in advance.

2016-07-29 05:36:04 -0600 received badge  Famous Question (source)