shoubam's profile - activity

2017-10-11 06:19:57 -0600 received badge  Enlightened (source)
2017-10-11 06:19:57 -0600 received badge  Good Answer (source)
2014-11-22 01:54:36 -0600 marked best answer glance requests: 404 Not Found

I am following this guide and am having trouble doing anything with glance. I first stumbled across this when I tried to add an image like this:

[root@cloud ~]# glance -d image-create --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu" <precise-server-cloudimg-amd64.img 
   HTTP/1.1 404 Not Found
    date: Fri, 05 Jul 2013 15:27:46 GMT
    content-length: 52
    content-type: text/plain; charset=UTF-8
    x-openstack-request-id: req-ba4ea9c8-ccab-486c-ada2-5a8ac00d8d5c

    404 Not Found

    The resource could not be found.



    Request returned failure status.
    404 Not Found
    The resource could not be found.
        (HTTP 404)

Doing a glance image-list gives the same error. In /var/log/glance/api.log, I find

   2013-07-05 17:27:46.613 1653 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: POST //v1/images Accept:  process_request /usr/lib/python2.7/site-packages/glance/api/middleware/version_negotiation.py:46
    2013-07-05 17:27:46.618 1653 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning process_request /usr/lib/python2.7/site-packages/glance/api/middleware/version_negotiation.py:59
    2013-07-05 17:27:46.619 1653 DEBUG glance.api.middleware.version_negotiation [-] Matched version: v1 process_request /usr/lib/python2.7/site-packages/glance/api/middleware/version_negotiation.py:71
    2013-07-05 17:27:46.620 1653 DEBUG glance.api.middleware.version_negotiation [-] new uri /v1/images process_request /usr/lib/python2.7/site-packages/glance/api/middleware/version_negotiation.py:72

While the image-create command is running, I can see in top that glance is busy for a while, sitting at the top of the process list. So it seems that glance is connecting and doing work, but then somehow fails.
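
For completeness, a check I could do next (just a debugging sketch; the host/port below is a placeholder, not necessarily my endpoint) would be to verify that the image endpoint registered in keystone matches what glance-api actually listens on, and to let the client talk to an explicit endpoint with full debug output:

    # list the service catalog and look at the image (glance) endpoint URLs
    keystone endpoint-list
    # bypass the catalog with an explicit endpoint and show request/response details
    glance --debug --os-image-url http://127.0.0.1:9292 image-list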

On a side note, not sure if that's related: when stopping the glance daemons, I get

    2013-07-05 16:05:39.703 3624 CRITICAL glance [-] [Errno 13] Permission denied
    2013-07-05 16:05:39.703 3624 TRACE glance Traceback (most recent call last):
    2013-07-05 16:05:39.703 3624 TRACE glance   File "/usr/bin/glance-api", line 65, in <module>
    2013-07-05 16:05:39.703 3624 TRACE glance     server.wait()
    2013-07-05 16:05:39.703 3624 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 271, in wait
    2013-07-05 16:05:39.703 3624 TRACE glance     self.wait_on_children()
    2013-07-05 16:05:39.703 3624 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 244, in wait_on_children
    2013-07-05 16:05:39.703 3624 TRACE glance     pid, status = os.wait()
    2013-07-05 16:05:39.703 3624 TRACE glance     pid, status = os.wait()
    2013-07-05 16:05:39.703 3624 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 210, in kill_children
    2013-07-05 16:05:39.703 3624 TRACE glance     os.killpg(pgid, signal.SIGTERM)
    2013-07-05 16:05:39.703 3624 TRACE glance OSError: [Errno 13] Permission denied
    2013-07-05 16:05:39.703 3624 TRACE glance

I had a look at the code, and it's the os.killpg(pgid, signal.SIGTERM) call that causes this, but nonetheless the daemon is stopped.
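
In case the ownership matters here (just a guess on my side, since EPERM from os.killpg usually means the calling process is not allowed to signal that process group), this is how I would compare the user and process group of the running daemons with the user issuing the stop:

    # show owner, pid and process group of the running glance daemons
    ps -o user,pid,pgid,cmd -C glance-api,glance-registry
    # the user issuing the service stop, for comparison
    whoami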

2014-09-21 01:27:19 -0600 marked best answer cinder-volume localhost:5672 unreachable

I am using devstack on ubuntu 12.04. When starting rejoin-stack.sh after a reboot, the script hangs at the command

cd /opt/stack/cinder && /usr/local/bin/cinder-volume --config-file /etc/cinder/cinder.conf

with the message

2013-07-08 16:55:12.935 ERROR cinder.openstack.common.rpc.common   [req-94a7b477-794b-4679-864d-dd6deb7ba5e8 None None] AMQP server on localhost:5672 is  unreachable: Socket closed. Trying again in 1 seconds.
2013-07-08 16:55:13.939 INFO cinder.openstack.common.rpc.common [req-94a7b477-794b-4679-864d-dd6deb7ba5e8 None None] Reconnecting to AMQP server on localhost:5672
2013-07-08 16:55:14.006 DEBUG amqp [-] Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL.  See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2011 VMware, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'consumer_cancel_notify': True, u'publisher_confirms': True, u'basic.nack': True}, u'platform': u'Erlang/OTP', u'version': u'2.7.1'}, mechanisms: [u'PLAIN', u'AMQPLAIN'], locales: [u'en_US'] from (pid=4554) _start /usr/local/lib/python2.7/dist-packages/amqp/connection.py:706

It is true that nothing is listening (at least on IPv4):

root@devstack:~# netstat -tulpen |grep 5672
tcp6       0      0 :::5672                 :::*                    LISTEN      104        9733        1285/beam.smp

What am I missing? What could I do to further debug this?
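
Things I was thinking of trying next (assuming the stock devstack rabbitmq setup, so treat this as a sketch):

    # is the broker healthy, and do the configured credentials still exist?
    sudo rabbitmqctl status
    sudo rabbitmqctl list_users
    # can a plain TCP connection to the port cinder uses be opened at all?
    telnet localhost 5672
    # which rabbit host and credentials is cinder actually configured with?
    grep -i rabbit /etc/cinder/cinder.conf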

2014-06-06 04:04:12 -0600 received badge  Great Question (source)
2014-04-23 03:26:24 -0600 received badge  Nice Question (source)
2014-01-23 18:42:55 -0600 marked best answer glance Authorization Failed with Errno 104

I am following this guide and have configured everything as in the guide. When I try something like glance image-list, I get

Authorization Failed: [Errno 104] Connection reset by peer

As I am new to openstack, I am not quite sure how to debug this. What is glance connecting to: keystone, or the mysql database? If it's keystone, how can I check which credentials glance is using (I guess the ones provided in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf?), and how can I check those credentials against the verifier directly? Any hints would be appreciated.
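
To make the question a bit more concrete, this is roughly how I imagine the credentials could be tested against keystone directly (a sketch only; host, port, tenant and password below are placeholders, not necessarily my values):

    # request a token with the same credentials configured in the
    # keystone_authtoken/paste sections of the glance config files
    curl -s -X POST http://127.0.0.1:35357/v2.0/tokens \
      -H "Content-Type: application/json" \
      -d '{"auth": {"tenantName": "service",
                    "passwordCredentials": {"username": "glance",
                                            "password": "GLANCE_PASS"}}}'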

I also found:

# keystone user-list
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
[Errno 104] Connection reset by peer

Maybe this is connected to the glance authorization error?

2013-11-29 16:50:44 -0600 received badge  Nice Question (source)
2013-09-19 08:14:16 -0600 received badge  Famous Question (source)
2013-09-04 08:14:12 -0600 received badge  Good Question (source)
2013-08-29 12:46:52 -0600 received badge  Notable Question (source)
2013-08-27 09:39:16 -0600 received badge  Nice Question (source)
2013-08-25 19:30:13 -0600 received badge  Famous Question (source)
2013-08-23 08:22:38 -0600 received badge  Popular Question (source)
2013-08-22 19:29:33 -0600 received badge  Famous Question (source)
2013-08-22 06:02:14 -0600 received badge  Famous Question (source)
2013-08-20 08:44:23 -0600 received badge  Notable Question (source)
2013-08-20 08:13:22 -0600 asked a question horizon not displaying console

Using grizzly on centos 6.4, I am seeing this strange behaviour with horizon: when I try to view the console of an instance (novncproxy), all I get is

Instance Console

If console is not responding to keyboard input: click the grey status bar below. Click here to show only console

And then nothing. I can "Click here", which gets me a new browser window, with the message

Server disconnected (code: 1006)

and a greyed out Send CtrlAltDel button. Now when I reload this window (typing Ctrl+Shift+r in firefox), I get a functioning console.

When I do this with chrome, I get the console window within horizon, but still "Failed to connect to server (code: 1006)". When I then click the "click here" link, a new browser tab opens and the console is working.

So while I have the basic functionality, I would greatly appreciate having the console working _inside_ horizon _without_ the need to reload. Any hints? Anything I could debug?

2013-08-20 07:57:17 -0600 answered a question nova-novncproxy not connecting to instance

So it turns out this issue is related to me sitting in an office behind a web proxy. I have firefox configured to automatically detect proxy settings for the network. In general it works fine: I can connect to outside websites AND openstack horizon. horizon itself works fine, but it does not show the console window. I can load the console in an extra window, still no connection. But if I disable the web proxy altogether and then reload the window with the console, it works.

So it's kind of solved, but I would really like to know why it behaves like this. I will make a separate question.
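
If it helps anyone with a similar setup (this is only my guess at the mechanism, not verified): the console is carried over a WebSocket connection to nova-novncproxy, and many office web proxies do not pass the WebSocket upgrade, so excluding the horizon/novnc address from the proxy may be enough instead of disabling the proxy entirely, e.g.:

    # example only, using the novnc base URL host from my setup:
    # Firefox -> Preferences -> Advanced -> Network -> Settings -> "No Proxy for": 192.168.222.10
    # or, for command line tools on the same workstation:
    export no_proxy=192.168.222.10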

2013-08-19 02:16:51 -0600 received badge  Popular Question (source)
2013-08-13 19:57:48 -0600 received badge  Taxonomist
2013-08-10 20:56:47 -0600 received badge  Notable Question (source)
2013-08-10 20:56:47 -0600 received badge  Famous Question (source)
2013-08-10 20:56:47 -0600 received badge  Popular Question (source)
2013-08-10 10:29:42 -0600 received badge  Popular Question (source)
2013-08-10 10:29:42 -0600 received badge  Famous Question (source)
2013-08-10 10:29:42 -0600 received badge  Notable Question (source)
2013-08-09 09:49:53 -0600 received badge  Notable Question (source)
2013-08-08 04:36:27 -0600 asked a question nova-novncproxy not connecting to instance

Following up on my earlier question: I have openstack grizzly on centos 6.4. Things are working, except for console connections to the instances. Instances are up and running, and I can connect to their console with a vnc viewer, e.g. remmina. But I cannot connect to their console via novnc. All it gives me is a "Server disconnected (code: 1006)".

hosts involved:

192.168.222.10 haproxy
192.168.222.17 all nova services except nova-compute
192.168.222.140 nova-compute

From an experimental devstack install, I can see that novncproxy connects directly to the vnc port of the instance. With this system, that is not the case: novnc is not even attempting to connect to the instance's console, as I can see from an empty tcpdump on the compute host:

tcpdump -i br-admin-222 port 5907
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-admin-222, link-type EN10MB (Ethernet), capture size 65535 bytes

I am sure it's the right port, because I can connect to that port with remmina and see the console. Also, I can connect to that port with telnet from the host running nova-novncproxy:

$ telnet 192.168.222.140 5907
Trying 192.168.222.140...
Connected to 192.168.222.140.
Escape character is '^]'.
RFB 003.008

So it's not a networking issue.

nova.conf on nova-novncproxy host is

novncproxy_port=6080
novncproxy_base_url=http://192.168.222.10:6080/vnc_auto.html
novncproxy_host=192.168.222.17

nova.conf on nova-compute host is

vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.222.140
novncproxy_base_url=http://192.168.222.10:6080/vnc_auto.html

If someone could answer any of the following questions, I would be grateful:

  • how does nova-novncproxy know which host and which port to connect to for a certain instance? Where is this stored, and how does that information get to novnc? (see the sketch after this list)
  • I have started nova-novncproxy in the foreground with --debug and --verbose to get more info, but don't get any errors or warnings. How can I get more info on where exactly this is failing?
  • Any other hints on how to debug this?
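
Regarding the first bullet, a sketch of what I have pieced together so far (please correct me if this is wrong): the browser gets a console URL containing a token, and it is that token which nova-consoleauth later maps back to the compute host and vnc port for the proxy:

# ask nova for a console URL; the token in it is what novncproxy later
# exchanges (via nova-consoleauth) for the compute host and vnc port
nova get-vnc-console <instance-name-or-uuid> novnc
# the service that stores the token -> (host, port) mapping must be up:
nova-manage service list | grep consoleauth
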
2013-08-08 02:17:27 -0600 marked best answer nova-compute and nova-scheduler keep going down

nova-compute and nova-scheduler keep going into "down" state. After disabling and enabling them, they are listed as up for a short period of time (looks like one refresh cycle), but then stay down, and I don't have access to my instances any more.

    $ nova service-list;date
    +------------------+---------------+----------+---------+-------+----------------------------+
    | Binary           | Host          | Zone     | Status  | State | Updated_at                 |
    +------------------+---------------+----------+---------+-------+----------------------------+
    | nova-compute     | xxxxxx-0001   | nova     | enabled | down  | 2013-08-02T13:32:33.000000 |
    | nova-conductor   | osint-nova-01 | internal | enabled | up    | 2013-08-05T07:33:46.000000 |
    | nova-console     | osint-nova-01 | internal | enabled | up    | 2013-08-05T07:33:40.000000 |
    | nova-consoleauth | osint-nova-01 | internal | enabled | up    | 2013-08-05T07:33:40.000000 |
    | nova-scheduler   | osint-nova-01 | internal | enabled | down  | 2013-08-05T06:50:21.000000 |
    +------------------+---------------+----------+---------+-------+----------------------------+

The processes are running though:

ps aux |grep nova-compute
nova      8079  0.1  0.0 354668 43952 ?        S    15:28   0:00 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf --logfile /var/log/nova/compute.log

$ ps aux |grep scheduler
nova     18130  0.0  2.6 381808 50684 ?        S    08:49   0:01 /usr/bin/python /usr/bin/nova-scheduler --config-file /etc/nova/nova.conf --logfile /var/log/nova/scheduler.log

and /var/log/nova/compute.log tell me

    2013-08-02 15:28:50.794 8079 DEBUG nova.servicegroup.api [-] ServiceGroup driver defined as an instance of db __new__ /usr/lib/python2.6/site-packages/nova/servicegroup/api.py:61
    2013-08-02 15:28:50.908 8079 INFO nova.manager [-] Skipping periodic task _periodic_update_dns because its interval is negative
    2013-08-02 15:28:51.004 8079 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'
    2013-08-02 15:28:51.093 DEBUG nova.openstack.common.rpc.amqp [req-981212fc-da44-4652-a598-36f3c510d08d None None] Making synchronous call on conductor ... multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:583
    2013-08-02 15:28:51.094 DEBUG nova.openstack.common.rpc.amqp [req-981212fc-da44-4652-a598-36f3c510d08d None None] MSG_ID is c9fcef41e877470189f39d3d2eeb340a multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:586
    2013-08-02 15:28:51.095 DEBUG nova.openstack.common.rpc.amqp [req-981212fc-da44-4652-a598-36f3c510d08d None None] UNIQUE_ID is 91380e8de6c44b66965da9dc7ec3d4f0. _add_unique_id /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:337
    2013-08-02 15:28:51.095 DEBUG nova.openstack.common.rpc.amqp [req-981212fc-da44-4652-a598-36f3c510d08d None None] Pool creating new connection create /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:75
    2013-08-02 15:28:51.112 INFO nova.openstack.common.rpc.common [req-981212fc-da44-4652-a598-36f3c510d08d None None] Connected to AMQP server on osint-mq-01:5672

which I don't find very helpful. Also, /var/log/nova $ grep -i error * gives no output.
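
One more data point I could collect (a sketch, assuming the default MySQL backend and the standard nova database name): since the up/down state is derived from the updated_at heartbeat in the services table, comparing it with the controller's clock in UTC should show whether the heartbeats actually stop or just look stale:

    mysql -u nova -p nova -e 'SELECT host, `binary`, report_count, updated_at FROM services;'
    # nova compares updated_at against "now" in UTC (service_down_time, 60s by default)
    date -u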

Scheduler and compute are not running on the same host. On the scheduler host, logs tell me:

root@nova-01:/var/log/nova $ grep -i error * |grep 2013-08-05 |grep -v api.log
scheduler.log:2013-08-05 08:29:41.173 17137 DEBUG nova.service [-] publish_errors : False wait /usr/lib/python2.6/site-packages/nova/service.py:205
scheduler.log:2013-08-05 08:29:41.193 17137 DEBUG nova.service [-] fatal_exception_format_errors : False wait /usr/lib/python2.6/site-packages/nova/service.py:205

Clocks are in sync thanks to ntp. BUT: the ... (more)

2013-08-08 02:17:26 -0600 received badge  Nice Answer (source)
2013-08-08 02:16:57 -0600 received badge  Famous Question (source)
2013-08-07 13:23:31 -0600 received badge  Popular Question (source)
2013-08-07 07:40:25 -0600 edited question novnc: Server disconnected (code: 1006)

Using openstack grizzly on centos 6.4, I have a strange problem with console connections to the instances. I am using novnc to view an instance's console. It has been working, but just recently stopped. The only message I get is "Server disconnected (code: 1006)" in the console window.

The configuration is

novncproxy_port=6080
novncproxy_base_url=http://192.168.222.10:6080/vnc_auto.html
novncproxy_host=192.168.222.17

but since it has worked before, I don't think it's related to this config.

In the compute log I find lots of ERRORS like

compute.log:2013-08-06 10:46:12.051 41171 ERROR nova.compute.manager [-] Instance 38c8c8bd-bf42-f7c2-7371-99c2b80c61a8 found in the hypervisor, but not in the database

but none of the instances I am using shows up, so I am guessing this is unrelated (see this question).

For debugging purposes, I started nova-novncproxy by hand with the following command:

python /usr/bin/nova-novncproxy --verbose --debug  --web /usr/share/novnc/

After loading the console page, I see

    162: 192.168.10.117: new handler Process
     162: 192.168.10.117: "GET /vnc_auto.html?token=f36824e5-ad18-46ab-9d09-c93966d9b44e&title=cirros01(7b8eb08a-d308-4fad-ac86-3f3567f49a33) HTTP/1.1" 200 -
     163: 192.168.10.117: new handler Process
     164: 192.168.10.117: new handler Process
     163: 192.168.10.117: "GET /include/base.css HTTP/1.1" 200 -
     164: 192.168.10.117: "GET /include/vnc.js HTTP/1.1" 200 -
     165: 192.168.10.117: new handler Process
     166: 192.168.10.117: new handler Process
     167: 192.168.10.117: new handler Process
     168: 192.168.10.117: new handler Process
     165: 192.168.10.117: "GET /include/util.js HTTP/1.1" 200 -
     169: 192.168.10.117: new handler Process
     170: 192.168.10.117: new handler Process
     171: 192.168.10.117: new handler Process
     166: 192.168.10.117: "GET /include/webutil.js HTTP/1.1" 200 -
     167: 192.168.10.117: "GET /include/base64.js HTTP/1.1" 200 -
     172: 192.168.10.117: new handler Process
     173: 192.168.10.117: new handler Process
     169: 192.168.10.117: "GET /include/des.js HTTP/1.1" 200 -
     168: 192.168.10.117: "GET /include/websock.js HTTP/1.1" 200 -
     170: 192.168.10.117: "GET /include/input.js HTTP/1.1" 200 -
     172: 192.168.10.117: "GET /include/rfb.js HTTP/1.1" 200 -
     173: 192.168.10.117: "GET /include/jsunzip.js HTTP/1.1" 200 -
     171: 192.168.10.117: "GET /include/display.js HTTP/1.1" 200 -

nova-compute runs on a different machine, but I can see that there is a vnc listener:

$ lsof -i |grep 590 |grep -v localhost
qemu-kvm  59941    qemu   15u  IPv4 191922025      0t0  TCP *:5907 (LISTEN)
$ telnet localhost 5907
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
RFB 003.008

and I can connect to it from the machine running novncproxy:

$ telnet 192.168.222.140 5907
Trying 192.168.222.140...
Connected to 192.168.222.140.
Escape character ...
(more)
2013-08-07 06:39:30 -0600 received badge  Notable Question (source)
2013-08-07 03:47:51 -0600 commented question novnc: Server disconnected (code: 1006)

I did `nova-manage db sync`. It did not help with the error message, nor with the actual problem.