Unable to login to new instance via console/SSH [closed]
I am following this guide ( http://docs.openstack.org/icehouse/install-guide/install/apt/content/ ) to set up an OpenStack environment and was able to start a CirrOS instance.
However, I cannot log in to the CirrOS instance using the SSH keypair ("ssh -i key.pem cirros@ip-address"): it still asks for a password (I can log in with the CirrOS default password), and I see that no .ssh/authorized_keys file was stored in the instance. I thought it was a CirrOS bug, but I have also tried booting an instance from an Ubuntu Cloud image and SSH with the keypair does not work there either (even though I assigned the keypair as well).
I have also tried other means of injecting the SSH keypair (and creating a user at boot time) using examples from http://cloudinit.readthedocs.org/en/latest/topics/examples.html, passed via --user-data at nova boot. None of them worked (user creation did not work either).
Now I am wondering whether my environment is configured correctly to inject SSH keypairs or cloud-init files into newly created instances.
How can I verify this? I don't know where to begin. I just want to be able to log in to my instance by any means (console or SSH).
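For reference, this is roughly what I check from inside the instance via the console (a sketch using the standard EC2-style metadata paths; adjust if your deployment differs):

```shell
# Run these INSIDE the guest (via the console), not on the controller.

# 1. Does the metadata service answer at all?
curl -v http://169.254.169.254/2009-04-04/meta-data/instance-id

# 2. If it does, the public key nova should inject is served here:
curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
```

If the first request already fails (as my boot log below suggests), cloud-init can never fetch the key, which would explain the missing .ssh/authorized_keys.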
Update: console log when booting the instance:
2014-08-06 10:25:54,216 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: http error [500]
2014-08-06 10:25:55,383 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: http error [500]
2014-08-06 10:25:56,664 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: http error [500]
Is there something wrong with my metadata service? It is actually up and running:
root@controller:~# lsof -i:8775
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nova-api 1084 nova 9u IPv4 13534 0t0 TCP *:8775 (LISTEN)
nova-api 1775 nova 9u IPv4 13534 0t0 TCP *:8775 (LISTEN)
nova-api 1776 nova 9u IPv4 13534 0t0 TCP *:8775 (LISTEN)
nova-api 1777 nova 9u IPv4 13534 0t0 TCP *:8775 (LISTEN)
nova-api 1778 nova 9u IPv4 13534 0t0 TCP *:8775 (LISTEN)
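On the network node I also inspected the metadata agent's configuration (a sketch; the file is usually /etc/neutron/metadata_agent.ini on Ubuntu, but the path may vary by packaging):

```shell
# Settings the neutron metadata agent uses to reach keystone and nova-api;
# a wrong auth_region or shared secret is a common cause of HTTP 500 here.
grep -E 'auth_url|auth_region|nova_metadata_ip|metadata_proxy_shared_secret' \
    /etc/neutron/metadata_agent.ini
```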
There is also an error message in /etc/neutron/metadata-agent.log on the network node:
2014-08-06 19:12:23.430 3759 TRACE neutron.agent.metadata.agent     endpoint_type=self.endpoint_type)
2014-08-06 19:12:23.430 3759 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 81, in url_for
2014-08-06 19:12:23.430 3759 TRACE neutron.agent.metadata.agent     raise exceptions.EndpointNotFound()
2014-08-06 19:12:23.430 3759 TRACE neutron.agent.metadata.agent EndpointNotFound: Could not find Service or Region in Service Catalog.
2014-08-06 19:12:23.430 3759 TRACE neutron.agent.metadata.agent
2014-08-06 19:12:23.432 3759 INFO eventlet.wsgi.server [req-31d4c1ac-0d5d-4f09-ad50-d80a23d362a9 None] 10.0.99.30,<local> - - [06/Aug/2014 19:12:23] "GET /2009-04-04/meta-data/instance-id HTTP/1.1" 500 229 0.201596
I have double-checked that auth_region matches the region registered in Keystone (RegionOne).
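This is how I compared the two (Icehouse-era keystone CLI, run on the controller with admin credentials sourced; the exact output columns may differ):

```shell
# Region as registered in the keystone service catalog:
keystone endpoint-list

# Region the metadata agent is configured with (file path may vary):
grep auth_region /etc/neutron/metadata_agent.ini
```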
One more question: does the metadata service (in this case on the controller node) need to be reachable (pingable) from the instance?
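To test that myself I would try something like the following from inside the instance (a sketch; replace the placeholder address with your controller's management IP). Note that 169.254.169.254 is proxied by the neutron metadata agent, so it can answer even when the controller itself is not directly pingable from the guest:

```shell
# From inside the instance:
ping -c 3 CONTROLLER_IP   # CONTROLLER_IP is a placeholder for your controller's address

# This request goes through the neutron metadata proxy, not directly to the controller:
curl -m 10 http://169.254.169.254/2009-04-04/meta-data/
```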