
ssh public key can't be injected into instance

asked 2014-10-29 05:13:04 -0500 by D20K

updated 2014-10-29 09:37:10 -0500

I installed Icehouse on a Windows 8.1 host with four VMware VMs (controller, compute, network, and block storage nodes).

But I cannot connect to the instance with the SSH private key, so I checked the instance console log and found the following.

When the instance is launched, the console log shows that its requests to http://169.254.169.254/2009-04-04/ins... fail:

checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 18.98. request failed
failed 2/20: up 21.51. request failed
failed 3/20: up 23.76. request failed
failed 4/20: up 26.02. request failed
failed 5/20: up 28.28. request failed
failed 6/20: up 30.51. request failed
failed 7/20: up 32.93. request failed
failed 8/20: up 35.16. request failed
failed 9/20: up 37.38. request failed

As far as I know, when an instance boots it contacts the nova metadata service, and that is how the SSH public key is injected.
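More precisely, the key is not pushed into the disk image; the guest fetches it itself over HTTP during boot. Roughly, the CirrOS init scripts (and cloud-init on other images) do something like this sketch, where the target user and path vary per image:

$ curl -s http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key >> /home/cirros/.ssh/authorized_keys

If the metadata requests fail, as in the log above, nothing is ever written to authorized_keys.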

So I checked, and the SSH public key was not injected into the instance correctly; the console log shows only the generated host keys:

=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnKr+B121EUSFAhTWSljdvK8RjBZfm7FhcKtMq0HpIHukUdx06gwqJIcAPztF2f3PsyvNwunVuTkoehvH+9Dnu/IfXBsESY1LFzrIjtINLxOzh6zYnAt8DQbUmINEQSXykbUWfsX1c+tn6cgkXSX2kVEBv7qRgOHuDGpR+vpj146Gi0= root@cirros
ssh-dss AAAAB3NzaC1kc3MAAACBAKF8eavLiWdZLzr9ODrya0QTvEv+QCh4p58pzNAVduiy5N32m3sSuyLTfa0gHP+b9D0QmbCfQVNXj1uLW/sUMCe5/0leIwLhQOn1B4yAkPQKEXiXTTAepvKBa9Lyotv/0SNyaHICrgBsC8ElK8XR1ArX6c6fyP03O32/+g/g3IBNAAAAFQCWQXaKsZz7qYh7tk8dakZPDzXkuQAAAIEAm0/CIfBy5AhhPiqVbOoBMBYeCofAQ0mAKjv7g8IVWYGosp8/lspqYP6975DBF4wsQq67eTl6KRI6oFDtJdHOcCy0ATcx/f/eoJjM85E16fkz+VfXEcKFDFG/XHJeUfsBjScmC2N0z8m90GTijcO6r88pnG2QCp0cSNXKr7ij7nIAAACAP9c62Q3JZxMYTtdoLhwC0EY85dEBxmaUO2mmDXajTKxNPrPFAHuUO+lInw9O8JAhzEwo3mo7chTesoXWziUQB3uhkhTw3j8JHN5HBBvCJtzcRaisp1NhwwaR9leiBGdF+XicPOo4okzZiCx1pAlVEvRM0PXfMqIvgc0ZvvfjyHw= root@cirros
-----END SSH HOST KEY KEYS-----

So I checked the metadata section of the admin guide ( http://docs.openstack.org/admin-guide... ); it says that in multi-host mode I should set metadata_host = 127.0.0.1 in /etc/nova/nova.conf.

Is this the correct configuration? I am also not sure whether the setting belongs on the controller node or the compute nodes.
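To see what is actually in effect, here is a quick sketch for checking the current values on each node (keys that are absent fall back to their defaults):

$ grep -E '^(metadata_host|metadata_listen|metadata_port|enabled_apis)' /etc/nova/nova.conf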

On the network node:

$ ip netns exec qrouter-router-id   iptables -t nat -S | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

$ ip netns exec qrouter-router-id  netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      42020/python

$ ps -ef | grep 42020
root     42020     1  0 17:03 ?        00:00:00 /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/166fa743-68e0-41ee-961f-91d039999fa2.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=166fa743-68e0-41ee-961f-91d039999fa2 --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-166fa743-68e0-41ee-961f-91d039999fa2.log --log-dir=/var/log/neutron
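One extra check (a sketch, using the same placeholder router-id): curl the proxy from inside the namespace. As far as I know, even a 404 here shows the proxy and the metadata agent behind it are responding, because only a request coming from an instance's fixed IP can be mapped to a Neutron port:

$ ip netns exec qrouter-router-id curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9697/2009-04-04/instance-id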

On the controller node:

root@controller:~# netstat -lntp | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      12893/python

root@controller:~# ps -ef | grep 12893
nova     12893     1  1 22:28 ?        00:00:32 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova     12899 12893  0 22:28 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova     12900 12893  0 22:28 ?        00:00:07 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova     12907 12893  0 22:28 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
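As a sanity check that this listener really answers HTTP (a sketch; to my knowledge an unauthenticated request to / returns the list of supported metadata versions rather than an error):

$ curl -s http://127.0.0.1:8775/
(expected: a short list of date-based versions such as 2009-04-04, ending with "latest")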

Comments

On my multi-node IceHouse ML2/OVS/VXLAN system, nova.conf has metadata_host=192.168.1.127 (the IP of the controller). Create any SSH keypair in the Dashboard and assign it to the instance. You have a different problem: access to the nova metadata server.

dbaxps ( 2014-10-29 05:40:03 -0500 )
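A minimal sketch of that change (192.168.1.127 is the controller IP from the comment above; replace it with your own, and crudini is used here only as a convenient way to edit nova.conf):

$ crudini --set /etc/nova/nova.conf DEFAULT metadata_host 192.168.1.127
$ service nova-api restart   # on the controller; restart nova-compute on the computes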

The message should look like: checking http://169.254.169.254/2009-04-04/met...

dbaxps ( 2014-10-29 05:42:28 -0500 )

@dbaxps So "metadata_host=(IP of controller)" should be added to nova.conf on all nodes?

D20K ( 2014-10-29 08:06:46 -0500 )

That's correct. Does the CirrOS VM still complain about 169.254.169.254 at boot?

dbaxps ( 2014-10-29 08:11:06 -0500 )

@dbaxps Yes, the problem is not fixed.

D20K ( 2014-10-29 08:32:57 -0500 )

2 answers


answered 2014-10-29 09:24:01 -0500 by bishoy

Make sure that nova-api-metadata is stopped if you have enabled_apis=..,..,..,metadata in nova.conf, since both would then try to serve metadata. Also, what is the OS of the image? Try a prebuilt image template. If you built it yourself and it is a CentOS or Red Hat image, please show us the output of cat /etc/sysconfig/network.
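A sketch of those checks on the controller (Ubuntu service names assumed; adjust for your distro):

$ grep ^enabled_apis /etc/nova/nova.conf
$ service nova-api-metadata status
$ service nova-api-metadata stop   # only if it is running alongside nova-api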


Comments

Yes, I already have "enabled_apis=...,metadata" in nova.conf. I launched both CirrOS and an Ubuntu 14.04 cloud server; both show the same metadata errors.

D20K ( 2014-10-29 09:42:07 -0500 )

So stop nova-api-metadata on all nodes where it is running. Please also check that metadata_host is defined as the controller on all nodes, the computes included, using the external or public IP of the controller, and restart the services. Then use the key again when launching the instance.

bishoy ( 2014-10-29 10:35:26 -0500 )

Make sure the firewall is configured correctly in nova.conf: firewall_driver=nova.virt.firewall.NoopFirewallDriver and security_group_api=neutron.

bishoy ( 2014-10-29 10:42:52 -0500 )
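A sketch of those two settings on the compute nodes (crudini again used only as a convenient editor):

$ crudini --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
$ crudini --set /etc/nova/nova.conf DEFAULT security_group_api neutron
$ service nova-compute restart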

answered 2014-10-29 08:49:02 -0500 by dbaxps

updated 2014-10-29 08:49:56 -0500

I am forced to use the answer field because of the formatting options available here.

Please verify on the network node:

$ ip netns exec qrouter-router-id   iptables -t nat -S | grep 169.254
$ ip netns exec qrouter-router-id  netstat -antp
If it shows a process pid1 listening on 9697:
$ ps -ef | grep pid1

Please verify on the controller node:

$ netstat -lntp | grep 8775
If it shows a process pid2 listening on 8775:
$ ps -ef | grep pid2

Comments

Everything looks fine.

D20K ( 2014-10-29 09:13:40 -0500 )

Please update your original question with the output of the commands above.

dbaxps ( 2014-10-29 09:17:18 -0500 )

@dbaxps Thanks, I have just updated the question.

D20K ( 2014-10-29 09:44:11 -0500 )

Please add the outputs of route -n && ip netns exec qrouter-router-id route -n on the network node.

dbaxps ( 2014-10-29 09:50:09 -0500 )

As bishoy asked, post the enabled_apis entry from nova.conf and check service nova-api-metadata status.

dbaxps ( 2014-10-29 09:54:19 -0500 )
