
OpenStack Rocky keypair issues

asked 2019-04-24 03:43:14 -0500 by JeffBannister

I have installed OpenStack Rocky using Packstack. I cannot connect to instances using key pairs: the instance refuses the connection when I authenticate with the public key. I have a similar setup on Ocata and it works fine. I've tried a number of SSH clients and the problem is the same. Could someone provide some guidance as to where I should look to solve this? I've run ssh in verbose mode on Linux and looked at a Wireshark trace, but I can't really see anything that suggests what's going wrong. TIA
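For context, the public key is normally delivered to the guest through the metadata service at 169.254.169.254. A quick check from inside a booted instance (a sketch using the standard EC2-compatible and OpenStack endpoints) would be something like:

$ curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
$ curl http://169.254.169.254/openstack/latest/meta_data.json

If those requests fail or return nothing, the problem is with metadata delivery rather than with the key pair itself.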


Comments

I've noticed that in the cirros instance created on Rocky there is no .ssh directory, so there is no authorized_keys file. There is one on Ocata.

JeffBannister ( 2019-04-24 03:53:41 -0500 )

One of the steps in the documentation:

$ . demo-openrc
$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
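The key pair is then injected when the instance is launched with it, for example (flavor, image and network ID here are placeholders):

$ openstack server create --flavor m1.tiny --image cirros \
    --nic net-id=<network-id> --key-name mykey myinstance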
tjoen ( 2019-04-24 05:08:03 -0500 )

Did you launch the instance with the --key-name parameter? If yes, one reason for not getting the key is a problem with the metadata API. Use openstack network agent list and systemctl status *neutron* to see if the metadata agent is healthy.
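For example (the unit name below assumes an RDO/Packstack node):

$ openstack network agent list
$ systemctl status neutron-metadata-agent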

My Rocky Packstack has no keypair problems.

Bernd Bausch ( 2019-04-24 05:35:35 -0500 )

You can also check instances’ console logs for errors regarding metadata and/or keys.
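For example, something like:

$ openstack console log show <instance> | grep -iE 'ssh|key|meta'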

Bernd Bausch ( 2019-04-24 05:36:18 -0500 )

Thanks Bernd. I looked at the logs for cirros and for an Ubuntu instance I created. There are errors around sshd; this is from cirros:

Starting dropbear sshd: failed to get instance-id of datasource
WARN: generating key of type ecdsa failed!
JeffBannister ( 2019-04-24 20:16:54 -0500 )

2 answers


answered 2019-04-26 00:26:03 -0500 by JeffBannister

updated 2019-04-26 00:27:06 -0500

I checked the router and the NAT table is fine; it's the same as yours. My setup has three nodes: controller, compute and network. I did a Wireshark trace on the network node as follows:

  • capture on a mirror port on br-int, and on the interface between the network node (10.0.1.31) and the controller (10.0.1.11)
  • on the instance (192.168.101.108) I ran curl http://169.254.169.254/, which returns a 500 Internal Server Error

I can't attach the Wireshark trace file, but I've attached a screen grab of the output (image). After the HTTP GET to 169.254.169.254 there is a set of AMQP messages between the network node and the controller, which I assume carries the metadata request; they don't seem to be in error. But then 169.254.169.254 returns a 500 error.
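To narrow down where the 500 comes from, one thing to check on the network node (the log path assumes an RDO/Packstack install) is the neutron metadata agent log:

$ sudo grep -i error /var/log/neutron/metadata-agent.log | tail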


Comments

So I've looked at an older setup that works (Ocata) and I can see the network node issuing the GET to the controller, but my Rocky setup doesn't.

JeffBannister ( 2019-04-26 01:13:55 -0500 )

Ahh, my network node is going to 127.0.0.1 for the metadata even though metadata_agent.ini says nova_metadata_ip=10.0.1.11. I can see an error in the metadata log file saying it's trying 127.0.0.1 port 8775, but I don't know why it's doing that.
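A quick sanity check from the network node that nova's metadata API on the controller is reachable (it should return a list of API versions):

$ curl http://10.0.1.11:8775/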

JeffBannister ( 2019-04-26 01:36:02 -0500 )

Fixed! Since Queens, metadata_agent.ini must refer to the metadata server with nova_metadata_host=<IP address or name> rather than nova_metadata_ip: github
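For anyone hitting the same issue, the relevant part of /etc/neutron/metadata_agent.ini ends up looking roughly like this (addresses are the ones from this thread; the shared secret must match the one configured in nova.conf), followed by an agent restart:

[DEFAULT]
nova_metadata_host = 10.0.1.11
nova_metadata_port = 8775
metadata_proxy_shared_secret = <same secret as in nova.conf>

$ sudo systemctl restart neutron-metadata-agent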

JeffBannister ( 2019-04-26 03:05:47 -0500 )

answered 2019-04-25 05:09:28 -0500

updated 2019-04-25 05:13:19 -0500

Here is how the metadata traffic is NAT’ed. I enter the network namespace for a router that is connected to the external network (10.1.1.0/24) and print the netfilter nat table:

[stack@rocky ~(alice)]$ openstack router list
+--------------------------------------+-----------------+--------+-------+-------------+------+----------------------------------+
| ID                                   | Name            | Status | State | Distributed | HA   | Project                          |
+--------------------------------------+-----------------+--------+-------+-------------+------+----------------------------------+
| aad9670b-2eaa-4134-b350-c1acc6e9ac65 | saturn-router-2 | ACTIVE | UP    | None        | None | 6376262d8f524f368ba4fe14d683d5eb |
| f39ad645-0f7b-4296-9d3c-79cdf393553a | saturn-router   | ACTIVE | UP    | None        | None | 6376262d8f524f368ba4fe14d683d5eb |
+--------------------------------------+-----------------+--------+-------+-------------+------+----------------------------------+
[stack@rocky ~(alice)]$ sudo ip netns exec qrouter-f39ad645-0f7b-4296-9d3c-79cdf393553a /bin/bash
[root@rocky stack(alice)]# ip r
default via 10.1.1.1 dev qg-9115266f-e1 
10.1.1.0/24 dev qg-9115266f-e1 proto kernel scope link src 10.1.1.19 
10.100.100.0/24 dev qr-32f354fe-e9 proto kernel scope link src 10.100.100.1 
[root@rocky stack(alice)]# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.1.1.11/32 -j DNAT --to-destination 10.100.100.11
-A neutron-l3-agent-POSTROUTING ! -i qg-9115266f-e1 ! -o qg-9115266f-e1 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

The last line takes care of metadata traffic.
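Inside the same namespace you can also confirm that something (the neutron metadata proxy) is listening on the redirect target port 9697; the exact process name varies by release:

[root@rocky stack(alice)]# ss -lntp | grep 9697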

