
neutron client authentication failed

asked 2013-11-26 22:54:43 -0600

arunuke

Hello All,

I am running Havana on a Fedora 19 system and I am running into some issues in nova-compute when booting an instance. I am getting a "Neutron client authentication failed: Could not find Service or Region in Service Catalog" error in the nova-compute log file (note 1). However, the endpoints do exist when queried from the controller node (note 2). I am also able to use the neutron client to print previously created networks and all other related information (note 3), so I am not quite sure if it's the endpoint that is causing issues. nova.conf appears to have the right settings to indicate where neutron is running (note 4), and the api-paste.ini file also has the right information (note 5). To verify, I ran keystone commands from the compute host and I am able to see the endpoints (note 6).

It does not appear to be the issue described in https://bugs.launchpad.net/nova/+bug/1241275 since the stack traces are different. From the looks of it, in my case, there is an attempt to get the token and it fails to locate even the endpoint. Any thoughts or suggestions in this regard are greatly appreciated.
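For context on what the error means: nova's neutron client resolves the neutron URL from the keystone service catalog that comes back with the token. Below is a minimal sketch of that lookup (illustrative only; `find_endpoint` and the sample catalog are my own names, not the actual nova code). It shows the two ways the lookup can fail: no service of the requested type, or no endpoint in the requested region. Note also that a token scoped with a missing or invalid tenant can come back with an empty catalog, which fails the same way.

```python
def find_endpoint(catalog, service_type, region=None, endpoint_type="publicURL"):
    """Return the URL for service_type (and region, if given) from a
    keystone v2-style catalog, or raise if nothing matches."""
    for service in catalog:
        if service.get("type") != service_type:
            continue
        for ep in service.get("endpoints", []):
            if region is None or ep.get("region") == region:
                return ep[endpoint_type]
    # This is the situation behind "Could not find Service or Region in
    # Service Catalog": no entry with a matching type/region pair.
    raise LookupError("Could not find Service or Region in Service Catalog")

# A catalog shaped like what keystone returns alongside a token:
catalog = [
    {"type": "network",
     "endpoints": [{"region": "RegionOne",
                    "publicURL": "http://192.168.122.61:9696"}]},
]

print(find_endpoint(catalog, "network", "RegionOne"))
# -> http://192.168.122.61:9696
# Asking for a region that is not in the catalog raises LookupError,
# as does an empty catalog (e.g. from a token scoped to a bad tenant).
```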

Notes:

  1. nova-compute log

2013-11-26 14:02:56.921 2493 ERROR nova.network.neutronv2 [req-662e2363-cdcb-495e-96c3-d9852fcfea59 None None] Neutron client authentication failed: Could not find Service or Region in Service Catalog.
2013-11-26 14:02:56.922 2493 ERROR nova.compute.manager [req-662e2363-cdcb-495e-96c3-d9852fcfea59 None None] Instance failed network setup after 1 attempt(s)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager Traceback (most recent call last):
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1228, in _allocate_network_async
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     dhcp_options=dhcp_options)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapper
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     res = f(self, context, *args, **kwargs)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 358, in allocate_for_instance
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     LOG.exception(msg, port_id)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 323, in allocate_for_instance
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     port_req_body)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 392, in _populate_neutron_extension_values
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     self._refresh_neutron_extensions_cache()
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 375, in _refresh_neutron_extensions_cache
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     neutron = neutronv2.get_client ... (more)


6 answers


answered 2013-12-01 07:23:36 -0600

arunuke

Issue resolved: I enabled debug mode and found that the tenant name was set to null in the requests that go out. I went back and looked at my nova.conf file and saw that I had used neutron_admin_tenant instead of neutron_admin_tenant_name, so nova-compute did not have a valid tenant name and couldn't find the endpoint. I fixed the option name, restarted nova-compute, and now I am able to create instances as expected.
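For anyone hitting the same thing, the relevant lines look like this (a sketch using the values described in this thread; note the `_name` suffix on the tenant option):

```ini
[DEFAULT]
# Wrong: this option name is not read, so the tenant name stays unset
#neutron_admin_tenant = service
# Right:
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutron
neutron_admin_auth_url = http://192.168.122.61:35357/v2.0
```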

Thanks to Yaguang for all his assistance.


answered 2013-12-01 07:16:30 -0600

arunuke

Just a quick addendum. The ids in the service-list and the endpoint-list do not seem to match because I had re-created the setup after I posted the original message. I verified the values in my new setup and they match.

I am able to get information from the neutron server (running on the cloud controller) using admin/admin/admin credentials and also neutron/neutron/service credentials using the neutron client. It fails only when nova tries to talk to the cloud controller.


answered 2013-11-30 21:12:06 -0600

arunuke

Hello Yaguang,

Please find the information listed below. The information was retrieved from the compute host.

[root@viserion utils]# cat keystone_remote.bash
source variables.env

export OS_USERNAME=admin
export OS_PASSWORD=$ADMINPASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://$OS_CLOUD_CONTROLLER:5000/v2.0

[root@viserion utils]# keystone user-list
+----------------------------------+---------+---------+-------+
|                id                |   name  | enabled | email |
+----------------------------------+---------+---------+-------+
| e53da56b118b47b7a2effcbb84e1fd64 |  admin  |   True  |       |
| 813e6a725f074cd98b2e70821efff038 |  cinder |   True  |       |
| e41024d64afe4ef5bfe0b23e4ec921cc |   demo  |   True  |       |
| f729ec74d25e421db22b7a8230d015f2 |  glance |   True  |       |
| 8502991328d54fa78e4fed5123cb3b35 | neutron |   True  |       |
| 774b59b1240b4a82844d08b002ccb3f9 |   nova  |   True  |       |
+----------------------------------+---------+---------+-------+

[root@gwr27 viserion]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 781670306e6b4c50bc332e511aca54c4 |  admin  |   True  |
| a280078745bd482081f7d567f0a6f24f |   demo  |   True  |
| ef7607ce7f4f4e5b8b1c9533ebf9c11b | service |   True  |
+----------------------------------+---------+---------+

[root@gwr27 viserion]# keystone service-list
+----------------------------------+----------+----------+-------------+
|                id                |   name   |   type   | description |
+----------------------------------+----------+----------+-------------+
| 4efbe45a80544d6595e99c4d679cd00b |  cinder  |  volume  |  Volume V2  |
| 090a3ba5f0c54558837de983aff299b6 | cinder2  | volumev2 |  Volume V2  |
| fc5a5af784764bf9bc2100103eba0121 |   ec2    |   ec2    |     EC2     |
| 97ddca39d0d94bb9841ca7fe7914db58 |  glance  |  image   |    Image    |
| 41a1f3a32bf74fafafe65dbb588a8085 | keystone | identity |   Identity  |
| 480efc9c07084aefba05f1b61461584e | neutron  | network  |   Network   |
| 7e902d2371b8441cb2d0683ee1dc2b99 |   nova   | compute  |   Compute   |
+----------------------------------+----------+----------+-------------+


answered 2013-11-28 04:44:16 -0600

heut2008

Maybe something is wrong with your keystone. Can you post the output of keystone user-list and keystone tenant-list, run with admin credentials?


answered 2013-11-27 18:19:22 -0600

arunuke

Hello Yaguang/All,

Thanks for taking the time to respond to my request. I had originally tried with neutron_region_name set to RegionOne on just the compute, but after your suggestion, I set it on both the compute and controller nova.conf files and restarted all the servers. Unfortunately, I am still seeing the same issue on the compute side.

My settings in the nova.conf file on the compute side are as follows:

[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.122.61
glance_host = 192.168.122.61
my_ip = 192.168.122.133
vncserver_listen = 192.168.122.133
vncserver_proxyclient_address = 192.168.122.133
auth_strategy = keystone
auth_host = 192.168.122.61
admin_user = nova
admin_tenant_name = service
admin_password = nova
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://192.168.122.61:9696
neutron_auth_strategy = keystone
neutron_admin_tenant = service
neutron_admin_username = neutron
neutron_admin_password = neutron
neutron_admin_auth_url = http://192.168.122.61:35357/v2.0
#firewall_driver = nova.virt.firewall.NooPFirewallDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
security_group_api = neutron
compute_driver = nova.virt.libvirt.LibvirtDriver
neutron_region_name = RegionOne
libvirt_type = qemu

I wasn't sure if the firewall driver was causing the issue, so I tried using the NooPFirewallDriver, but that failed too, indicating "'module' object has no attribute 'NooPFirewallDriver'". I have the same setting on my controller and all services are running fine.
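As an aside, that "no attribute" error is most likely just the spelling in the commented-out line: the no-op class in nova is NoopFirewallDriver (lowercase "op"), so a working no-op setting would presumably be:

```ini
[DEFAULT]
# Note the capitalization: NoopFirewallDriver, not NooPFirewallDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```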

Incidentally, I noticed the following set of messages on my keystone.log on the controller

2013-11-27 09:39:51.865 2456 WARNING keystone.common.wsgi [-] Could not find project, service.
2013-11-27 09:39:51.910 2456 WARNING keystone.common.wsgi [-] Could not find user, neutron.
2013-11-27 09:54:15.982 2456 WARNING keystone.common.wsgi [-] Could not find project, demo.
2013-11-27 09:54:16.011 2456 WARNING keystone.common.wsgi [-] Could not find user, demo.
2013-11-27 10:10:09.862 2456 WARNING keystone.common.wsgi [-] Could not find service, neutron.

I am guessing there is something else that I need to pay attention to. Any inputs in this regard would be very helpful.
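One way to narrow this down is to turn on debug logging on the compute node, so the outgoing token requests (including the tenant name actually being sent) show up in the nova-compute log. Assuming the stock option names, add to nova.conf and restart nova-compute:

```ini
[DEFAULT]
# Verbose/debug logging shows the keystone auth requests nova sends out
debug = True
verbose = True
```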


answered 2013-11-27 03:08:21 -0600

heut2008

You need to specify the neutron endpoint region in nova.conf: add neutron_region_name = RegionOne, then restart nova-compute and nova-api.
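Assuming the region name your endpoints were created with (RegionOne is the install-guide default, and it must match keystone endpoint-list exactly), the addition is a single line in the [DEFAULT] section:

```ini
[DEFAULT]
# Must match the region shown by `keystone endpoint-list`
neutron_region_name = RegionOne
```

After editing, restart nova-compute on the compute node and nova-api on the controller.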

