
nova ERROR: [Errno 111] Connection refused

asked 2014-05-25 19:20:54 -0500

trentonknight

I am attempting to use nova on my compute node. My node is able to communicate without issue using the keystoneclient and the glanceclient, so I do not believe it is a networking issue. As additional proof, I am able to connect from my compute node to the controller node and authenticate with mysql:

[root@compute1 ~]# keystone user-list
+----------------------------------+--------+---------+-------------------------+
|                id                |  name  | enabled |          email          |
+----------------------------------+--------+---------+-------------------------+
| 83b481b332df43338266a1498fd1b2d3 | admin  |   True  | trentonknight@gmail.com |
| e950bf8e0f454ed497ab7522e15ec41e |  demo  |   True  | trentonknight@gmail.com |
| ad9c57efc3f0415696b9d3be1de7d5be | glance |   True  | trentonknight@gmail.com |
| 1d82c51681d14c80ad49d3fc092ad06a |  nova  |   True  | trentonknight@gmail.com |
+----------------------------------+--------+---------+-------------------------+
[root@compute1 ~]# mysql -unova -popenstack -h controller -D nova
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW TABLES;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_faults                            |
| instance_group_member                      |
| instance_group_metadata                    |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| iscsi_targets                              |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_metadata             |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_iscsi_targets                       |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| shadow_volumes                             |
| snapshot_id_mappings                       |
| snapshots                                  |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
| volumes                                    |
+--------------------------------------------+
108 rows in set (0.00 sec)

mysql>

As you can see, I can connect to the controller node, at least with keystone and mysql. I can also use the glanceclient without an issue. I have disabled SELinux and iptables. The following nova services are running:

openstack-nova-api (pid  1217) is running...
openstack-nova-cert (pid  1225) is running...
openstack-nova-conductor (pid  1233) is running...
openstack-nova-console is stopped
openstack-nova-consoleauth (pid  1241) is running...
openstack-nova-metadata-api is stopped
openstack-nova-novncproxy (pid  1249) is running...
openstack-nova-scheduler (pid  1257) is running...
openstack-nova-spicehtml5proxy is stopped
openstack-nova-xvpvncproxy is stopped
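The per-service status checks above can be collapsed into one loop; a minimal sketch, assuming the SysV init layout (/etc/init.d/openstack-nova-*) used by this RHEL/CentOS-style install:

```shell
# Report the status of every openstack-nova-* init script in one pass
# (SysV init layout; stopped services return a non-zero exit code, so
# "|| true" keeps the loop going even under "set -e")
for svc in /etc/init.d/openstack-nova-*; do
    [ -x "$svc" ] || continue   # skip if the glob matched nothing
    "$svc" status || true
done
```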

My admin-openrc.sh is the following:

export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

My nova.conf has the following:

[DEFAULT]
rpc_backend = qpid
qpid_hostname = controller
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
auth_strategy = keystone

[database]
connection = mysql://nova:openstack@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_user = nova
admin_tenant_name = service
admin_password = openstack

When I run the keystone endpoint-list command I can see: http://controller:8774/v2/%(tenant_id)s. The following is the error I now get when attempting to run just about all nova commands:

[root@compute1 nova]# nova --debug ...
(more)
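For context on the error itself: [Errno 111] Connection refused means the TCP connection to the endpoint was rejected outright, i.e. nothing is listening on the target port. This can be checked independently of the nova client, for example (using the controller hostname from this setup):

```shell
# Probe the compute API port directly; a refusal here confirms the
# problem is server-side, not a client or credential issue
curl -s --connect-timeout 3 http://controller:8774/ \
  || echo "nothing listening on controller:8774"
```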

Comments

What do the commands netstat -antp | grep 8774 and iptables-save | grep 8774 show on the controller?

dbaxps ( 2014-05-25 21:54:18 -0500 )

No, I do not see port 8774 listening on either the controller or the compute node. However, I have iptables disabled. I think you are on to something, as port 8774 is the only one NOT listening:

[root@controller ~]# netstat -antp | grep 35357
tcp        0      0 0.0.0.0:35357               0.0.0.0:*                   LISTEN      1355/python         
[root@controller ~]# netstat -antp | grep 5000
tcp        0      0 0.0.0.0:5000                0.0.0.0:*                   LISTEN      1355/python         
[root@controller ~]# netstat -antp | grep 9292
tcp        0      0 10.0.0.11:9292              0.0.0.0:*                   LISTEN      1336/python         
[root@controller ~]# netstat -antp | grep 8774

I reinstalled the nova client on the controller side, but still nothing. My services on the compute node at least APPEAR to be running.

trentonknight ( 2014-05-26 08:48:42 -0500 )

Yes, openstack-nova-api is supposed to listen on 8774.

[root@icehouse1 Downloads(keystone_admin)]# iptables-save | grep 8774 
-A INPUT -p tcp -m multiport --dports 8773,8774,8775 -m comment --comment "001 novaapi incoming" -j ACCEPT
[root@icehouse1 Downloads(keystone_admin)]# netstat -antp | grep 8774
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      1262/python         
tcp        0      0 192.168.1.127:8774      192.168.1.127:40282     ESTABLISHED 3123/python         
tcp        0      0 192.168.1.127:8774      192.168.1.127:49405     ESTABLISHED 3124/python         
tcp        0      0 192.168.1.127:8774      192.168.1.127:40044     ESTABLISHED 3123/python         
tcp        0      0 192.168.1.127:40279     192.168.1.127:8774      ESTABLISHED 1205/python         
tcp        0      0 192.168.1.127:49405     192.168.1.127:8774      ESTABLISHED 5561/python         
tcp        0      0 192.168.1.127:40282     192.168.1.127:8774      ESTABLISHED ...
(more)
dbaxps ( 2014-05-26 09:10:56 -0500 )

Start and keep running all Nova services. Enable the IPv4 iptables firewall (on RH products) or tune UFW on Ubuntu so that the ports required by OpenStack services are handled properly. I have samples only for Fedora 20.

dbaxps ( 2014-05-26 09:15:09 -0500 )

3 answers


answered 2014-05-26 09:23:34 -0500

dbaxps

Start and keep running all Nova services. Enable the IPv4 iptables firewall (on RH products) or tune UFW on Ubuntu so that the ports required by OpenStack services are handled properly.


Comments

Here are my services:

[root@compute1 ~]# service --status-all
auditd (pid  1026) is running...
Stopped
cgred is stopped
crond (pid  1185) is running...
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all      ::/0                 ::/0                state RELATED,ESTABLISHED 
2    ACCEPT     icmpv6    ::/0                 ::/0                
3    ACCEPT     all      ::/0                 ::/0                
4    ACCEPT     tcp      ::/0                 ::/0                state NEW tcp dpt:22 
5    REJECT     all      ::/0                 ::/0                reject-with icmp6-adm-prohibited 

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination         
1    REJECT     all      ::/0                 ::/0                reject-with icmp6-adm-prohibited 

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0 ...
(more)
trentonknight ( 2014-05-26 10:27:27 -0500 )

Try to reproduce on your controller; the PID may differ from 1262 (yours should be 1193):
[root@icehouse1 neutron(keystone_admin)]$ ps -ef | grep nova-api
nova 1262 1 0 06:33 ? 00:04:12 /usr/bin/python /usr/bin/nova-api
nova 2915 1262 0 06:34 ? 00:00:00 /usr/bin/python /usr/bin/nova-api
nova 2919 1262 0 06:34 ? 00:00:00 /usr/bin/python /usr/bin/nova-api
nova 3123 1262 0 06:34 ? 00:00:43 /usr/bin/python /usr/bin/nova-api
nova 3124 1262 0 06:34 ? 00:00:26 /usr/bin/python /usr/bin/nova-api
nova 3158 1262 0 06:34 ? 00:00:01 /usr/bin/python /usr/bin/nova-api
nova 3159 1262 0 06:34 ? 00:00:01 /usr/bin/python /usr/bin/nova-api
[root@icehouse1 neutron(keystone_admin)]$ netstat -anpt | grep 1262
tcp 0 0 0.0.0.0:8773 0.0.0.0:* LISTEN 1262/python

tcp ...
(more)

dbaxps ( 2014-05-26 10:54:43 -0500 )

dbaxps I get the following on the compute node:

[root@compute1 ~]# ps -ef | grep nova-api
nova      1790     1  3 11:48 ?        00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      1798  1790  0 11:48 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      1799  1790  0 11:48 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
root      1803  1303  0 11:49 pts/0    00:00:00 grep nova-api

Netstat shows the following:

[root@compute1 ~]# netstat -anpt | grep 1790
tcp        0      0 0.0.0.0:8773                0.0.0.0:*                   LISTEN      1790/python
tcp        0      0 0.0.0.0:8774                0.0.0.0:*                   LISTEN      1790/python

The controller node is as follows:

  [root@controller ~]# ps -ef | grep nova-api
    root      1616  1405  0 13:30 pts ...
(more)
trentonknight ( 2014-05-26 12:21:05 -0500 )

When you are done, also run these glance checks:
$ service openstack-glance-api status
$ service openstack-glance-registry status
$ ps -ef | grep glance
$ curl http://CONTROLLER-IP:9292

dbaxps ( 2014-05-26 12:51:40 -0500 )

OK, I will. Thanks again dbaxps your advice has been amazing!

trentonknight ( 2014-05-26 14:33:14 -0500 )

answered 2014-05-26 02:23:44 -0500

chen-li

Enable iptables for port 8774 on controller node:

 iptables -I INPUT 1 -p tcp --dport 8774 -j ACCEPT
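A note on persistence: on RHEL/CentOS 6 a rule inserted this way is lost at reboot unless saved. A sketch, assuming the stock iptables init service (run as root):

```shell
# Open 8774 for nova-api and persist the rule across reboots
# (RHEL/CentOS 6: "service iptables save" writes /etc/sysconfig/iptables)
iptables -I INPUT 1 -p tcp --dport 8774 -j ACCEPT
service iptables save
```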

Comments

Thank you Chen-Li, but I actually have iptables completely disabled on both nodes at this stage. I also have SELinux disabled.

trentonknight ( 2014-05-26 08:05:30 -0500 )

answered 2014-06-24 03:36:22 -0500

Hi trenton.

I've been having the very same problem you had (assuming you're using Icehouse).

I have a controller with Debian Wheezy. I installed keystone and glance, and the services registered to keystone successfully. I can upload images, manage policy, etc. When it came to installing nova, however, I stumbled upon this [111] ConnectionError problem, and the traceback indicated the same file as yours, /usr/lib/python2.6/site-packages/requests/adapters.py, except I had the later version, python2.7, hence a different path. My guess is that since OpenStack for Debian Wheezy uses both python2.6 and python2.7, the adapter couldn't handle the requests written for Python 2.6 because it was missing libs for some reason, hence nova wasn't actually listening on port 8774.

The problem got resolved after installing the package python-software-properties. I didn't install it when setting up the controller node because it wasn't specified in the Debian Wheezy install guide. Maybe something to fix in the docs?

Also, I noticed another thing in the Debian Wheezy guide: when running "apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient", it installed the meta-package nova-common instead.

I've been using the following repo :

  • http://archive.gplhost.com/debian icehouse main
  • http://archive.gplhost.com/debian icehouse-backports main
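For reference, those entries would typically live in an APT sources file; a sketch (the file path is an assumption, not from the original post):

```
# /etc/apt/sources.list.d/icehouse.list (hypothetical path)
deb http://archive.gplhost.com/debian icehouse main
deb http://archive.gplhost.com/debian icehouse-backports main
```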

A repository update issue, or a bug, maybe?

Hope this helps.

Regards.

