azriel's profile - activity

2016-12-29 16:40:57 -0600 received badge  Famous Question (source)
2016-04-02 14:15:08 -0600 marked best answer IceHouse - neutron didn't create db tables

Hi,

I'm doing a manual installation on Ubuntu 14.10 and for some reason Neutron didn't create the DB tables. I'm getting an error in server.log saying:

ProgrammingError: (ProgrammingError) (1146, "Table 'neutron.ml2_gre_allocations' doesn't exist") 'SELECT ml2_gre_allocations.gre_id AS ml2_gre_allocations_gre_id, ml2_gre_allocations.allocated AS ml2_gre_allocations_allocated \nFROM ml2_gre_allocations' ()

netstat -ntlp

root@icehouse:~# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      1873/python     
tcp        0      0 0.0.0.0:8773            0.0.0.0:*               LISTEN      1861/python     
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      1861/python     
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1861/python     
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      1740/python     
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      2549/beam.smp   
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1734/python     
tcp        0      0 10.210.154.101:3306     0.0.0.0:*               LISTEN      2142/mysqld     
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      28706/memcached 
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      1749/python     
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      2935/dnsmasq    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1808/sshd       
tcp        0      0 0.0.0.0:35357           0.0.0.0:*               LISTEN      1734/python     
tcp6       0      0 :::5672                 :::*                    LISTEN      2549/beam.smp   
tcp6       0      0 :::80                   :::*                    LISTEN      28561/apache2   
tcp6       0      0 :::4369                 :::*                    LISTEN      2349/epmd       
tcp6       0      0 :::22                   :::*                    LISTEN      1808/sshd

mysql db -

mysql> use neutron;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-------------------+
| Tables_in_neutron |
+-------------------+
| alembic_version   |
+-------------------+
1 row in set (0.00 sec)

neutron server.log

2014-11-02 11:08:24.334 32654 TRACE neutron 
2014-11-02 11:08:24.880 32664 INFO neutron.common.config [-] Logging enabled!
2014-11-02 11:08:24.891 32664 INFO neutron.common.config [-] Config paste file: /etc/neutron/api-paste.ini
2014-11-02 11:08:24.994 32664 INFO neutron.manager [-] Loading core plugin: ml2
2014-11-02 11:08:25.077 32664 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['gre']
2014-11-02 11:08:25.081 32664 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['gre']
2014-11-02 11:08:25.082 32664 INFO neutron.plugins.ml2.managers [-] Registered types: ['gre']
2014-11-02 11:08:25.082 32664 INFO neutron.plugins.ml2.managers [-] Tenant network_types: ['gre']
2014-11-02 11:08:25.083 32664 INFO neutron.plugins.ml2.managers [-] Configured extension driver names: []
2014-11-02 11:08:25.083 32664 INFO neutron.plugins.ml2.managers [-] Loaded extension driver names: []
2014-11-02 11:08:25.084 32664 INFO neutron.plugins.ml2.managers [-] Registered extension drivers: []
2014-11-02 11 ...
(more)
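
For reference: nothing is listening on port 9696 in the netstat output above, which is consistent with neutron-server failing at startup because of the missing tables. The usual way to populate Neutron's schema by hand on IceHouse/Juno is neutron-db-manage; a minimal sketch, assuming the stock Ubuntu config paths:

# upgrade the neutron database to the latest schema revision
# (config paths assume the default Ubuntu packaging; adjust to your plugin)
neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
# restart the server so it picks up the new tables
service neutron-server restart
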
2015-09-08 08:22:25 -0600 marked best answer Juno - error while launching instance

Hi all,

It seems that I'm getting an error while trying to launch a new instance:

In the nova-conductor log I'm getting many timeouts; see the logs below.

"2014-11-02 09:43:18.802 5000 TRACE nova.scheduler.driver MessagingTimeout: Timed out waiting for a reply to message ID c59c2cb151314ce6b5ee6c25058a99af"
  • Ubuntu 14.04.01
  • Juno, all in one, manual installation.

root@icehouse:~# keystone service-list

+----------------------------------+----------+----------+-------------------------+
|                id                |   name   |   type   |       description       |
+----------------------------------+----------+----------+-------------------------+
| ab6302bfbca241b794b11239a32bb2c9 |  glance  |  image   | OpenStack Image Service |
| 6b483a0cf9594a4da987cd4fa1f5aa7d | keystone | identity |    OpenStack Identity   |
| 84f3fe9f93584f8ebeb2b284e5dcf727 | neutron  | network  |   OpenStack Networking  |
| e12d40b834854291a832b8d84c742b8e |   nova   | compute  |    OpenStack Compute    |
+----------------------------------+----------+----------+-------------------------+

root@icehouse:~# keystone endpoint-list

+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
|                id                |   region  |                publicurl                |               internalurl               |                 adminurl                |            service_id            |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
| 9424b0fecdb44bae8c26aa967f67eef2 | regionOne |          http://controller:9292         |          http://controller:9292         |          http://controller:9292         | ab6302bfbca241b794b11239a32bb2c9 |
| 988788edfba247d68806a2355dbf7b8a | regionOne |          http://controller:9696         |          http://controller:9696         |          http://controller:9696         | 84f3fe9f93584f8ebeb2b284e5dcf727 |
| aba9b6a195ab498c93cfaf466a67b18e | regionOne |       http://controller:5000/v2.0       |       http://controller:5000/v2.0       |       http://controller:35357/v2.0      | 6b483a0cf9594a4da987cd4fa1f5aa7d |
| bbf5eb29bcc146beb087940b7a1a4807 | regionOne | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | e12d40b834854291a832b8d84c742b8e |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+

root@icehouse:~# nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-cert        icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-consoleauth icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-scheduler   icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-conductor   icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-compute     icehouse                             nova             enabled    :-)   2014-10-27 15:11:37

nova conductor log -

2014-10-29 18:10:23.079 5084 ERROR nova.scheduler.driver [req-7e066556-1b52-4a55-bbac-4c841140ba3e None] Exception during scheduler.run_instance
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver Traceback (most recent call last):
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 614, in build_instances
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     request_spec, filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 49, in select_destinations
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 35, in __run_method
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     return getattr(self.instance, __name)(*args, **kwargs)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 34, in select_destinations
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 108, in select_destinations
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     request_spec=request_spec, filter_properties=filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     retry=self.retry)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send ...
(more)
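
A MessagingTimeout on select_destinations generally means nothing consumed the RPC call from the scheduler queue, i.e. nova-scheduler is not actually talking to the broker even though nova-manage reports it as ":-)". A quick check, assuming RabbitMQ is the broker:

# does the scheduler queue exist, and does it have a consumer attached?
rabbitmqctl list_queues name consumers messages | grep -i scheduler
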
2015-04-22 18:17:30 -0600 received badge  Stellar Question (source)
2015-04-22 00:14:52 -0600 received badge  Notable Question (source)
2015-04-15 07:35:38 -0600 received badge  Notable Question (source)
2015-04-15 07:35:38 -0600 received badge  Famous Question (source)
2015-03-27 20:03:36 -0600 received badge  Famous Question (source)
2015-03-19 13:40:11 -0600 received badge  Popular Question (source)
2015-03-19 04:22:41 -0600 commented question linuxbridge plugin vif bind failed

Setting type local won't do VLAN encapsulation? Can you please elaborate on type local, and what will my external NIC be? Flat?

2015-03-19 01:54:25 -0600 commented question linuxbridge plugin vif bind failed

Can't I have my physical interface mapped as flat and the internal networks as type vlan?

2015-03-16 23:58:37 -0600 received badge  Notable Question (source)
2015-03-16 10:26:36 -0600 asked a question linuxbridge plugin vif bind failed

Hi,

  • I'm using Ubuntu 14.04 with Juno all-in-one installation.
  • I'm having trouble binding a tenant network (type vlan) to a router (which is connected to a flat network).
  • I'm using the linuxbridge agent with the ml2 plugin, with a flat network configuration for my external traffic (which is provisioned OK); the tenant network type is set to vlan, where I get:
  • "vif_type: binding_failed"
  • "In _notify_port_updated(), no bound segment for port"
  • "not defined on plugin"

logs - http://paste.openstack.org/show/192647/

ml2conf - http://paste.openstack.org/show/192584/

neutron conf - http://paste.openstack.org/show/192649/

mysql DB - http://paste.openstack.org/show/192607/

ps awux | grep neutron (output) - http://paste.openstack.org/show/192650/

nova.conf - http://paste.openstack.org/show/192652/
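
With the linuxbridge agent, "vif_type: binding_failed" usually means the network's provider:physical_network has no entry in the agent's physical_interface_mappings. One way to check which physnet and type the failing network expects (run as admin; the network ID is a placeholder):

neutron net-show <network-id> -F provider:network_type -F provider:physical_network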

2015-03-16 02:46:24 -0600 received badge  Popular Question (source)
2015-03-15 23:35:32 -0600 received badge  Popular Question (source)
2015-03-15 09:29:58 -0600 asked a question ml2 plugin with linuxbridge and vlans

Hi,

I'm using Juno on Ubuntu 14.04 in an all-in-one installation, and I'm trying to bring up the ml2 plugin with the Linux bridge mechanism driver, but once I set the router as gateway I get the following:

server.log -

2015-03-15 16:42:26.039 4287 WARNING neutron.plugins.ml2.managers [req-29d3c3e9-c342-40f1-969c-b27033f3b129 None] Failed to bind port 05eddd66-f967-49a6-b51e-e8cf0723ab38 on host controller
2015-03-15 16:42:26.065 4287 WARNING neutron.plugins.ml2.plugin [req-29d3c3e9-c342-40f1-969c-b27033f3b129 None] In _notify_port_updated(), no bound segment for port 05eddd66-f967-49a6-b51e-e8cf0723ab38 on network 4299988f-a842-4fe7-b092-56d20716deeb
2015-03-15 16:42:29.218 4287 WARNING neutron.plugins.ml2.rpc [req-f6c22f06-4225-4e55-872b-28b941c9163c None] Device tap05eddd66-f9 requested by agent lbd4ae5203a3aa on network 4299988f-a842-4fe7-b092-56d20716deeb not bound, vif_type: binding_failed

linuxbridge-agent.log -

2015-03-15 16:42:29.222 2191 INFO neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent [req-f6c22f06-4225-4e55-872b-28b941c9163c None] Device tap05eddd66-f9 not defined on plugin

I've installed:

apt-get install neutron-plugin-linuxbridge-agent

Configured the following:

/etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = physnet3

[ml2_type_vlan]
network_vlan_ranges = physnet3,physnet2:100:200

[ml2_type_gre]
[ml2_type_vxlan]

[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[linux_bridge]
physical_interface_mappings = physnet3:p3p2

[database]
sql_connection = mysql://neutron:password@controller/neutron

[ovs]

/etc/neutron/neutron.conf

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = password
auth_strategy = keystone
core_plugin = ml2
service_plugins = router,lbaas
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = 4bc54e811c844e9786b5bba9e72b8073
nova_admin_password = password
verbose = True
lock_path = $state_path/lock

[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = password

[database]
connection = mysql://neutron:password@controller/neutron

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

/etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
use_namespaces = True

/etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True


MariaDB [neutron]> select * from ml2_port_bindings;
+--------------------------------------+------------+----------------+--------+---------+-----------+-------------+---------+
| port_id                              | host       | vif_type       | driver | segment | vnic_type | vif_details | profile |
+--------------------------------------+------------+----------------+--------+---------+-----------+-------------+---------+
| d91d1817-68b1-4493-97bc-46e5f0260abd | controller | binding_failed | NULL   | NULL    | normal    |             |         |
+--------------------------------------+------------+----------------+--------+---------+-----------+-------------+---------+
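
One thing stands out in the ml2_conf.ini above, assuming the vlan tenant networks are meant to ride physnet2: network_vlan_ranges allocates tenant VLANs from physnet2:100:200, but physical_interface_mappings only maps physnet3, so the agent has no interface on which to bind physnet2 segments. A hedged sketch of a consistent pairing (the interface name p3p1 is hypothetical):

[ml2_type_vlan]
network_vlan_ranges = physnet3,physnet2:100:200

[linux_bridge]
# every physnet referenced by vlan/flat networks needs a mapping on the agent
physical_interface_mappings = physnet3:p3p2,physnet2:p3p1
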
2015-03-11 11:21:34 -0600 asked a question IceHouse with neutron linux-bridge plugin

Hi,

I'm using Ubuntu 14.04 with an IceHouse all-in-one installation. Is it possible to use the Linux bridge plugin instead of ML2? If so, can you please provide a link on how to install and configure it?

I've installed:

apt-get install neutron-plugin-linuxbridge-agent

Configured /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:

tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2000
physical_interface_mappings = physnet1:p3p2

Edited /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2

Anything else I'm missing?
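
Note that the monolithic Linux bridge core plugin was deprecated in favor of ML2 and dropped around the IceHouse timeframe, so core_plugin = ml2 with the linuxbridge mechanism driver is the supported route. A minimal ml2_conf.ini sketch reusing the values above (an untested outline, not a verified config):

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2000

[linux_bridge]
physical_interface_mappings = physnet1:p3p2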

2014-12-31 03:29:05 -0600 received badge  Famous Question (source)
2014-12-29 11:05:36 -0600 answered a question IceHouse - no valid host was found - RamFilter

I restarted all of my services and the issue was solved.

2014-12-29 10:02:13 -0600 received badge  Notable Question (source)
2014-12-29 09:23:39 -0600 received badge  Popular Question (source)
2014-12-29 09:20:45 -0600 commented answer IceHouse - no valid host was found - RamFilter

Please see its output in the original thread. If I sum up all of the machines currently installed on my OpenStack, it doesn't come close to these numbers...

2014-12-29 08:49:16 -0600 commented answer IceHouse - no valid host was found - RamFilter

Which command are you referring to? It still seems odd, as if instances that were deleted were never actually deleted and are still consuming resources... On my current node I don't have many instances; very few, with a very small amount of resources. There's no way they consumed it all.

2014-12-29 08:49:16 -0600 received badge  Commentator
2014-12-29 07:43:38 -0600 commented answer IceHouse - no valid host was found - RamFilter

I've searched for these values; they don't exist in nova.conf (I installed it manually). Checking RAM used: according to Horizon it's 92 GB, but according to free it's 42 GB. Disk space: according to Horizon 1.3 TB is used, but according to df -h only 100 GB are in use.

2014-12-29 06:51:22 -0600 asked a question IceHouse - no valid host was found - RamFilter

Hi,

I've been using OpenStack IceHouse on top of one physical machine (all in one) for quite a while now, and today for some reason I couldn't deploy any more instances. I'm getting an error in the nova-scheduler log that RamFilter returned 0 hosts, although according to my hypervisor there is still plenty of RAM left.

2014-12-29 14:48:21.622 3657 INFO nova.scheduler.filter_scheduler [req-3ff6eea6-f528-4094-be75-3237607d4d51 a0236c9ed453408aa9806b3ae1e2c80e 2750049295984353a978bbabcc8136ec] Attempting to build 1 instance(s) uuids: [u'a40966d5-ebc6-488e-bdc4-3db87bfbe253']
2014-12-29 14:48:21.629 3657 WARNING nova.scheduler.host_manager [req-3ff6eea6-f528-4094-be75-3237607d4d51 a0236c9ed453408aa9806b3ae1e2c80e 2750049295984353a978bbabcc8136ec] Host has more disk space than database expected (-37gb > -1089gb)
2014-12-29 14:48:21.639 3657 INFO nova.filters [req-3ff6eea6-f528-4094-be75-3237607d4d51 a0236c9ed453408aa9806b3ae1e2c80e 2750049295984353a978bbabcc8136ec] Filter RamFilter returned 0 hosts
2014-12-29 14:48:21.639 3657 WARNING nova.scheduler.driver [req-3ff6eea6-f528-4094-be75-3237607d4d51 a0236c9ed453408aa9806b3ae1e2c80e 2750049295984353a978bbabcc8136ec] [instance: a40966d5-ebc6-488e-bdc4-3db87bfbe253] Setting instance to ERROR state.

According to Horizon -> Admin panel -> System panel -> Hypervisors, it seems like I'm over-committing already, but I definitely don't have that many instances consuming that amount of resources, and top on the hypervisor shows a different picture.

root@controller:~#  nova host-describe controller
+------------+----------------------------------+-----+-----------+---------+
| HOST       | PROJECT                          | cpu | memory_mb | disk_gb |
+------------+----------------------------------+-----+-----------+---------+
| controller | (total)                          | 24  | 64382     | 211     |
| controller | (used_now)                       | 38  | 94736     | 1300    |
| controller | (used_max)                       | 11  | 22528     | 160     |
| controller | 2078ed30be9d4052868422fd8c069b9d | 3   | 6144      | 60      |
| controller | 57bf30be07be41a5920539bab8661329 | 2   | 4096      | 20      |
| controller | 2750049295984353a978bbabcc8136ec | 6   | 12288     | 80      |
+------------+----------------------------------+-----+-----------+---------+
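
For context, RamFilter passes a host only while used_ram_mb + requested_ram_mb <= total_ram_mb * ram_allocation_ratio, and used_now above comes from Nova's DB-side resource tracker rather than from free, which is why the two can disagree (stale allocations inflate it; restarting the services forces a resync). The overcommit knobs live in nova.conf; a sketch with what I believe are the IceHouse defaults:

[DEFAULT]
# a host passes RamFilter while:
#   used_ram_mb + requested_ram_mb <= total_ram_mb * ram_allocation_ratio
ram_allocation_ratio = 1.5
# analogous knobs for CoreFilter / DiskFilter
cpu_allocation_ratio = 16.0
disk_allocation_ratio = 1.0
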
2014-12-12 07:11:24 -0600 received badge  Favorite Question (source)
2014-12-05 04:58:36 -0600 received badge  Famous Question (source)
2014-12-05 04:31:21 -0600 received badge  Famous Question (source)
2014-12-02 06:26:18 -0600 received badge  Notable Question (source)
2014-11-25 16:01:34 -0600 received badge  Notable Question (source)
2014-11-19 11:09:21 -0600 received badge  Popular Question (source)
2014-11-19 10:19:29 -0600 marked best answer Juno Authentication to Horizon error - too many connections

Hi,

I'm using Juno on Ubuntu 14.04 and I'm no longer able to log in via the dashboard. I'm getting errors like "An error occurred authenticating. Please try again later." and "Not able to authenticate with any project."

keystone-all.log

2014-10-28 15:24:35.833 3358 ERROR keystone.common.wsgi [-] (OperationalError) (1040, 'Too many connections') None None
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi Traceback (most recent call last):
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 223, in __call__
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     result = method(context, **params)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/controllers.py", line 100, in authenticate
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     context, auth)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/controllers.py", line 287, in _authenticate_local
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     username, CONF.identity.default_domain_id)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 202, in wrapper
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     return f(self, *args, **kwargs)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 213, in wrapper
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     return f(self, *args, **kwargs)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 600, in get_user_by_name
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     ref = driver.get_user_by_name(user_name, domain_id)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", line 146, in get_user_by_name
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     session = sql.get_session()
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 192, in get_session
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     return _get_engine_facade().get_session(expire_on_commit=expire_on_commit)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 176, in _get_engine_facade
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     _engine_facade = db_session.EngineFacade.from_config(CONF)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 795, in from_config
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     retry_interval=conf.database.retry_interval)
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 711, in __init__
2014-10-28 15:24:35.833 3358 TRACE keystone.common.wsgi     **engine_kwargs)
2014-10-28 15:24 ...
(more)
2014-11-19 10:19:27 -0600 commented answer Juno Authentication to Horizon error - too many connections

Set the max_connections value in /etc/mysql/my.cnf.
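
A sketch of the change, assuming MySQL's default cap (151 connections in 5.5/5.6) is what was being hit:

[mysqld]
# raise the server-wide connection limit; the default is 151
max_connections = 1024

Restart MySQL (service mysql restart) for the setting to take effect.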

2014-11-19 10:18:50 -0600 received badge  Famous Question (source)
2014-11-19 08:44:03 -0600 received badge  Scholar (source)
2014-11-19 07:30:00 -0600 answered a question Juno MessagingTimeout launching an instance

It seems I was able to resolve the issue:

1. Verified that RabbitMQ is running correctly.
2. Verified that all OpenStack services' configurations use the correct RabbitMQ settings (the glance registry and API were incorrect).
3. Commented out all qpid configuration in the glance registry and API configs.
4. Added a hosts entry on the hypervisor mapping 'controller' to localhost.
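
A quick way to run checks 1 and 2 from the list above, assuming the stock Ubuntu config locations:

# 1. broker up and reachable?
rabbitmqctl status
# 2. every service pointing at the same broker (and nothing still on qpid)?
grep -rE 'rpc_backend|rabbit_host|qpid' /etc/nova /etc/neutron /etc/glance /etc/keystone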

2014-11-18 09:00:33 -0600 asked a question Juno MessagingTimeout launching an instance

Hi, I'm running Ubuntu 14.10 with a fresh Juno installation (manual installation) and I'm not able to spin up new instances. I keep getting this error in nova-conductor.log:

2014-11-18 16:46:58.827 4829 ERROR nova.scheduler.driver [req-41e94c9e-423d-4b88-acce-fc18ee9bad6f None] Exception during scheduler.run_instance
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver Traceback (most recent call last):
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 614, in build_instances
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     request_spec, filter_properties)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 49, in select_destinations
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 35, in __run_method
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     return getattr(self.instance, __name)(*args, **kwargs)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 34, in select_destinations
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 108, in select_destinations
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     request_spec=request_spec, filter_properties=filter_properties)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     retry=self.retry)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     timeout=timeout, retry=retry)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     retry=retry)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 397, in _send
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     result = self._waiter.wait(msg_id, timeout)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 285, in wait
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     reply, ending = self._poll_connection(msg_id, timeout)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 235, in _poll_connection
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver     % msg_id)
2014-11-18 16:46:58.827 4829 TRACE nova.scheduler.driver MessagingTimeout: Timed out waiting for ...
(more)