
Juno - error while launching instance

asked 2014-10-27 10:24:01 -0600 by azriel, updated 2014-11-02 01:49:24 -0600

Hi all,

It seems that I'm getting an error while trying to launch a new instance.

In the nova-conductor log I'm getting many timeouts; see the logs below:

"2014-11-02 09:43:18.802 5000 TRACE nova.scheduler.driver MessagingTimeout: Timed out waiting for a reply to message ID c59c2cb151314ce6b5ee6c25058a99af"
  • Ubuntu 14.04.1
  • Juno, all in one, manual installation.

root@icehouse:~# keystone service-list

+----------------------------------+----------+----------+-------------------------+
|                id                |   name   |   type   |       description       |
+----------------------------------+----------+----------+-------------------------+
| ab6302bfbca241b794b11239a32bb2c9 |  glance  |  image   | OpenStack Image Service |
| 6b483a0cf9594a4da987cd4fa1f5aa7d | keystone | identity |    OpenStack Identity   |
| 84f3fe9f93584f8ebeb2b284e5dcf727 | neutron  | network  |   OpenStack Networking  |
| e12d40b834854291a832b8d84c742b8e |   nova   | compute  |    OpenStack Compute    |
+----------------------------------+----------+----------+-------------------------+

root@icehouse:~# keystone endpoint-list

+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
|                id                |   region  |                publicurl                |               internalurl               |                 adminurl                |            service_id            |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
| 9424b0fecdb44bae8c26aa967f67eef2 | regionOne |          http://controller:9292         |          http://controller:9292         |          http://controller:9292         | ab6302bfbca241b794b11239a32bb2c9 |
| 988788edfba247d68806a2355dbf7b8a | regionOne |          http://controller:9696         |          http://controller:9696         |          http://controller:9696         | 84f3fe9f93584f8ebeb2b284e5dcf727 |
| aba9b6a195ab498c93cfaf466a67b18e | regionOne |       http://controller:5000/v2.0       |       http://controller:5000/v2.0       |       http://controller:35357/v2.0      | 6b483a0cf9594a4da987cd4fa1f5aa7d |
| bbf5eb29bcc146beb087940b7a1a4807 | regionOne | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | e12d40b834854291a832b8d84c742b8e |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+

root@icehouse:~# nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-cert        icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-consoleauth icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-scheduler   icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-conductor   icehouse                             internal         enabled    :-)   2014-10-27 15:11:38
nova-compute     icehouse                             nova             enabled    :-)   2014-10-27 15:11:37

nova-conductor log:

2014-10-29 18:10:23.079 5084 ERROR nova.scheduler.driver [req-7e066556-1b52-4a55-bbac-4c841140ba3e None] Exception during scheduler.run_instance
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver Traceback (most recent call last):
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 614, in build_instances
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     request_spec, filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 49, in select_destinations
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 35, in __run_method
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     return getattr(self.instance, __name)(*args, **kwargs)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 34, in select_destinations
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 108, in select_destinations
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     request_spec=request_spec, filter_properties=filter_properties)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver     retry=self.retry)
2014-10-29 18:10:23.079 5084 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send ...
(log truncated)

9 answers

answered 2014-11-06 23:57:25 -0600 by Ram.Meena, updated 2014-11-27 05:46:52 -0600 (score: 2)

Hi,

The issue seems to be related to the AMQP server. I have also configured OpenStack with the Juno release, and my RabbitMQ settings are as below (note the oslo.messaging option name is rabbit_userid, not rabbit_user):

rpc_backend=rabbit
rabbit_host=controller
rabbit_userid=guest
rabbit_password=Password

The password is specific to your installation, and you may try changing rpc_backend to 'rabbit'. You need to make the same settings in the configuration files on all your OpenStack nodes.

I would suggest verifying the RabbitMQ status and making sure that all OpenStack nodes are able to ping/connect to the RabbitMQ server.

To verify that the RabbitMQ service is running on your rabbit server, run the command below (on systemd-based distributions; on Ubuntu 14.04, which uses upstart, run 'service rabbitmq-server status' instead):

#systemctl status rabbitmq-server.service -l

To verify the RabbitMQ server status, run the command below:

#rabbitmqctl status

This command should print detailed information about the AMQP server. If the status is good, then you need to make sure that the RabbitMQ port (5672 by default) is allowed through the firewall between your OpenStack nodes.
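For example, a quick reachability check (a minimal sketch; 'controller' is assumed to be the host running RabbitMQ):

# run from each OpenStack node; tests TCP connectivity to the default AMQP port
nc -zv controller 5672

If the connection is refused or times out, review the firewall rules on the RabbitMQ host.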

On the RabbitMQ server, you may run the command below to check whether your compute node is able to establish a connection to the rabbit server:

# lsof -i :5672 | grep 'compute_node_ip'

Make sure that there is no time lag between your OpenStack nodes; all the nodes should be synced to a central time server using NTP.
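A quick way to spot clock drift (assuming the ntp package is installed on each node):

# show configured peers and the current offset for this node
ntpq -p
# compare wall-clock time across nodes
date -u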

I would also suggest setting debug mode to 'true' in the nova configuration file, as below:

debug=True

Restart the nova compute service and check the logs again for more detailed messages.
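With debug enabled, something like the following can help pull the relevant lines out of the log (the path assumes the Ubuntu packages):

grep -E "ERROR|MessagingTimeout" /var/log/nova/nova-conductor.log | tail -n 20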

answered 2014-10-29 11:02:42 -0600 by azriel, updated 2014-10-29 11:57:53 -0600 (score: 1)

I'm still facing the timeout issue; I'm not able to boot an instance from either Horizon or the nova CLI.

2014-10-29 18:04:11.813 5453 INFO oslo.messaging._drivers.impl_rabbit [req-aef12a6a-1b8c-4c56-8a21-d306c6b24613 ] Connecting to AMQP server on controller:5672
2014-10-29 18:04:11.832 5453 INFO oslo.messaging._drivers.impl_rabbit [req-aef12a6a-1b8c-4c56-8a21-d306c6b24613 ] Connected to AMQP server on controller:5672
2014-10-29 18:05:11.854 5453 ERROR nova.scheduler.driver [req-aef12a6a-1b8c-4c56-8a21-d306c6b24613 None] Exception during scheduler.run_instance
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver Traceback (most recent call last):
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 614, in build_instances
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     request_spec, filter_properties)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 49, in select_destinations
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 35, in __run_method
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     return getattr(self.instance, __name)(*args, **kwargs)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 34, in select_destinations
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     context, request_spec, filter_properties)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 108, in select_destinations
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     request_spec=request_spec, filter_properties=filter_properties)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     retry=self.retry)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     timeout=timeout, retry=retry)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     retry=retry)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 397, in _send
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     result = self._waiter.wait(msg_id, timeout)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 285, in wait
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver     reply, ending = self._poll_connection(msg_id, timeout)
2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 235, in _poll_connection ...
(log truncated)

Comments

Relevant error: 2014-10-29 18:05:11.854 5453 TRACE nova.scheduler.driver MessagingTimeout: Timed out waiting for a reply to message ID 0388f5f5060241499fa4ffd53d395b6a

Check nova-conductor logs, verify rabbitmq status. Repost this as an update to your original question.

— mpetason (2014-10-29 11:11:13 -0600)

Please find the nova-conductor log and RabbitMQ status above.

— azriel (2014-10-29 11:56:12 -0600)
answered 2014-10-27 10:27:45 -0600 by mpetason (score: 1)

Are you trying to launch instances using volumes, or are you interacting with volumes at all? The error message appears to be about Cinder:

"[Mon Oct 27 15:00:20.322762 2014] [:error] [pid 4257:tid 140382927374080] Recoverable error: Invalid service catalog service: volume"

Since Cinder is not installed/configured, you will not be able to use volumes. The correct option for launching an instance is "launch from image", not anything that requires a volume to be configured.


Comments

Yes, it seems to be complaining about the volume service. What command line do you use to launch the instance?

— xtrill (2014-10-27 12:09:25 -0600)

You could use something like:

nova boot --flavor FLAVOR_ID --image IMAGE_ID --key-name KEY_NAME --security-groups SEC_GROUP INSTANCE_NAME

http://docs.openstack.org/user-guide/...

— mpetason (2014-10-27 12:15:46 -0600)

You'll have to add networking too. Did you configure Horizon? For new users it is usually easier to launch through the wizard in the dashboard.

— mpetason (2014-10-27 12:18:55 -0600)

I'm using Juno for OpenStack, and no, I'm not using Cinder; I'm using the local storage on my hardware. I'm using Horizon to launch the instance ("from image") and getting that error in the creation dialog, but the instance still launches fine. Launching an instance from the nova CLI seems OK.

— azriel (2014-10-28 02:33:14 -0600)

So the error message is accurate then: a recoverable error telling you the Cinder service isn't set up in the catalog. You should be fine.

— mpetason (2014-10-28 10:09:35 -0600)
answered 2015-03-18 16:38:43 -0600 (score: 0)

I had this problem from time to time on Ubuntu 14.04. I don't really know what's going on; all services work normally. I suppose this is a bug in the Juno release. Restarting services sometimes helps:

service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
service openvswitch-switch restart
service nova-compute restart
service neutron-server restart
service neutron-plugin-openvswitch-agent restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart

answered 2014-12-05 05:09:53 -0600 (score: 0)

I got the same error while launching an instance, also on Juno. It seems to fail in oslo.messaging. The problem is random in my environment. Has anyone fixed it?

answered 2014-10-30 06:29:53 -0600 (score: 0)

That error is in no way related to Cinder. Could you please share the nova-compute.log after executing the nova boot command, and also paste the command you used?


Comments

nova boot --flavor m1.small --image Ubuntu-Yaron.A --nic net-id=c780f5af-c6e3-4c21-9ba6-fa8fe7fe49db --security-group default ubuntu_cmd_03

https://dl.dropboxusercontent.com/u/108810311/launch/nova-compute.log
https://dl.dropboxusercontent.com/u/108810311/launch/nova-conductor.log
— azriel (2014-11-02 01:46:45 -0600)

I'm having the same problem: I installed Juno from scratch and I get the same error in the /var/log/apache2/error.log file when I spin up new instances using Horizon. When I do it from the command line I don't see any of that. In both cases the instances run happily. Has anyone fixed the annoying message?

— caddo (2014-11-15 10:55:34 -0600)
answered 2014-10-31 14:03:09 -0600 by rrottach, updated 2014-10-31 14:04:40 -0600 (score: 0)

When I converted from Icehouse to Juno, Juno required Cinder for volumes. I added an additional 1 TB drive to each of my compute nodes. Once I added the drive and installed Cinder, I could launch instances. I used the following steps to install Cinder, on Ubuntu 14.04.

**************** Cinder (Block Storage) Setup *******

Cinder requires a separate drive installed for the volumes

Create the Cinder database and user on the Controller

mysql -u root -p << EOF
CREATE DATABASE cinder DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinderdbadmin'@'%' IDENTIFIED BY 'cinder password';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder password';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder password';
EOF

Get credentials

source admin-openrc

Create User

keystone user-create --name=cinder --pass='cinder password' --email='admin email'
keystone user-role-add --user=cinder --tenant=service --role=admin

Register service for version 1

keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"

keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl='http://controller:8776/v1/%(tenant_id)s' \
  --internalurl='http://controller:8776/v1/%(tenant_id)s' \
  --adminurl='http://controller:8776/v1/%(tenant_id)s'

Register service for version 2

keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"

keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl='http://controller:8776/v2/%(tenant_id)s' \
  --internalurl='http://controller:8776/v2/%(tenant_id)s' \
  --adminurl='http://controller:8776/v2/%(tenant_id)s'
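As a quick sanity check after registering (not part of the original steps), confirm that both volume services and their endpoints show up:

keystone service-list | grep volume
keystone endpoint-list | grep 8776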

Install services on Controller

sudo apt-get install cinder-api cinder-scheduler python-cinderclient

Add user to cinder group on Controller

sudo usermod -aG cinder ubuntu

Log out and back in to refresh group privileges

sudo chmod 770 /etc/cinder
sudo chmod 770 /var/log/cinder

Edit the cinder.conf file

sudo pico /etc/cinder/cinder.conf

under [DEFAULT]:

auth_strategy = keystone
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = rabbit user
rabbit_password = rabbit password

[database]

connection = mysql://cinderdbadmin:cinderpassword@controller/cinder

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = admin password

Restart the services

sudo service cinder-scheduler restart

sudo service cinder-api restart

Create Database tables

sudo cinder-manage db sync

sudo rm -f /var/lib/cinder/cinder.sqlite
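To verify that the controller-side services came up (a quick check; cinder-manage reads the database directly, so no credentials file is needed):

sudo cinder-manage service list
# cinder-scheduler should be listed with state :-)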

Install services on Compute Node

sudo apt-get install lvm2

create cinder-volumes on Compute Node

Run sudo lshw -C disk to find the drives on the box, then use sudo fdisk /dev/sdb to create a partition (e.g. /dev/sdb1). There is no need to create a filesystem on it, since LVM uses the raw partition.
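Alternatively, a non-interactive way to partition the new disk (assuming it is /dev/sdb; double-check with lshw first, as this is destructive):

sudo parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%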

sudo pvcreate /dev/sdb1

sudo vgcreate cinder-volumes /dev/sdb1

Edit lvm.conf under devices

sudo pico /etc/lvm/lvm.conf

Cinder filter (accept the root disk sda1 and the new Cinder disk sdb1, reject everything else):

filter = [ "a/sda1/", "a/sdb1/", "r/.*/" ]

Check pvdisplay

sudo pvdisplay

Install cinder-volume

sudo apt-get install cinder-volume

Edit cinder.conf under [DEFAULT]:

sudo pico /etc/cinder/cinder.conf

rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = rabbit user
rabbit_password = rabbit password
glance_host = controller

[database]

connection = mysql://cinderdbadmin:cinderpassword@controller/cinder

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = admin password

Restart the services ... (answer truncated)


Comments

Thanks, but it seems like the Cinder error is just a warning; I'm having timeouts in my nova components. Thanks a lot anyway!

— azriel (2014-11-02 01:47:42 -0600)
answered 2015-05-13 23:38:53 -0600 by isabyr, updated 2015-05-13 23:42:22 -0600 (score: 0)

@shor52rus: after a fresh reboot of the controller you should check your nova-scheduler.log. If there are errors like:

OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on 'controller' (111)") None None

then it may be this problem: https://ask.openstack.org/en/question/53602/services-start-up-priority-at-boot-on-controller/
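A quick way to confirm this case after a reboot (a sketch; service names assume the Ubuntu packages):

# check that MySQL is reachable from the controller
mysqladmin -h controller -u root -p status
# if it is up now, restart the nova services that failed to connect at boot
service nova-scheduler restart
service nova-conductor restart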

answered 2015-02-11 03:25:36 -0600 by Madko, updated 2015-02-11 04:01:36 -0600 (score: 0)

Same problem here; any fix or news? Here is the relevant part of nova.log on my compute node:

2015-02-11 10:48:43.091 1067 ERROR oslo.messaging.rpc.dispatcher [req-9be93728-5e5e-423d-9797-ceef24994330 ] Exception during message handling: Timed out waiting for a reply to message ID 9b98aebd76dc43b3a0239daa68673050
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 298, in decorated_function
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     pass
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 284, in decorated_function
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 348, in decorated_function
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-02-11 10:48:43.091 1067 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 326, in decorated_function
2015-02-11 ...
(log truncated)

Comments

Having the issue as well. Creating volumes works just fine, but creating an instance, with or without volumes, still fails.

— ethode (2015-03-30 11:02:51 -0600)
