2019-01-18 04:24:42 -0600
| received badge | ● Popular Question |
2019-01-18 04:24:42 -0600
| received badge | ● Famous Question |
2019-01-18 04:24:42 -0600
| received badge | ● Notable Question |
2018-03-29 08:52:53 -0600
| received badge | ● Famous Question |
2018-03-29 08:52:53 -0600
| received badge | ● Notable Question |
2018-03-29 08:52:53 -0600
| received badge | ● Popular Question |
2017-08-30 21:10:15 -0600
| commented question | NoSuchTableError: `compute_nodes` Did you create the nova, nova_api and cell databases? |
2017-08-30 21:08:09 -0600
| commented question | Ocata installation - unable to launch instance (default default] Unexpected exception in API method) Did you configure placement_api? Similar to https://ask.openstack.org/en/question/102256/how-to-configure-placement-service-for-compute-node-on-ocata/ |
2017-08-30 17:13:03 -0600
| commented answer | mysqlworkbench settings for openstack nova database port Thank you. Username: nova, and the password is whatever is used for NOVA_DBPASS |
2017-08-30 15:45:13 -0600
| answered a question | OCATA- Unable to delete host aggregate Figured out the tables. The DB is nova_api and the tables are aggregates and aggregate_metadata; because of a foreign key, you need to delete from aggregate_hosts and aggregate_metadata before deleting from aggregates. MariaDB [(none)]> use nova_api;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [nova_api]> select * from aggregates;
+---------------------+---------------------+----+--------------------------------------+----------+
| created_at | updated_at | id | uuid | name |
+---------------------+---------------------+----+--------------------------------------+----------+
| 2017-08-22 01:46:32 | 2017-08-30 14:02:31 | 1 | e4a4a16e-7513-42fc-9808-32eab5ce6077 | compute |
| 2017-08-30 17:41:34 | NULL | 2 | 6caa9100-012a-40fa-81fd-3b9de6554789 | allnodes |
+---------------------+---------------------+----+--------------------------------------+----------+
MariaDB [nova_api]> delete from aggregate_hosts where aggregate_id=1;
MariaDB [nova_api]> select * from aggregate_metadata;
+---------------------+------------+----+--------------+-------------------+----------+
| created_at | updated_at | id | aggregate_id | key | value |
+---------------------+------------+----+--------------+-------------------+----------+
| 2017-08-22 01:47:14 | NULL | 1 | 1 | availability_zone | nova2 |
| 2017-08-30 17:41:34 | NULL | 2 | 2 | availability_zone | allnodes |
+---------------------+------------+----+--------------+-------------------+----------+
2 rows in set (0.00 sec)
MariaDB [nova_api]> delete from aggregate_metadata where id =1;
MariaDB [nova_api]> delete from aggregates where id =1;
Query OK, 1 row affected (0.02 sec)
|
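The same cleanup condensed into a sketch, assuming the aggregate to remove has id 1 (adjust the id, and back up the database first). Children first, since aggregate_hosts and aggregate_metadata reference aggregates through a foreign key:
# Sketch only: delete dependent rows before the aggregate row itself.
mysql -u root -p nova_api <<'SQL'
DELETE FROM aggregate_hosts    WHERE aggregate_id = 1;
DELETE FROM aggregate_metadata WHERE aggregate_id = 1;
DELETE FROM aggregates         WHERE id = 1;
SQL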
2017-08-30 09:23:20 -0600
| received badge | ● Notable Question |
2017-08-30 09:23:20 -0600
| received badge | ● Famous Question |
2017-08-30 09:23:20 -0600
| received badge | ● Popular Question |
2017-08-30 09:20:13 -0600
| asked a question | mysqlworkbench settings for openstack nova database port What port should I use to connect to the OpenStack DBs (nova, neutron, nova_api, etc.) via MySQL Workbench?
I understand I can use "mysql -u root -p", but I would like to know the settings for MySQL Workbench. |
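For reference, these installs typically run stock MariaDB listening on the default port 3306, so MySQL Workbench can usually point at the controller's address on 3306 with the nova user and NOVA_DBPASS. A quick sketch to confirm the listener and credentials from the CLI first (the host placeholder below is illustrative):
ss -tlnp | grep -E 'mysqld|mariadb'                 # confirm the address and port MariaDB listens on
mysql -h <controller-ip> -P 3306 -u nova -p nova    # the same host/port/credentials work in Workbench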
2017-08-30 08:55:30 -0600
| asked a question | OCATA- Unable to delete host aggregate After creating a host aggregate, I removed all hosts from it and deleted the compute node as well. However, I am not able to delete the host aggregate using Horizon. I suspect possible corruption in the DB. Is there a table I could use to delete the host aggregate from the DB? (Ocata release.) "Error: Unable to delete host aggregate:" Thanks. |
2016-10-05 04:20:21 -0600
| received badge | ● Self-Learner |
2016-08-22 02:19:31 -0600
| received badge | ● Student |
2016-08-22 02:07:05 -0600
| received badge | ● Popular Question |
2016-08-22 02:07:05 -0600
| received badge | ● Famous Question |
2016-08-22 02:07:05 -0600
| received badge | ● Notable Question |
2016-06-07 10:48:56 -0600
| received badge | ● Famous Question |
2016-04-18 10:05:38 -0600
| asked a question | vm with private ip using heat template Trying to create a VM and specify the private IP address in the heat template using the fixed_ips tag.
I am getting the following error: ERROR: Property error: : resources.my_instance.properties.networks[0].fixed_ip: : "192.168.10.105" does not validate ip_addr (constraint not found). I tried switching the type from string to ip_addr with no success,
and tried setting the constraints manually, also with no success. Any suggestions? |
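For comparison, a minimal HOT sketch of where fixed_ip normally sits under networks in OS::Nova::Server; the image, flavor and network names below are placeholders, not taken from the original template:
cat > fixed_ip_test.yaml <<'EOF'
heat_template_version: 2013-05-23
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros            # placeholder image
      flavor: m1.tiny          # placeholder flavor
      networks:
        - network: private     # placeholder network name or ID
          fixed_ip: 192.168.10.105
EOF
heat stack-create -f fixed_ip_test.yaml fixed-ip-test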
2016-02-15 23:40:31 -0600
| received badge | ● Notable Question |
2016-02-15 23:40:31 -0600
| received badge | ● Famous Question |
2016-02-15 23:40:31 -0600
| received badge | ● Popular Question |
2016-01-11 07:56:55 -0600
| received badge | ● Notable Question |
2015-11-02 17:00:00 -0600
| received badge | ● Famous Question |
2015-10-22 03:13:20 -0600
| received badge | ● Famous Question |
2015-10-13 09:49:17 -0600
| received badge | ● Notable Question |
2015-10-10 09:36:10 -0600
| received badge | ● Popular Question |
2015-10-09 19:21:46 -0600
| answered a question | No IP address for instance, bridge interfaces down The question was, should the br-int and br-tun interfaces show as up? Reinstalling the network node and recreating the private networks helped: instances now obtain their IP addresses, both private and floating. However, the br-int, br-tun and br-ex state still shows as down. |
2015-10-08 21:18:47 -0600
| asked a question | No IP address for instance, bridge interfaces down My setup, which has a controller, a network node and 3 compute nodes with separate management and tunnel networks, was working fine initially. After adding a 5th node as storage (i.e. installed cinder), instances are no longer receiving IP addresses.
Further digging showed all the bridge interfaces as down on all the nodes. I tried bringing them up manually with no luck, and rebooting all the boxes gave the same result. The logs don't show anything obvious. Does anyone have ideas why the bridge interfaces are down, and which logs to check to narrow down the issue? [root@network ~]#
7: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether d2:37:4c:a5:a3:e0 brd ff:ff:ff:ff:ff:ff
8: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 0c:c4:7a:4a:e9:03 brd ff:ff:ff:ff:ff:ff
9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether de:3a:14:9f:c0:47 brd ff:ff:ff:ff:ff:ff
10: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 52:50:47:cc:e7:49 brd ff:ff:ff:ff:ff:ff
|
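A minimal set of checks for this situation, assuming Open vSwitch with systemd; the service names and log paths below are the usual RDO ones and may differ on other distributions:
ovs-vsctl show                                      # bridges, ports, and any "error:" entries
systemctl status openvswitch neutron-openvswitch-agent
tail -n 50 /var/log/neutron/openvswitch-agent.log
tail -n 50 /var/log/neutron/dhcp-agent.log          # on the network node, for missing DHCP leases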
2015-10-06 18:13:29 -0600
| received badge | ● Notable Question |
2015-10-06 10:30:03 -0600
| marked best answer | neutron-openvswitch-agent not starting up on compute node Hello, I am trying to set up Juno in a two-node configuration (two interfaces each). I went through plenty of blogs on the net and am using packstack for the installation. After the installation succeeded, I changed the interfaces so that br-ex on the controller node can reach a subnet I needed to reach.
Launching an instance fails, and further verification showed that neutron-openvswitch-agent is not running on the compute node.
Can someone suggest what to verify to correct my configuration? I am not seeing the ml2 folder on the compute node (I can see it on the controller/network node). Is this expected? [root@controller-network ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
| 03659441-fc17-4af2-8b53-f7867a653173 | Open vSwitch agent | controller-network | :-) | True | neutron-openvswitch-agent |
| 2e9fdcae-32db-489d-a153-09905243cb26 | DHCP agent | controller-network | :-) | True | neutron-dhcp-agent |
| 68011b95-b3c6-4d92-a679-8d1cc70a3e15 | Open vSwitch agent | compute | xxx | True | neutron-openvswitch-agent |
| 68a020f7-f5e6-4247-9f23-db970c29b1d3 | L3 agent | controller-network | :-) | True | neutron-l3-agent |
| e02bc776-850f-4b04-808e-2d8b9be9e2ab | Metadata agent | controller-network | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
[root@compute ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
| 03659441-fc17-4af2-8b53-f7867a653173 | Open vSwitch agent | controller-network | :-) | True | neutron-openvswitch-agent |
| 2e9fdcae-32db-489d-a153-09905243cb26 | DHCP agent | controller-network | :-) | True | neutron-dhcp-agent |
| 68011b95-b3c6-4d92-a679-8d1cc70a3e15 | Open vSwitch agent | compute | xxx | True | neutron-openvswitch-agent |
| 68a020f7-f5e6-4247-9f23-db970c29b1d3 | L3 agent | controller-network | :-) | True | neutron-l3-agent |
| e02bc776-850f-4b04-808e-2d8b9be9e2ab | Metadata agent | controller-network | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
[root@controller-network ~]# egrep -v '(^$|^#)' /etc/neutron/plugin.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[securitygroup]
enable_security_group = True
[root@controller-network ~]# egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[securitygroup]
enable_security_group = True
[root@controller-network ~]# egrep -v '(^$|^#)' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 13.12.15.45
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@compute ~(keystone_admin)]# egrep -v '(^$|^#)' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =13.12.15.47
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
|
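A few checks that usually narrow this down on an RDO/packstack install; the service and file names below are the usual RDO ones and may differ elsewhere:
systemctl status neutron-openvswitch-agent          # why is the agent not running on the compute node?
tail -n 50 /var/log/neutron/openvswitch-agent.log
# On RDO, /etc/neutron/plugin.ini is typically a symlink to the active plugin config;
# the compute node's agent reads the OVS file shown above, so a missing ml2 directory
# on the compute node is not unusual by itself.
ls -l /etc/neutron/plugin.ini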
2015-10-06 10:13:32 -0600
| marked best answer | Kilo - Adding second compute node I am trying to make a second compute node operational; however, so far I have not succeeded in launching an instance on it. I followed the steps for adding a compute node from the OpenStack documentation. Error: Failed to perform requested operation on instance "Instancetest", the instance has an error status: Please try again later [Error: No valid host was found. There are not enough hosts available.]. Below are a few logs I noticed; can anyone suggest what I could be missing? [root@compute2 nova]# neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 0cb8d0d4-140a-4446-add8-7881f0a07dda | DHCP agent | network | :-) | True | neutron-dhcp-agent |
| 2a164cb6-0c6d-418a-ab7d-f68a0f3a3032 | L3 agent | network | :-) | True | neutron-l3-agent |
| abbe20eb-8f49-43e6-a0d7-d2625ef07084 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| d4e1b8a7-f826-4213-9f5f-9ab936d4f004 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent |
| ec116376-6d3f-4623-b5b5-78736ac41a5a | Metadata agent | network | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
From the nova conductor log on Controller: 2015-10-05 21:22:57.378 4737 WARNING nova.scheduler.utils [req-78fe3a75-24fc-48c8-8041-44414315acea 96771cdf5d2844ed919a172556dec532 e4b31c40067e473c8c07fe5ff1021ac7 - - -] [instance: 8e3ae49a-27e4-4c06-8813-c0b900c4b9e5] Setting instance to ERROR state.
2015-10-05 22:27:52.634 4738 ERROR nova.scheduler.utils [req-f335dd94-73fc-489d-b6bd-cb7187a79c1c 96771cdf5d2844ed919a172556dec532 e4b31c40067e473c8c07fe5ff1021ac7 - - -] [instance: baf256cf-3c0f-49ea-9535-f1aea5bb0827] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2235, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2381, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance baf256cf-3c0f-49ea-9535-f1aea5bb0827 was re-scheduled: Unexpected vif_type=binding_failed\n']
2015-10-05 22:27:52.652 4738 WARNING nova.scheduler.utils [req-f335dd94-73fc-489d-b6bd-cb7187a79c1c 96771cdf5d2844ed919a172556dec532 e4b31c40067e473c8c07fe5ff1021ac7 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
filter_properties)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2015-10-05 22:27:52.653 4738 WARNING nova.scheduler.utils [req-f335dd94-73fc-489d-b6bd-cb7187a79c1c 96771cdf5d2844ed919a172556dec532 e4b31c40067e473c8c07fe5ff1021ac7 - - -] [instance: baf256cf-3c0f-49ea-9535-f1aea5bb0827] Setting instance to ERROR state.
From /var/log/nova/nova-compute.log 2015-10-05 22:46:34.397 1092 INFO nova.scheduler.client.report [req-f107233e-76a4-4dd3-aa39-8739920e327f - - - - -] Compute_service record updated for ('compute2', 'compute2')
2015-10-05 22:46:34.397 1092 INFO nova.compute.resource_tracker [req-f107233e-76a4-4dd3-aa39-8739920e327f - - - - -] Compute_service record updated for compute2:compute2
2015-10-05 22:47:35.127 1092 INFO nova.compute.resource_tracker [req-f107233e-76a4-4dd3-aa39-8739920e327f - - - - -] Auditing locally available compute resources for node compute2
2015-10-05 22:47:35.313 1092 INFO nova.compute.resource_tracker [req-f107233e-76a4-4dd3-aa39-8739920e327f - - - - -] Total usable vcpus: 8, total allocated vcpus: 0
2015-10-05 22:47:35.313 1092 INFO nova.compute.resource_tracker [req-f107233e-76a4-4dd3-aa39-8739920e327f - - - - -] Final resource view: name=compute2 phys_ram=31986MB used_ram=5632MB phys_disk=464GB used_disk=42GB total_vcpus=8 used_vcpus=0 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x5a43b10>
|
2015-10-06 10:13:06 -0600
| received badge | ● Editor |
2015-10-06 09:08:10 -0600
| received badge | ● Popular Question |
2015-10-06 08:57:13 -0600
| commented question | Kilo - Adding second compute node Compute1: Auditing locally available compute resources for node compute1
2015-10-06 09:56:31.233 1079 INFO nova.compute.resource_tracker [req-152e1f80-6382-4d71-8fed-7b631c2eaf77 - - - - -] Total usable vcpus: 8, total allocated vcpus: 9
2015-10-06 09:56:31.234 1079 INFO nova.compute.resource_tr |
2015-10-06 08:54:04 -0600
| commented question | Kilo - Adding second compute node Compute1 logs look clean; Compute2, however, keeps throwing this exception: 2015-10-06 09:52:21.665 6478 ERROR oslo_messaging._drivers.impl_rabbit [-] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.
Is this a concerning message in the log? |
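A quick check worth doing here, since 127.0.0.1:5672 usually means the rabbit_host setting never took effect on this node; look at where (and in which section) the rabbit settings actually live:
grep -n 'rabbit_host\|rabbit_userid' /etc/nova/nova.conf /etc/neutron/neutron.conf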
2015-10-06 07:09:42 -0600
| received badge | ● Popular Question |
2015-10-05 22:55:09 -0600
| answered a question | Kilo - Adding second compute node Is it possible that the hypervisor stats and the dashboard display are misleading? On the two compute nodes the number of CPUs is 8+8 = 16. I am getting the error once the number of instances exceeds the number of CPUs present (if vCPU ~ CPU). If that's the case, why would the errored instances appear as if they were spawning on the second compute node? Instance Overview
Information:
Name: compute2test-3
ID: 62c1e1e9-e9fb-4a5e-ab86-baca549f287e
Status: Error
Availability Zone: nova
Created: Oct. 6, 2015, 3:44 a.m.
Time Since Created: 9 minutes
Host: compute2
Fault Message: No valid host was found. There are not enough hosts available
 [root@compute1 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 58
Model name: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
Stepping: 9
CPU MHz: 3499.933
BogoMIPS: 6600.13
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
[root@compute1 ~]#
[root@compute2 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 58
Model name: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
Stepping: 9
CPU MHz: 1675.265
BogoMIPS: 6600.18
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Answer: Thanks, Raffie. From the logs you suggested (tail -f /var/log/{nova,neutron}/*.log), the Compute2 node is attempting to connect to the AMQP server and failing. Further checking the config with
[root@compute2 ~]# egrep -v '(^#|^$)' /etc/neutron/neutron.conf I noticed I had entered the rabbit details in the wrong place... **Incorrect for my setup:**
[oslo_messaging_amqp]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
**Correct for my setup**
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
After correcting this, the openvswitch agent on compute2 established correctly. [root@compute2 ~]# neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 0cb8d0d4-140a-4446-add8-7881f0a07dda | DHCP agent | network | :-) | True | neutron-dhcp-agent |
| 29951dc3-8070-4fe9-8e08-d24fb420c0dd | Open vSwitch agent | compute2 | :-) | True | neutron-openvswitch-agent |
| 2a164cb6-0c6d-418a-ab7d-f68a0f3a3032 | L3 agent | network | :-) | True | neutron-l3-agent |
| abbe20eb-8f49-43e6-a0d7-d2625ef07084 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| d4e1b8a7-f826-4213-9f5f-9ab936d4f004 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent |
| ec116376-6d3f-4623-b5b5-78736ac41a5a | Metadata agent | network | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
|
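For completeness: after moving the rabbit settings into [oslo_messaging_rabbit], the services that read these files have to be restarted to pick up the change. A sketch assuming the usual RDO/Kilo service names on the compute node:
systemctl restart neutron-openvswitch-agent openstack-nova-compute
neutron agent-list                                  # the compute2 OVS agent should now report alive ( :-) )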
2015-10-05 22:04:50 -0600
| marked best answer | How to fix "Something went wrong!" kilo dashboard issue? I keep getting the error below when using the dashboard on Kilo. I updated various files following Google/OpenStack search results but still can't figure out the cause. Does anyone have ideas? Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that doesn't help, contact your local administrator. tail -f /var/log/httpd/error_log
httpd logs: [Mon Oct 05 18:18:22.452598 2015] [:error] [pid 5632] Login successful for user "admin".
[Mon Oct 05 18:18:22.453204 2015] [:error] [pid 5632] Internal Server Error: /dashboard/auth/login/
[Mon Oct 05 18:18:22.453215 2015] [:error] [pid 5632] Traceback (most recent call last):
[Mon Oct 05 18:18:22.453219 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
[Mon Oct 05 18:18:22.453222 2015] [:error] [pid 5632] response = wrapped_callback(request, *callback_args, **callback_kwargs)
[Mon Oct 05 18:18:22.453224 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper
[Mon Oct 05 18:18:22.453227 2015] [:error] [pid 5632] return view(request, *args, **kwargs)
[Mon Oct 05 18:18:22.453229 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view
[Mon Oct 05 18:18:22.453231 2015] [:error] [pid 5632] response = view_func(request, *args, **kwargs)
[Mon Oct 05 18:18:22.453233 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func
[Mon Oct 05 18:18:22.453235 2015] [:error] [pid 5632] response = view_func(request, *args, **kwargs)
[Mon Oct 05 18:18:22.453237 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/openstack_auth/views.py", line 111, in login
[Mon Oct 05 18:18:22.453239 2015] [:error] [pid 5632] **kwargs)
[Mon Oct 05 18:18:22.453242 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper
[Mon Oct 05 18:18:22.453244 2015] [:error] [pid 5632] return view(request, *args, **kwargs)
[Mon Oct 05 18:18:22.453246 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view
[Mon Oct 05 18:18:22.453257 2015] [:error] [pid 5632] response = view_func(request, *args, **kwargs)
[Mon Oct 05 18:18:22.453259 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func
[Mon Oct 05 18:18:22.453261 2015] [:error] [pid 5632] response = view_func(request, *args, **kwargs)
[Mon Oct 05 18:18:22.453263 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/contrib/auth/views.py", line 51, in login
[Mon Oct 05 18:18:22.453265 2015] [:error] [pid 5632] auth_login(request, form.get_user())
[Mon Oct 05 18:18:22.453267 2015] [:error] [pid 5632] File "/usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py ... (more) |