
Horizon slow: what could be the causes?

asked 2016-04-27 03:31:29 -0600

Sofiane68

Hi,

I deployed OpenStack with Mirantis in a vSphere environment, and Horizon is very slow.


Configuration

3 nodes:

  • Controller/Storage 1: 3 CPUs, 8 GB RAM, 200 GB HDD
  • Controller/Storage 2: 3 CPUs, 8 GB RAM, 200 GB HDD
  • Compute: 4 CPUs, 8 GB RAM, 100 GB HDD

In your opinion, what could be the causes?

Thank you


2 answers


answered 2016-04-28 01:35:59 -0600

ross-golder

updated 2016-05-11 22:10:44 -0600

Also considering the same question myself at the moment...

3 nodes (Dell R420s with 16 GB RAM):

  • Controller/Network: running nova/neutron/glance/cinder etc., plus Horizon, MySQL, RabbitMQ, and memcached
  • 2x Compute nodes: running nova/neutron/cinder

I believe the problem is that MySQL and RabbitMQ are using inordinate amounts of CPU and memory, causing the Controller1 node to constantly run into swap. The page requests from Horizon hit APIs that are backed by MySQL, so I suspect it comes down to slow database queries.
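To confirm the swapping part, the quickest check is plain Linux tooling, nothing OpenStack-specific:

free -m
vmstat 1 5

Non-zero 'si'/'so' columns in the vmstat output mean pages are actively moving in and out of swap, which would explain the general sluggishness.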

The compute nodes and the VMs themselves are running along nicely. It's just Horizon, which can take 15-30 seconds between page clicks on some occasions. We're only running 5-6 VMs, so I can't figure out how or why the mysqld and rabbitmq processes can be using the kind of CPU/memory that the process list reports.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9595 999 20 0 6690688 1.537g 4408 S 56.4 9.8 35:37.97 beam.smp
23598 999 20 0 13.317g 412044 12116 S 1.3 2.5 21:13.65 mysqld

It's been like this since Icehouse (we're on Mitaka now), although I suspect it's less to do with OpenStack itself and more to do with these services it relies on. I've read other articles that all suggest it's related to stale Keystone tokens, but I've cleared them and it's still the same.
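For anyone else trying the same thing, the usual way to clear them is the stock keystone-manage command (this assumes the SQL token driver; Fernet tokens are never persisted, so there is nothing to flush):

keystone-manage token_flush

Running that from a daily cron job keeps the token table from growing unbounded, but as noted it made no difference here.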

Here's a typical Nova API call from the logs... (15 secs!?)

2016-04-28 06:33:17.438 16517 INFO nova.osapi_compute.wsgi.server [req-4e774ef7-447f-4ec7-9035-4456d1035e95 47ed0175d27e4485b91fee5d076e8aae 9809424358874fe189b6392b8468f177 - - -] 10.0.0.13 "GET /v2/9809424358874fe189b6392b8468fabc/servers/detail?project_id=9809424358874fe189b6392b8468f177 HTTP/1.1" status: 200 len: 3581 time: 15.7520719

Surely this isn't 'standard' performance for all OpenStack deployments?
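A quick way to tell whether it's Horizon itself or the APIs underneath it (assuming the client is installed and your credentials are sourced):

time openstack server list

If that also takes 15+ seconds, the bottleneck is below Horizon, in the API/database layer rather than the dashboard.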

EDIT: Seems like I've been barking up the wrong tree. The majority of the memory was actually being consumed by the various API worker processes. Most API services were spawning one worker process per CPU core. We don't need 70+ nova-api processes to serve a handful of VMs for a handful of staff, so I discovered the '*_workers' configuration parameters for the nova-api, nova-conductor, glance-api, cinder-api and neutron-api/metadata (IIRC) services. Setting these to more modest values led to fewer unnecessary processes being spawned and considerably less memory (and swap) being consumed. The Horizon dashboard, and things in general on the controller node, are now running a lot more happily.
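For reference, this is roughly what the settings look like; treat it as a sketch, since exact option names and defaults vary a little between services and releases:

# /etc/nova/nova.conf
[DEFAULT]
osapi_compute_workers = 2
metadata_workers = 2
[conductor]
workers = 2

# /etc/glance/glance-api.conf
[DEFAULT]
workers = 2

# /etc/cinder/cinder.conf
[DEFAULT]
osapi_volume_workers = 2

# /etc/neutron/neutron.conf
[DEFAULT]
api_workers = 2
rpc_workers = 2

Restart the corresponding services afterwards. The worker counts default to the number of CPU cores, which is what produces the 70+ processes on a many-core box.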

-- Ross


answered 2017-01-30 15:46:34 -0600

Hi, I have the same problem. The controller node is itself virtualized, but that didn't matter: it was running happily until Keystone was moved to Apache. Then it started taking a lot of processes and memory. The database itself (MySQL) is also taking a huge amount of memory. I think Ceilometer has something to do with it.
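If the extra apache2 processes are the mod_wsgi daemons for Keystone, their count is set in the Keystone vhost config. As a sketch (file names vary by distribution; on Ubuntu it is typically /etc/apache2/sites-available/wsgi-keystone.conf):

WSGIDaemonProcess keystone-public processes=2 threads=2 user=keystone group=keystone
WSGIDaemonProcess keystone-admin processes=2 threads=2 user=keystone group=keystone

Lowering 'processes' from the distribution default and reloading Apache should cut the resident memory roughly in proportion.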

I'm trying to optimize the database, but I think there must be other problems as well.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2003 mysql 20 0 2170292 345324 8324 S 2.0 8.6 167:09.61 mysqld
2603 nova 20 0 790156 168864 4512 S 1.3 4.2 548:30.13 nova-api
2584 nova 20 0 291228 117016 4000 S 4.3 2.9 890:06.63 nova-conductor
2583 nova 20 0 289636 115516 4000 S 3.7 2.9 814:37.26 nova-conductor
2585 nova 20 0 289636 114784 3952 S 2.0 2.8 888:48.33 nova-conductor
7069 keystone 20 0 502240 107500 8952 S 0.0 2.7 0:05.61 apache2
7071 keystone 20 0 502232 107464 8936 S 0.0 2.7 0:05.48 apache2
7070 keystone 20 0 436184 106892 8952 S 0.0 2.7 0:05.34 apache2
16597 neutron 20 0 331456 103188 4316 S 0.0 2.6 65:08.53 neutron-server
2383 nova 20 0 284772 103144 4220 S 2.0 2.6 371:14.04 nova-scheduler
16596 neutron 20 0 331200 102672 4312 S 0.0 2.5 65:15.84 neutron-server
16599 neutron 20 0 331456 99700 4340 S 0.0 2.5 24:59.80 neutron-server
2372 nova 20 0 281704 97784 4180 S 0.0 2.4 34:54.86 nova-consoleaut
2602 nova 20 0 790380 93028 4292 S 2.0 2.3 331:44.91 nova-api
15771 neutron 20 0 325568 89428 5568 S 0.7 2.2 16:04.99 neutron-server
1536 rabbitmq 20 0 1912260 89024 3300 S 1.0 2.2 419:33.64 beam.smp

I will check again, but I think there's too much activity for this small installation. The server is normally above 20% usage... I was expecting things to be quieter.
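On the database side, the main knob is the InnoDB buffer pool plus the connection limit; a minimal sketch for a small controller node (the values are starting points, not tested recommendations):

# /etc/mysql/my.cnf (or a drop-in under conf.d/)
[mysqld]
innodb_buffer_pool_size = 512M
max_connections = 200

And since Ceilometer writes its samples into the database, expiring old samples with ceilometer-expirer (with a time-to-live configured in ceilometer.conf) should keep those tables, and the memory pressure they create, from growing without bound.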


