Hello,

To use multiple Ceilometer central agents you must provide a Redis or Memcached backend and configure it as the backend_url in ceilometer.conf. The agents then talk to each other over this backend and partition the workload among themselves, e.g. one handles CPU metering, another RAM, and so on. The backend also gives the agents heartbeat checking, so they can detect failures and repartition their workload.
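As a minimal sketch, assuming a Redis server reachable at controller:6379 (the host name is a placeholder for your environment), the coordination settings in ceilometer.conf would look something like this:

    [coordination]
    # Tooz coordination backend the central agents use to form a group,
    # heartbeat each other and partition the polling workload between them.
    backend_url = redis://controller:6379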

What you are looking for is a load-balanced MongoDB replica set where you connect to the VIP. Since MongoDB is the recommended backend right now, you should set a sensible value for time_to_live in ceilometer.conf so that you don't store too much data. Expiry is handled by the ceilometer-expirer agent, which supports MongoDB.
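As a sketch, assuming a three-member replica set named "ceilometer" and a 30-day retention period (the host names, credentials and TTL value are placeholders), the relevant [database] section could look like this:

    [database]
    # Connection string listing the replica set members; the driver
    # directs writes to whichever member is currently primary.
    connection = mongodb://ceilometer:password@node1:27017,node2:27017,node3:27017/ceilometer?replicaSet=ceilometer
    # Expire samples older than 30 days (value in seconds, -1 = keep forever).
    time_to_live = 2592000

Note that ceilometer-expirer is not a daemon; you typically run it periodically, for example from cron.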

Another thing I recommend is dumping the MongoDB database and archiving the data instead of always expiring it, as sketched below. As far as I know, it is possible to upgrade from a replica set to sharding if you need to grow in the future.
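For the archiving, a rough sketch using the standard MongoDB tools (the database name "ceilometer" and the backup paths are assumptions based on a default setup):

    # Dump the ceilometer database into a dated directory ...
    mongodump --host node1:27017 --db ceilometer --out /backup/ceilometer-$(date +%F)
    # ... then compress it for long-term storage.
    tar czf /backup/ceilometer-$(date +%F).tar.gz -C /backup ceilometer-$(date +%F)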

This is mostly based on the Juno release; for future reference you should look into the Gnocchi project, which enables far better scaling than Ceilometer offers natively.

When I did the research for our Ceilometer setup I collected some good links; see below.

http://blog.zhaw.ch/icclab/managing-ceilometer-data-in-openstack/

http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-cetral-compute-agent-ha.html

https://openstack.nimeyo.com/35311/openstack-ceilometer-high-availability-in-active-active

https://blog.sileht.net/using-a-shardingreplicaset-mongodb-with-ceilometer

http://www.severalnines.com/blog/openstack-metering-how-install-ceilometer-mongodb

I hope this will help you.

Best regards
