Understanding caching of images in OpenStack Glance

Hi everyone,

I am trying to understand how caching of Glance images works in a multinode environment while setting it up.

OpenStack setup:

OpenStack version: 5.2.0

Deployed with openstack-helm (OpenStack deployed on Kubernetes nodes). All the services run as pods on the controller node. There is 1 controller node and 2 compute nodes.

The controller node's glance-api is configured for image caching, i.e.:

 - the cache-management middleware is configured (which is there by default) via
   paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory
 - flavor is set to keystone+cachemanagement
 - image_cache_max_size and image_cache_dir are also defined
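
For context, this is roughly what I believe the relevant settings look like in glance-api.conf and glance-api-paste.ini (the directory and max-size values below are examples from my environment, not necessarily defaults):

    # glance-api.conf (values are examples from my environment)
    [paste_deploy]
    # enable the caching + cache-management middleware pipeline
    flavor = keystone+cachemanagement

    [DEFAULT]
    # directory where glance-api stores locally cached image files
    image_cache_dir = /var/lib/glance/image-cache
    # maximum size of the cache in bytes (example: 10 GB)
    image_cache_max_size = 10737418240

    # glance-api-paste.ini (shipped by default)
    [filter:cache_manage]
    paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory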

Then I created an instance.

When I executed glance-cache-manage list-cached on the controller node,
it listed the cached images in the format below:

Found 1 cached images...
+--------------------------------------+----------------------------+----------------------------+-----------+------+
|                  ID                  |    Last Accessed (UTC)     |    Last Modified (UTC)     |      Size | Hits |
+--------------------------------------+----------------------------+----------------------------+-----------+------+
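
In case it helps, this is how I have been inspecting the cache on the controller; the pod name and namespace are just examples from my openstack-helm deployment, and I am assuming image_cache_dir is set as above:

    # exec into the glance-api pod (pod name/namespace are examples)
    kubectl exec -it -n openstack glance-api-0 -- bash

    # cached image files live under image_cache_dir, named by image ID
    ls -lh /var/lib/glance/image-cache/

    # re-run the listing to watch the Hits counter
    glance-cache-manage list-cached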


My question:

When does the Hits column (an integer) get incremented? The Hits value changed from 0 to 1 and
does not change further, even though I have created multiple instances on the 2 compute nodes using this same cached image.

 1. Is it because the image is cached
    on both compute nodes, so
    nova-compute doesn't hit glance-api
    from the 3rd instance creation onwards?
        - If 'yes', what is the location where the image is cached
          on the compute node? (I have sketched how I would check this after this list.)
        - If 'no', and the flow is nova-compute --> glance-api --> glance cache,
          why is Hits not increasing? Did I miss any config?
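
For the 'yes' case above, this is how I am planning to check the compute nodes; I am assuming nova-compute keeps local base-image copies under <instances_path>/_base (please correct me if that assumption is wrong):

    # on a compute node (or inside the nova-compute pod), assuming the default instances_path
    ls -lh /var/lib/nova/instances/_base/

    # my understanding (assumption): files here are base images named by the
    # SHA-1 hash of the Glance image ID, downloaded once and reused for later instances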


Please correct me if my understanding of caching is wrong.

Thank you sincerely for your time and patience.