Distributed caching
This article describes how caching is used to offload repeated read operations from the database.
To offload repeated read operations from the database, a cache is placed in the application logic where these operations are made. Information returned from the database is stored in a memory cache, per instance of the running application. All read operations are cached aggressively so that the number of database operations is reduced.
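The per-instance pattern above can be sketched as a small read-through memory cache. This is a hypothetical illustration, not Litium's actual implementation; the names `cached_read` and `load_product` are made up for the example.

```python
# Hypothetical per-instance memory cache: each running application
# instance keeps its own dictionary of results read from the database.
_memory_cache = {}

def cached_read(key, load_from_database):
    """Return the cached value for key, hitting the database only once."""
    if key not in _memory_cache:
        _memory_cache[key] = load_from_database(key)
    return _memory_cache[key]

# The loader (standing in for a database read) is only invoked once.
db_calls = []
def load_product(key):
    db_calls.append(key)
    return {"id": key, "name": f"Product {key}"}

first = cached_read("p1", load_product)
second = cached_read("p1", load_product)
```

On the second call the value comes from memory, so the database loader is never invoked again for the same key.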
The drawback of aggressive caching is high memory usage per application instance. This is where distributed caching comes into the picture: it reduces the application's memory usage while still protecting the database from repeated read operations.
A distributed cache is storage that lives outside the application instance. It is shared between multiple application instances and persists when they are restarted.
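The sharing behavior can be sketched as follows, under the assumption that the external store (a Redis server in a typical deployment) is simulated by one shared in-process object; the class and function names are hypothetical.

```python
# Hypothetical sketch: the distributed cache is storage outside the
# application instances, simulated here by one shared object.
class DistributedCache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

class AppInstance:
    """One running application instance; the cache it uses is shared."""
    def __init__(self, cache):
        self.cache = cache

    def read(self, key, load_from_database):
        value = self.cache.get(key)
        if value is None:
            value = load_from_database(key)
            self.cache.set(key, value)
        return value

shared = DistributedCache()
db_calls = []
def load_product(key):
    db_calls.append(key)
    return {"id": key}

# Two instances share one cache: only the first read hits the database.
instance_a = AppInstance(shared)
instance_b = AppInstance(shared)
instance_a.read("p1", load_product)
instance_b.read("p1", load_product)
```

Because the store lives outside any single instance, a restarted instance would also find the value already cached instead of reading the database again.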
Changes that bypass the Litium API also bypass the notifications for the distributed cache, which means that changes made outside the Litium API will not update the cache.
Please note that changes made directly in the database will cause the distributed cache to return stale information that does not reflect changes already present in the database. As a result, future update operations could fail and the application may display incorrect information.
To reduce network latency between the Litium application instance and the distributed cache server, the deserialized information from the distributed cache is kept in a short-lived memory cache.
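This two-tier arrangement can be sketched as a short-lived memory cache (L1) in front of a distributed cache (L2). This is an illustrative assumption, not Litium's actual code: the distributed tier is simulated by a dictionary of serialized values, standing in for the network round trip and deserialization cost, and the `TwoTierCache` name and TTL value are made up.

```python
import json
import time

class TwoTierCache:
    """Hypothetical sketch: short-lived memory cache (L1) in front of a
    distributed cache (L2) that holds serialized values."""
    def __init__(self, distributed_store, ttl_seconds=5.0):
        self.l1 = {}                       # key -> (expires_at, value)
        self.l2 = distributed_store        # key -> JSON string (simulated server)
        self.ttl = ttl_seconds

    def get(self, key):
        entry = self.l1.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                # fresh L1 hit: no round trip, no deserialization
        raw = self.l2.get(key)
        if raw is None:
            return None
        value = json.loads(raw)            # deserialize from the distributed cache
        self.l1[key] = (time.monotonic() + self.ttl, value)
        return value

    def set(self, key, value):
        self.l2[key] = json.dumps(value)   # serialize to the distributed cache
        self.l1[key] = (time.monotonic() + self.ttl, value)

store = {}
cache = TwoTierCache(store)
cache.set("p1", {"id": "p1"})
result = cache.get("p1")
```

Within the TTL, repeated reads of the same key are served from the memory tier, so the cost of the network round trip and of deserializing the stored value is paid at most once per TTL window.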
Read more about how to configure distributed caching here.