Caching
Introduction to Caching
Notes from the article: docs.hazelcast.com/hazelcast/5.2/cache/over..
A cache is a secondary data store that’s faster to read than the data’s primary store (also known as the source of truth).
Typical uses of a cache include:
- Reducing network calls to a primary data store.
- Saving the results of queries or computations to avoid resubmitting them.
When the cached data is distributed across multiple members, the model is called a cache cluster, and it makes your data fault-tolerant and scalable. Hazelcast supports two topologies for deploying a cache cluster:
Embedded mode: In this mode, the application and the cached data are stored on the same device. When a new entry is written to the cache, Hazelcast takes care of distributing it to the other members.
Client/server mode: In this mode, the cached data is separated from the application. Hazelcast members run on dedicated servers and applications connect to them through clients.
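A minimal sketch of the embedded topology in Java; the map name and values are chosen for illustration, and the client/server variant is shown as a comment:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class EmbeddedExample {
    public static void main(String[] args) {
        // Embedded mode: this JVM becomes a cluster member that holds data
        HazelcastInstance member = Hazelcast.newHazelcastInstance(new Config());

        IMap<String, String> cities = member.getMap("cities");
        cities.put("1", "Tokyo"); // Hazelcast distributes the entry to other members

        // Client/server mode would instead connect to a remote cluster:
        // HazelcastInstance client = HazelcastClient.newHazelcastClient();

        member.shutdown();
    }
}
```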
Caching Data on the Client-Side
If you use the client-server topology, your application will request cached data from your cluster over a network.
To reduce these network requests, you can enable a near cache on the client.
A near cache is a local cache that is created on the client. When an application wants to read data, first it looks for the data in the near cache. If the near cache doesn’t have the data, the application requests it from the cache cluster and adds it to the near cache to use for future requests.
For information about how to use a near cache, see Near Cache.
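A minimal sketch of enabling a near cache on a Hazelcast Java client; the map name "products" is illustrative:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class NearCacheExample {
    public static void main(String[] args) {
        // Enable a near cache for the map named "products" on this client
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.addNearCacheConfig(new NearCacheConfig("products"));

        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        IMap<String, String> products = client.getMap("products");

        products.get("42"); // first read: fetched from the cluster, cached locally
        products.get("42"); // repeated read: served from the client's near cache

        client.shutdown();
    }
}
```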
Caching Patterns
The way in which an application interacts with a cache is called a caching pattern. Many patterns exist and each one has its pros and cons.
For an in-depth discussion on caching patterns in Hazelcast, see our blog A Hitchhiker’s Guide to Caching Patterns.
A Hitchhiker’s Guide to Caching Patterns
- When your application starts slowing down, the reason is probably a bottleneck somewhere in the execution chain.
- One option would be to change your whole architecture. Before taking such a drastic, and probably expensive, measure, consider a trade-off: instead of fetching remote data every time, you can store the data locally after the first read. This is the trade-off that caching offers: stale data vs. speed.
Cache Aside
- The biggest advantage of using Cache-Aside is that anybody can read the code and understand its execution flow. Moreover, the requirements toward the cache provider are at their lowest: it just needs to be able to get and set values. That allows for pretty straightforward migrations from one cache provider to another (e.g. Hazelcast).
How Read Works in Cache-Aside
- Check the cache; if the value is present, return it.
- On a miss (null), read the value from the database.
- Put the loaded value into the cache and return it (see the sketch after this list).
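A minimal sketch of this read path in Java; a plain ConcurrentHashMap stands in for the cache provider, and loadFromDb is a hypothetical stand-in for a real query:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideRead {
    // any store with get/put will do; Cache-Aside asks nothing more of it
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String read(String key) {
        String value = cache.get(key);
        if (value != null) {
            return value;            // 1. cache hit: return immediately
        }
        value = loadFromDb(key);     // 2. cache miss: query the database
        if (value != null) {
            cache.put(key, value);   // 3. populate the cache for future reads
        }
        return value;
    }

    // hypothetical stand-in for a real database query
    private String loadFromDb(String key) {
        return "value-for-" + key;
    }
}
```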
How Write Works in Cache Aside
- Write the new value to the cache.
- Then write the same value to the database (see the sketch after this list).
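A matching sketch of the write path, with writeToDb again a hypothetical stand-in; the comment marks exactly where the inconsistency discussed below can creep in:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideWrite {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public void write(String key, String value) {
        cache.put(key, value);   // 1. update the cache
        writeToDb(key, value);   // 2. then update the database; if this call
                                 //    fails, cache and datastore now disagree
    }

    // hypothetical stand-in for a real database update
    private void writeToDb(String key, String value) {
        // e.g. an UPDATE statement via JDBC
    }
}
```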
The biggest issue of Cache-Aside is that your code needs to handle the inconsistency gap between the cache and the datastore. Imagine that you’ve successfully updated the cache but the datastore fails to update. The code needs to implement retries. Worse, during unsuccessful retries, the cache contains a value that the datastore doesn’t.
Switching the logic to update the datastore first doesn’t change the problem. What if the datastore updates successfully but the cache doesn’t?
Read-Through
How Read Works
- Check the cache; on a miss, the cache itself calls the database.
- The cache stores the loaded value and returns it to the application.
- Here the application deals only with the cache provider and never talks to the database directly; that convenience requires a more sophisticated cache that knows how to load data on its own (see the sketch after this list).
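In Hazelcast, read-through is wired up by attaching a MapLoader to the map; a minimal sketch, assuming a hypothetical ProductLoader and an illustrative map name:

```java
import java.util.Collection;
import java.util.Map;
import java.util.stream.Collectors;

import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.MapLoader;

// Hypothetical loader: on a cache miss, Hazelcast calls load() itself,
// so the application never talks to the database directly
public class ProductLoader implements MapLoader<String, String> {
    @Override
    public String load(String key) {
        return queryDb(key);
    }

    @Override
    public Map<String, String> loadAll(Collection<String> keys) {
        return keys.stream().collect(Collectors.toMap(k -> k, this::load));
    }

    @Override
    public Iterable<String> loadAllKeys() {
        return null; // returning null skips eager pre-loading
    }

    private String queryDb(String key) {
        return "value-for-" + key; // stand-in for a real query
    }

    public static void main(String[] args) {
        MapStoreConfig storeConfig = new MapStoreConfig()
                .setEnabled(true)
                .setImplementation(new ProductLoader());

        Config config = new Config();
        config.getMapConfig("products").setMapStoreConfig(storeConfig);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        IMap<String, String> products = hz.getMap("products");

        // a miss here triggers ProductLoader.load("42") inside the cluster
        System.out.println(products.get("42"));
        hz.shutdown();
    }
}
```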