Java Persistence/Caching

=Caching=

Caching is the most important performance optimisation technique in object persistence. There are many things that can be cached: objects, data, database connections, database statements, query results, meta-data, relationships, to name a few. Caching in object persistence normally refers to the caching of objects or their data. Caching also influences object identity: if you read an object, then read the same object again, you should get the identical object back (the same reference).

JPA 1.0 does not define whether JPA providers must support a shared object cache or not; however, most do. Caching in JPA is required within a transaction or within an extended persistence context to preserve object identity, but JPA does not require that caching be supported across transactions or persistence contexts.

JPA 2.0 defines the concept of a shared cache. The @Cacheable annotation or cacheable XML attribute can be used to enable or disable caching on a class.

Example JPA 2.0 Cacheable annotation
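A minimal sketch of enabling caching on an entity (the Employee entity is a hypothetical example):

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Marks the entity as eligible for the shared (2nd level) cache.
// @Cacheable(false) would exclude it from the shared cache.
@Cacheable
@Entity
public class Employee {
    @Id
    private long id;
    private String name;
}
```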
The SharedCacheMode enum value can also be set in the shared-cache-mode XML element in the persistence.xml to configure the default cache mode for the entire persistence unit.

Example JPA 2.0 SharedCacheMode XML
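A sketch of the persistence.xml element (the persistence unit name is a hypothetical example):

```xml
<persistence-unit name="example-unit">
    <!-- ENABLE_SELECTIVE: only entities marked @Cacheable(true) are cached.
         Other values: ALL, NONE, DISABLE_SELECTIVE, UNSPECIFIED. -->
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>
```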
There are two types of caching. You can cache the objects themselves including all of their structure and relationships, or you can cache their database row data. Both provide a benefit, however just caching the row data is missing a huge part of the caching benefit as the retrieval of each relationship typically involves a database query, and the bulk of the cost of reading an object is spent in retrieving its relationships.

==Object Identity==
Object identity in Java means that if two variables refer to the same logical object, then the == operator returns true, meaning both reference the same thing (both are pointers to the same memory location).

In JPA object identity is maintained within a transaction, and (normally) within the same EntityManager. The exception is a JEE managed EntityManager, where object identity is only maintained inside of a transaction.

So the following is true in JPA:
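For example (a sketch, assuming an open EntityManager em and a hypothetical Employee entity):

```java
// Reading the same object twice through the same EntityManager
// returns the identical instance (same reference).
Employee employee1 = em.find(Employee.class, 123L);
Employee employee2 = em.find(Employee.class, 123L);
assert employee1 == employee2;
```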

This holds true no matter how the object is accessed:
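For instance, a find() and a query for the same row resolve to the same managed instance (a sketch, assuming an open EntityManager em and a hypothetical Employee entity):

```java
// find() by Id and a JPQL query return the same managed instance.
Employee employee1 = em.find(Employee.class, 123L);
Employee employee2 = em.createQuery(
        "SELECT e FROM Employee e WHERE e.id = 123", Employee.class)
    .getSingleResult();
assert employee1 == employee2;
```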

In JPA object identity is not maintained across EntityManagers. Each EntityManager maintains its own persistence context, and its own transactional state of its objects.

So the following is true in JPA:
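For example (a sketch, assuming an EntityManagerFactory factory and a hypothetical Employee entity):

```java
// Each EntityManager has its own persistence context, so the same
// logical object is a different instance in each.
EntityManager em1 = factory.createEntityManager();
EntityManager em2 = factory.createEntityManager();
Employee employee1 = em1.find(Employee.class, 123L);
Employee employee2 = em2.find(Employee.class, 123L);
assert employee1 != employee2;
```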

Object identity is normally a good thing, as it avoids having your application manage multiple copies of objects, and avoids the application changing one copy, but not the other. The reason different EntityManagers or transactions (in JEE) don't maintain object identity is that each transaction must isolate its changes from other users of the system. This is also normally a good thing, however it does require the application to be aware of copies, detached objects and merging.

Some JPA products may have a concept of read-only objects, in which object identity may be maintained across EntityManagers through a shared object cache.

==Object Cache==
An object cache is where the Java objects (entities) are themselves cached. The advantage of an object cache is that the data is cached in the same format that it is used in Java. Everything is stored at the object level and no conversion is required when obtaining a cache hit. With JPA the JPA provider must still copy the objects to and from the cache, as it must maintain its transaction isolation, but that is all that is required. The objects do not need to be re-built, and the relationships are already available.

With an object cache, transient data may also be cached. This may occur automatically, or may require some effort. If transient data is not desired, you may also need to clear the data when the object gets cached.

Some JPA products allow read-only queries to access the object cache directly. Some products only allow object caching of read-only data. Obtaining a cache hit on read-only data is extremely efficient, as the object does not need to be copied; other than the look-up, no work is required.

It is possible to create your own object cache for your read-only data by loading objects from JPA into your own cache implementation. The main issue, which is always the main issue in caching in general, is how to handle updates and stale cached data, but if the data is read-only, this may not be an issue.


 * TopLink / EclipseLink : Support an object cache. The object cache is on by default, but can be globally or selectively enabled or configured per class.  The persistence unit property eclipselink.cache.shared.default can be set to false to disable the cache.  Read-only queries are supported through the eclipselink.read-only query hint; entities can also be marked to always be read-only using the @ReadOnly annotation.

==Data Cache==
A data cache caches the object's data, not the objects themselves. The data is normally a representation of the object's database row. The advantage of a data cache is that it is easier to implement, as you do not have to worry about relationships, object identity, or complex memory management. The disadvantage of a data cache is that it does not store the data as it is used in the application, and does not store relationships. This means that on a cache hit the object must still be built from the data, and the relationships fetched from the database. Some products that support a data cache also support a relationship cache or query cache to allow caching of relationships.


 * Hibernate : Supports integration with a third party data cache. Caching is not enabled by default and a third party caching product such as Ehcache must be used to enable caching.

==Caching Relationships==
Some products support a separate cache for caching relationships. This is normally required for OneToMany and ManyToMany relationships. OneToOne and ManyToOne relationships normally do not need to be cached, as they reference the related object by its primary key. However an inverse OneToOne will require the relationship to be cached, as it references the foreign key, not the primary key.

For a relationship cache, the results normally only store the related objects' Ids, not the objects or their data (to avoid duplicate and stale data). The key of the relationship cache is the source object's Id and the relationship name. Sometimes the relationship is cached as part of the data cache, if the data cache stores a structure instead of a database row. When a cache hit occurs on a relationship, the related objects are looked up in the data cache one by one. A potential issue with this is that if a related object is not in the data cache, it will need to be selected from the database. This could result in very poor database performance, as the objects can be loaded one by one. Some products that support caching relationships also support batching the selects to attempt to alleviate this issue.

==Cache Types==
There are many different caching types. The most common is an LRU cache, one that ejects the Least Recently Used objects and maintains a fixed-size set of MRU (Most Recently Used) objects.

Some cache types include:
 * LRU - Keeps X number of recently used objects in the cache.
 * Full - Caches everything read, forever. (not always the best idea if the database is large)
 * Soft - Uses Java garbage collection hints to release objects from the cache when memory is low.
 * Weak - Normally relevant with object caches; uses Java weak references to keep any objects currently in use in the cache.
 * L1 - Refers to the transactional cache that is part of every EntityManager; this is not a shared cache.
 * L2 - A shared cache, conceptually stored in the EntityManagerFactory, so accessible to all EntityManagers.
 * Data cache - The data representing the objects is cached (database rows).
 * Object cache - The objects are cached directly.
 * Relationship cache - The object's relationships are cached.
 * Query cache - The result set from a query is cached.
 * Read-only - A cache that only stores, or only allows read-only objects.
 * Read-write - A cache that can handle inserts, updates and deletes (non read-only).
 * Transactional - A cache that can handle inserts, updates and deletes (non read-only), and obeys transactional ACID properties.
 * Clustered - Typically refers to a cache that uses JMS, JGroups or some other mechanism to broadcast invalidation messages to other servers in the cluster when an object is updated or deleted.
 * Replicated - Typically refers to a cache that uses JMS, JGroups or some other mechanism to broadcast objects to all servers when read into any server's cache.
 * Distributed - Typically refers to a cache that spreads the cached objects across several servers in a cluster, and can look-up an object in another server's cache.


 * TopLink / EclipseLink : Support an L1 and L2 object cache. LRU, Soft, Full and Weak cache types are supported.  A query cache is supported.  The object cache is read-write and always transactional.  Support for cache coordination through RMI and JMS is provided for clustering.  The TopLink product includes a Grid component that integrates with Oracle Coherence to provide a distributed cache.

==Query Cache==
A query cache caches query results instead of objects. Object caches cache the object by its Id, so are generally not very useful for queries that are not by Id. Some object caches support secondary indexes, but even indexed caches are not very useful for queries that can return multiple objects, as you always need to access the database to ensure you have all of the objects. This is where query caches are useful: instead of storing objects by Id, the query results are cached. The cache key is based on the query name and parameters. So if you have a named query that is commonly executed, you can cache its results, and only need to execute the query the first time.

The main issue with query caches, as with caching in general, is stale data. Query caches normally interact with an object cache to ensure the objects are at least as up to date as in the object cache. Query caches also typically have invalidation options similar to object caches.


 * TopLink / EclipseLink : Support a query cache, enabled through the eclipselink.query-results-cache query hint.  Several configuration options including invalidation are supported.

==Stale Data==
The main issue with caching anything is the possibility of the cached version getting out of synch with the original. This is referred to as stale or out of synch data. For read-only data this is not an issue, but for data that changes, whether infrequently or frequently, this can be a major issue. There are many techniques for dealing with stale data and out of synch data.

==1st Level Cache==
Caching the object's state for the duration of a transaction or request is normally not an issue. This is normally called a 1st level cache, or the EntityManager cache, and is required by JPA for proper transaction semantics. If you read the same object twice, you must get the identical object, with the same in-memory changes. The only issues occur with querying and DML.

For queries that access the database, the query may not reflect the un-written state of the objects. For example, you have persisted a new object, but JPA has not yet inserted this object in the database, as it generally only writes to the database on the commit of the transaction. So your query will not return this new object, as it is querying the database, not the 1st level cache. This is normally solved in JPA by the user first calling flush(), or by the flush mode automatically triggering a flush. The default FlushModeType on an EntityManager or Query is AUTO, which triggers a flush before queries, but this can be disabled if a write to the database before every query is not desired (normally it isn't, as it can be expensive and lead to poor concurrency). Some JPA providers also support conforming the database query results with the object changes in memory, which can be used to get consistent data without triggering a flush. This can work for simple queries, but for complex queries this typically becomes very complex, or impossible. Applications normally query data at the start of a request before they start making changes, or don't query for objects they have already found, so this is normally not an issue.
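The flush behaviour can be sketched as follows (assuming an open EntityManager em and a hypothetical Employee entity):

```java
em.getTransaction().begin();
em.persist(new Employee());

// FlushModeType.AUTO (the default) writes the pending INSERT before the
// query executes, so the new object is included in the results.
List<Employee> all = em.createQuery(
        "SELECT e FROM Employee e", Employee.class).getResultList();

// FlushModeType.COMMIT defers writes until commit, so this query
// may not see the new, un-flushed object.
List<Employee> maybeStale = em.createQuery(
        "SELECT e FROM Employee e", Employee.class)
    .setFlushMode(FlushModeType.COMMIT)
    .getResultList();
em.getTransaction().commit();
```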

If you bypass JPA and execute DML directly on the database, either through native SQL queries, JDBC, or JPQL UPDATE or DELETE queries, then the database can be out of synch with the 1st level cache. If you had accessed objects before executing the DML, they will have the old state and not include the changes. Depending on what you are doing this may be okay; otherwise you may want to refresh the affected objects from the database.
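A sketch of refreshing after bulk DML (assuming an open EntityManager em and a hypothetical Employee entity with a salary attribute):

```java
// This object was read before the bulk update, so it holds the old state.
Employee employee = em.find(Employee.class, 123L);

// Bulk DML goes straight to the database, bypassing the 1st level cache.
em.createQuery("UPDATE Employee e SET e.salary = e.salary * 1.1")
    .executeUpdate();

// Re-read the row so the managed object matches the database again.
em.refresh(employee);
```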

The 1st level, or EntityManager cache, can also span transaction boundaries in JPA. A JTA managed EntityManager will only exist for the duration of the JTA transaction in JEE. Typically the JEE server will inject the application with a proxy to an EntityManager, and after each JTA transaction a new EntityManager will be created automatically, or the EntityManager will be cleared, clearing the 1st level cache. With an application managed EntityManager, the 1st level cache will exist for the duration of the EntityManager. This can lead to stale data, or even memory leaks and poor performance, if the EntityManager is held too long. This is why it is generally a good idea to create a new EntityManager per request, or per transaction. The 1st level cache can also be cleared using the EntityManager clear() method, or an object can be refreshed using the refresh() method.

==2nd Level Cache==
The 2nd level cache spans transactions and EntityManagers, and is not required as part of JPA. Most JPA providers support a 2nd level cache, but the implementation and semantics vary. Some JPA providers enable a 2nd level cache by default, and some do not.

If the application is the only application and server accessing the database, there is little issue with the 2nd level cache, as it should always be up to date. The only issue is with DML, if the application executes DML directly to the database through native SQL queries, JDBC, or JPQL UPDATE or DELETE queries. JPQL queries should automatically invalidate the 2nd level cache, but this may depend on the JPA provider. If you use native DML queries or JDBC directly, you may need to invalidate, refresh or clear the objects affected by the DML.

If there are other applications, or other application servers, accessing the same database, then stale data can become a bigger issue. Read-only objects, and inserting new objects, should not be an issue. New objects should get picked up by other servers even when using caching, as queries typically still access the database. It is normally only find() operations and relationships that hit the cache. Objects updated or deleted by other applications or servers can cause the 2nd level cache to become stale.

For deleted objects, the only issue is with find() operations, as queries that access the database will not return the deleted objects. A find() by the object's Id could return the object if it is cached, even if it no longer exists. This could lead to constraint issues if you add relations to this object from other objects, or failed updates if you try to update the object. Note that both of these can occur without caching, even with a single application and server accessing the database. During a transaction, another user of the application could always delete the object being used by another transaction, and the second transaction would fail in the same way. The difference is that the potential for this concurrency issue to occur increases.

For updated objects, any query for the objects can return stale data. This can trigger optimistic lock exceptions on updates, or cause one user to overwrite another user's changes if not using locking. Again, note that both of these can occur without caching, even with a single application and server accessing the database. This is why it is almost always important to use optimistic locking. Stale data could also be returned to the user.

==Refreshing==
Refreshing is the most common solution to stale data. Most application users are familiar with the concept of a cache, and know when they need fresh data and are willing to click a refresh button. This is very common in an Internet browser, most browsers have a cache of web pages that have been accessed, and will avoid loading the same page twice, unless the user clicks the refresh button. This same concept can be used in building JPA applications. JPA provides several refreshing options, see refreshing.

Some JPA providers also support refreshing options in their 2nd level cache. One option is to always refresh on any query to the database. This means that find() operations will still access the cache, but if the query accesses the database and brings back data, the 2nd level cache will be refreshed with the data. This avoids queries returning stale data, but means there will be less benefit from caching. The cost is not just in refreshing the objects, but in refreshing their relationships. Some JPA providers support this option in combination with optimistic locking: if the version value in the row from the database is newer than the version value from the object in the cache, then the object is refreshed, as it is stale; otherwise the cached value is returned. This option provides optimal caching, and avoids stale data on queries. However, objects returned through find() or through relationships can still be stale. Some JPA providers also allow find() operations to be configured to first check the database, but this generally defeats the purpose of caching, so you are better off not using a 2nd level cache at all. If you want to use a 2nd level cache, then you must have some level of tolerance to stale data.

==JPA 2.0 Cache APIs==
JPA 2.0 provides a set of standard query hints to allow refreshing or bypassing the cache. The query hint values are defined by the two enum classes CacheRetrieveMode and CacheStoreMode, set through the hint names javax.persistence.cache.retrieveMode and javax.persistence.cache.storeMode.

Query hints:
 * CacheRetrieveMode.BYPASS : Ignore the cache, and build the object directly from the database result.
 * CacheRetrieveMode.USE : Allow the query to use the cache. If the object/data is already in the cache, the cached object/data will be used.
 * CacheStoreMode.BYPASS : Do not cache the database results.
 * CacheStoreMode.REFRESH : If the object/data is already in the cache, then refresh/replace it with the database results.
 * CacheStoreMode.USE : Cache the objects/data returned from the query.

Cache hints example
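A sketch of setting the cache hints on a query (assuming an open EntityManager em and a hypothetical Employee entity):

```java
Employee employee = em.createQuery(
        "SELECT e FROM Employee e WHERE e.id = :id", Employee.class)
    .setParameter("id", 123L)
    // Ignore any cached copy and build the object from the database...
    .setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS)
    // ...then refresh the shared cache with the query results.
    .setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH)
    .getSingleResult();
```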
JPA 2.0 also provides a Cache interface. The Cache can be obtained from the EntityManagerFactory using the getCache() API. The Cache can be used to manually evict/invalidate entities in the cache: either a specific entity, an entire class, or the entire cache can be evicted. The Cache can also be checked to see if it contains an entity.
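For example (a sketch, assuming an EntityManagerFactory factory and a hypothetical Employee entity):

```java
Cache cache = factory.getCache();

// Check whether a specific entity instance is in the shared cache.
boolean cached = cache.contains(Employee.class, 123L);

// Evict one entity, every instance of a class, or the entire cache.
cache.evict(Employee.class, 123L);
cache.evict(Employee.class);
cache.evictAll();
```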

Some JPA providers may extend the Cache interface to provide additional API.


 * TopLink / EclipseLink : Provide an extended Cache interface, JpaCache.  It provides additional API for invalidation, query cache access, and clearing.

==Cache Invalidation==
A common way to deal with stale cached data is to use cache invalidation. Cache invalidation removes or invalidates data or objects in the cache after a certain amount of time, or at a certain time of day. Time-to-live invalidation guarantees that the application will never read cached data that is older than a certain amount of time. The amount of time can be configured with respect to the application's requirements. Time-of-day invalidation allows the cache to be invalidated at a certain time of day, typically at night; this ensures data is never more than a day old. This can also be used if it is known that a batch job updates the database at night: the invalidation time can be set after this batch job is scheduled to run. Data can also be invalidated manually, such as using the JPA 2.0 Cache evict API.

Most cache implementations support some form of invalidation, JPA does not define any configurable invalidation options, so this depends on the JPA and cache provider.


 * TopLink / EclipseLink : Provide support for time-to-live and time-of-day cache invalidation using the @Cache annotation and cache orm.xml element.  Cache invalidation is also supported through API, and can be used in a cluster to invalidate objects changed on other machines.

==Caching in a Cluster==
Caching in a clustered environment is difficult because each machine will update the database directly, but not update the other machines' caches, so each machine's cache can become out of date. This does not mean that caching cannot be used in a cluster, but you must be careful in how it is configured.

For read-only objects caching can still be used. For read mostly objects, caching can be used, but some mechanism should be used to avoid stale data. If stale data is only an issue for writes, then using optimistic locking will avoid writes occurring on stale data. When an optimistic lock exception occurs, some JPA providers will automatically refresh or invalidate the object in the cache, so if the user or application retries the transaction the next write will succeed. Your application could also catch the lock exception and refresh or invalidate the object, and potentially retry the transaction if the user does not need to be notified of the lock error (be careful doing this though, as normally the user should be aware of the lock error). Cache invalidation can also be used to decrease the likelihood of stale data by setting a time to live on the cache. The size of the cache can also affect the occurrence of stale data.

Although returning stale data to a user may be an issue, returning stale data to a user that just updated the data is normally a bigger issue. This can normally be solved through session affinity, by ensuring the user interacts with the same machine in the cluster for the duration of their session. This can also improve cache usage, as the same user will typically access the same data. It is normally also useful to add a refresh button to the UI; this will allow the user to refresh their data if they think their data is stale, or they wish to ensure they have data that is up to date. The application can also choose to refresh the objects in places where up-to-date data is important, such as using the cache for read-only queries, but refreshing when entering a transaction to update an object.

For write mostly objects, the best solution may be to disable the cache for those objects. Caching provides no benefit to inserts, and the cost of avoiding stale data on updates may mean there is no benefit to caching objects that are always updated. Caching will add some overhead to writes, as the cache must be updated, having a large cache also affects garbage collection, so if the cache is not providing any benefit it should be turned off to avoid this overhead. This can depend on the complexity of the object though, if the object has a lot of complex relationships, and only part of the object is updated, then caching may still be worth it.

==Cache Coordination==
One solution to caching in a clustered environment is to use a messaging framework to coordinate the caches between the machines in the cluster. JMS or JGroups can be used in combination with JPA or application events to broadcast messages to invalidate the caches on other machines when an update occurs. Some JPA and cache providers support cache coordination in a clustered environment.


 * TopLink / EclipseLink : Support cache coordination in a clustered environment using JMS or RMI. Cache coordination is configured through the @Cache annotation or cache orm.xml element, and using the eclipselink.cache.coordination.protocol persistence unit property.

==Distributed Caching==
A distributed cache is one where the cache is distributed across each of the machines in the cluster. Each object will only live on one, or a set number, of the machines. This avoids stale data, because when the cache is accessed or updated the object is always retrieved from the same location, so it is always up to date. The drawback to this solution is that cache access now potentially requires a network access. This solution works best when the machines in the cluster are connected together on the same high-speed network, and when the database machine is not as well connected, or is under load. A distributed cache reduces database access, so allows the application to be scaled to a larger cluster without the database becoming a bottleneck. Some distributed cache providers also provide a local cache, and offer cache coordination between the caches.


 * TopLink : Supports integration with the Oracle Coherence distributed cache.

==Cache Transaction Isolation==
When caching is used, the consistency and isolation of the cache becomes as important as the database transaction isolation. For basic cache isolation it is important that changes are not committed into the cache until after the database transaction has been committed, otherwise uncommitted data could be accessed by other users.

Caches can be either transactional or non-transactional. In a transactional cache, the changes from a transaction are committed to the cache as a single atomic unit. This means the objects/data are first locked in the cache (preventing other threads/users from accessing the objects/data), then updated in the cache, then the locks are released. Ideally the locks are obtained before committing the database transaction, to ensure consistency with the database. In a non-transactional cache, the objects/data are updated one by one, without any locking. This means there will be a brief period where the data in the cache is not consistent with the database. This may or may not be an issue, and gets into the complex topics of locking and the application's isolation requirements.

Optimistic locking is another important consideration in cache isolation. If optimistic locking is used, the cache should avoid replacing new data, with older data. This is important when reading, and when updating the cache.

Some JPA providers may allow for configuration of their cache isolation, or different caches may define different levels of isolation. Although the defaults should normally be used, it can be important to understand how the usage of caching is affecting your transaction isolation, as well as your performance and concurrency.

I can't see changes made directly on the database, or from another server

 * This means you have either enabled caching in your JPA configuration, or your JPA provider caches by default. You can either disable the 2nd level cache in your JPA configuration, or refresh the object or invalidate the cache after changing the database directly. See Stale Data.
 * TopLink / EclipseLink : Caching is enabled by default. To disable caching, set the persistence property eclipselink.cache.shared.default to false in your persistence.xml or persistence properties.  You can also configure this on a per-class basis if you want to allow caching in some classes and not in others. See the EclipseLink FAQ.
