An interesting article on caching with Aspect-Oriented Programming was just
published on TheServerSide, and while it does a decent job at benchmarking and
describing the infrastructure, I have a few issues with some of the
aspect-related material it covers.

Here are a few comments:

it’s not easy to turn caching on or off dynamically when it’s part of your business logic

Caching should be configurable externally.  You don’t need AOP to branch
conditionally and disable (or alter) your caching logic at runtime.  Most
of the EJB and web containers that I know of have been providing this kind of
functionality in XML configuration files for quite a while.
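For instance, a plain service can consult an externally settable flag before
touching its cache, with no AOP involved.  Here is a minimal sketch; the class
name, flag name, and helper method are hypothetical, not from the article:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: caching toggled at runtime by an external flag.
public class InterestRateService {
    private final Map<String, List<Double>> cache = new HashMap<String, List<Double>>();

    public synchronized List<Double> getInterestRates(String productGroup) {
        // The flag can be flipped externally (system property, XML config, JMX...).
        boolean cachingEnabled = Boolean.getBoolean("caching.enabled");
        if (cachingEnabled && cache.containsKey(productGroup)) {
            return cache.get(productGroup);
        }
        List<Double> rates = fetchRates(productGroup);  // the expensive call
        if (cachingEnabled) {
            cache.put(productGroup, rates);
        }
        return rates;
    }

    private List<Double> fetchRates(String productGroup) {
        return java.util.Arrays.asList(1.5, 2.0);  // stand-in for a real lookup
    }
}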

it’s not easy to turn caching on or off dynamically when it’s part of your business logic

True, so it’s quite surprising that Srini’s own solution falls into this very
trap (see below).

The cached data is released from memory (purged), by implementing a
pre-defined eviction policy, when the data is no longer needed.

I disagree with the "pre-defined" (sic) part.  Eviction policies should
absolutely be configurable at runtime, even more so than caching activation
itself.  Adjusting the eviction policy is a big part of fine-tuning and
optimizing an application, and you need as much flexibility in terms of
strategies (round-robin, last used first, timeouts, evict biggest first, etc…)
as possible.
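To illustrate, the eviction strategy can live behind a small interface that the
cache delegates to, so it can be replaced at runtime without touching any
caching code.  A minimal sketch, with all names being my own:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Pluggable eviction strategy (sketch).
interface EvictionPolicy<K> {
    // Pick which key to evict when the cache is full.
    K selectVictim(Collection<K> candidates);
}

// One trivial strategy: evict an arbitrary key.
class EvictFirstPolicy<K> implements EvictionPolicy<K> {
    public K selectVictim(Collection<K> candidates) {
        return candidates.iterator().next();
    }
}

// The cache only knows about the interface, so the policy can be
// swapped at runtime while the application is being tuned.
class ConfigurableCache<K, V> {
    private final Map<K, V> entries = new HashMap<K, V>();
    private final int maxSize;
    private volatile EvictionPolicy<K> policy;

    ConfigurableCache(int maxSize, EvictionPolicy<K> policy) {
        this.maxSize = maxSize;
        this.policy = policy;
    }

    void setPolicy(EvictionPolicy<K> policy) { this.policy = policy; }

    synchronized void put(K key, V value) {
        if (entries.size() >= maxSize && !entries.containsKey(key)) {
            entries.remove(policy.selectVictim(entries.keySet()));
        }
        entries.put(key, value);
    }
}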

Except for these points, Srini does a good job of framing the overall problem,
and he makes a convincing case for using AOP for caching.  However, caching
with AOP is a very complicated thing to achieve, and a couple of years ago, I
offered an AOP caching challenge that turned out to be much harder to solve
than everybody initially thought (including myself).

Srini’s pointcut is the following:

List around(String productGroup) : getInterestRates(productGroup) {

The problem with this approach is that it explicitly
references a method in the business code.

Not only is this dangerous
because you are increasing the coupling in your code (and I’m assuming that
refactoring will take care of modifying the aspect, should you decide to rename
or modify the getInterestRates() method), but it simply doesn’t scale.
As the number of methods you want to cache increases, you need to remember to
update the pointcut to include the newcomers, and this will clearly fall apart
very quickly.
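To see why, consider what the pointcut turns into once a few more methods need
caching (the extra method names below are hypothetical): every new method means
editing the aspect again.

// Every new cacheable method forces another disjunct into the pointcut:
pointcut cacheableCalls(String key) :
    (execution(java.util.List getInterestRates(String)) ||
     execution(java.util.List getExchangeRates(String)) ||   // added later...
     execution(java.util.List getBondYields(String)))        // ...and so on
    && args(key);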

Srini is falling into the same trap as the people who tried to solve the AOP
Caching Challenge:  not enough abstraction, too much coupling.

As Srini said himself above, caching is completely independent of domains,
and this fact should be reflected in the pointcuts you use.  The above
pointcut is not independent of the domain model it applies to.

You should be
able to determine a trait that "methods that can be cached" share and use this
as your pointcut.  I can think of two ways to solve this problem:

  • Decide that any method that takes a string as a key and returns a value
    can be cached (potentially dangerous since you could get false positives,
    but this could be alleviated with naming conventions).
  • Use annotations to indicate when a method can be cached.

I think the annotation-based solution is the best compromise in this case,
since it makes you independent of naming conventions and doesn’t require any
modification of your pointcuts as your code base grows.  Also, the burden
on developers is minimal since all they need to remember is to add an annotation
whenever a method can be cached.
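Here is a sketch of what this could look like with AspectJ 5 syntax; the
@Cacheable annotation, the aspect, and its cache map are my assumptions, not
code from the article:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

// Marker annotation: the only thing developers have to remember to add.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cacheable {}

// The aspect never names a business method: it matches on the annotation,
// so new cacheable methods require no pointcut changes at all.
aspect CachingAspect {
    private final Map<String, Object> cache = new HashMap<String, Object>();

    Object around(String key) : execution(@Cacheable * *(String)) && args(key) {
        String cacheKey = thisJoinPointStaticPart.getSignature().toLongString() + ":" + key;
        synchronized (cache) {
            if (cache.containsKey(cacheKey)) {
                return cache.get(cacheKey);
            }
            Object result = proceed(key);
            cache.put(cacheKey, result);
            return result;
        }
    }
}

With this in place, adding a new cacheable method is just a matter of
annotating it; the pointcut never changes.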

You can also imagine richer annotation schemes that would allow for a better
partitioning of your caching:

@Cacheable(category = "datasources")
public DataSource getDataSource(String driverName);

@Cacheable(category = "db.accounts")  // "use the cache for rows in table ACCOUNTS"
public Account findAccount(String customerName);
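For this partitioned version, the annotation needs a category element, and the
advice can recover it reflectively from the matched method.  A sketch under the
same assumptions as above:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The marker annotation from the previous sketch, extended with a category.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cacheable {
    String category() default "default";
}

// Inside the around advice, the category of the matched method can then
// select a cache region (using org.aspectj.lang.reflect.MethodSignature):
//   MethodSignature sig = (MethodSignature) thisJoinPointStaticPart.getSignature();
//   String category = sig.getMethod().getAnnotation(Cacheable.class).category();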

Jonas and Alex, from AspectWerkz, and Ramnivas Laddad, the author of "AspectJ
in Action", have published a series of articles on annotation-based AOP with
AspectJ/AspectWerkz, which I strongly recommend.

Regardless, this is an interesting contribution to the problem of AOP-based
caching in general, but it goes to prove, once again, that even two years
later, we still haven’t quite figured out how to solve this problem optimally.