Creating Mocks and Stubs with Reflection Proxies

Originally published: 2008-07-22

Last updated: 2015-04-27

One of the basic tenets of unit testing is that you test a single class at a time, not an entire system. However, it's a rare class that has no dependencies on other classes. For example, if you've created a utility class for handling JDBC result sets, you need a JDBC ResultSet object to properly test it.

One solution would be to use an actual database. The Derby database comes with the JDK; HSQLDB is another open-source, JDBC-compliant DBMS, which has the added benefit of in-memory databases: you can create and populate them for each test, then throw them away when done.
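For example (a sketch of my own, not from the original article), an in-memory HSQLDB database needs nothing more than a JDBC URL:

// sketch: a throwaway in-memory HSQLDB database, assuming the HSQLDB 1.x driver
// (JDBC 4 drivers register themselves, so the Class.forName() isn't always needed)
Class.forName("org.hsqldb.jdbcDriver");
Connection cxn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
Statement stmt = cxn.createStatement();
stmt.executeUpdate("CREATE TABLE foo (id INTEGER PRIMARY KEY, name VARCHAR(64))");
stmt.executeUpdate("INSERT INTO foo VALUES (1, 'bar')");

// ... exercise the code under test against cxn ...

cxn.close();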

However, using a real database actually prevents you from thoroughly testing your utility class! For example, how would you test a result set that won't fit in memory? Certainly you can't do that with an embedded database. And if you switch to a real database, you move outside of the realm of “unit” testing: your dependencies extend outside the JVM, and your tests will fail if the database isn't accessible.

The solution is to simulate the result set, using a stub object that will return as many rows as you'd like. On the surface, this seems like a lot of work: java.sql.ResultSet is an interface with over 100 methods! It would take a week to provide dummy implementations of those methods.

Fortunately, you don't have to implement the entire interface. With Java's reflection proxies, you can create an object that implements as many or as few methods as you need. You can also create a proxy that's customized for a single test, rather than trying to make one implementation that suits all needs.

But before we get into actually implementing a proxy, let's take a closer look at the different ways we might want to simulate objects — and in particular, the difference between a “mock object” and a “stub.”

Mocks versus Stubs

Mocks and stubs both come from the world of test-driven development, and tie closely to the “CRC cards” created by Ward Cunningham and Kent Beck. CRC stands for Class / Responsibility / Collaboration: a Class has methods that implement its Responsibilities, and that implementation requires Collaboration with other classes. Unit tests exist to validate a class' responsibilities, and mocks or stubs stand in for its collaborators.

Although mocks and stubs both stand in for a class' collaborators, they play two very different roles in testing: mocks test collaboration, while stubs simulate collaborators. This is best explained with two examples:

First, stubs. The ResultSet example described above is a stub: its role is to simulate an actual database, and produce endless rows of dummy data. In the real world, the collaborator is a ResultSet returned from the database. In the test world, the stub takes its place.

So isn't that testing collaboration? Well, no. When you're testing collaboration, you're not so much interested in how the test object uses data, but how it interacts with its collaborators. For example: if next() returns true, does the tested code then access specific columns?

Mock object testing is all about expectations: you create the mock object, tell it what to expect, and then execute the test against the class using that mock. Continuing the example, you would tell the mock object to expect a call to next(), followed by a call to getString().
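To make this concrete, here's a fragment (not from the original article) using EasyMock, one of many mock-object libraries; it assumes the EasyMock static methods are available and that the surrounding test method declares throws Exception:

// createStrictMock() verifies call order as well as presence
ResultSet mock = EasyMock.createStrictMock(ResultSet.class);
EasyMock.expect(mock.next()).andReturn(Boolean.TRUE);
EasyMock.expect(mock.getString("name")).andReturn("foo");
EasyMock.replay(mock);

// ... exercise the class under test with the mock ...

EasyMock.verify(mock);   // fails the test if an expected call never happened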

The line between mocks and stubs is often blurred, because most interactions are dependent on how the mock object simulates the collaborator. For example, if next() returns false, we should expect that the class being tested does not call getString().

Do You Need To Roll Your Own?

It doesn't take very long with Google to discover that there are a lot of mock object packages available; in fact, you'll find at least one package that implements mock objects via reflection proxies! The simplest answer to this question is that you should use these packages whenever they suit your purposes. You may find that they're sufficient for all your testing needs.

The most common reason to roll your own is when you need a stub rather than a mock. The ResultSet example in this article is one of those cases: I want to verify the behavior of a method when reading an infinite number of rows, not the fact that my method accesses the results an infinite number of times.

If you choose to roll your own, I strongly suggest keeping such testcases separate from those that use a third-party mock library. As with all coding, consistency is important: don't distract your colleagues by writing some tests with a mock package and others with hand-rolled proxy classes.

Reflection Proxies

Reflection proxies, added to the API with JDK 1.3, allow you to create a runtime object that implements an interface. Unlike a compiled class, this object doesn't have to fully implement the interface; instead, you can pick and choose the methods you want. This ability is what makes proxies especially useful as stub objects for testing.

There are two parts to a proxy: the proxy object itself, an instance of a dynamically generated class that extends Proxy, and an implementation of InvocationHandler, which receives every method call made on the proxy.

Creating the proxy instance is simple: you call Proxy.newProxyInstance(), passing it information about the interfaces your proxy will implement. For example, to create a proxy for ResultSet:

ResultSet rslt = (ResultSet)Proxy.newProxyInstance(
                        ResultSet.class.getClassLoader(),
                        new Class[] { ResultSet.class },
                        new MyResultSetHandler());

So, what's happening here? And why are we passing a classloader?

The second answer is easy: we pass a classloader because Proxy creates a new class to act as the proxy, and every class must be associated with a classloader. Since this is an example, I pass in the most convenient classloader: the bootstrap loader associated with ResultSet. If you were using reflection proxies in an app-server, you would want to use the context classloader. As you'll see later, my pattern for test proxies passes in the classloader for the test class itself.

The next argument to this call is an array of interface classes: the generated proxy class will pretend to implement all of these interfaces. While there may be reasons to implement multiple interfaces, in a testing situation I recommend limiting yourself to just one — the goal is to tightly focus your tests, remember?

The final argument is an InvocationHandler instance: an object that handles the actual calls to the proxy:

private static class MyResultSetHandler
implements InvocationHandler
{
    public Object invoke(Object proxy, Method method, Object[] args)
    throws Throwable
    {
        if (method.getName().equals("next"))
            return Boolean.TRUE;
        else if (method.getName().equals("getString"))
            return "foo";
        throw new UnsupportedOperationException(method.getName());
    }
}

The invocation handler is simply an if-else chain that selects the method calls of interest. I always finish with an exception for unhandled calls, which will let me know if the class under test is doing something unexpected.

One thing you should remember is that the invocation handler might have to deal with overloaded methods. For example, the field access methods on ResultSet have one variant that takes a name and one that takes an index. This simple example has no need to differentiate; if your code does, call getParameterTypes() on the passed Method.
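A hypothetical fragment (not from the original article) that distinguishes the two getString() variants might look like this:

if (method.getName().equals("getString"))
{
    // getString(int columnIndex) versus getString(String columnLabel)
    if (method.getParameterTypes()[0] == int.class)
        return "value for column " + args[0];
    else
        return "value for column named " + args[0];
}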

A Pattern for Building Proxies

I like my invocation handlers to be self-contained. To this end, they all tend to look like the following:

private static class MyProxy
implements InvocationHandler
{
    // methods to configure proxy return values; use Builder pattern

    // assertions that particular methods were called

    public ClassToBeProxied toStub()
    {
        return (ClassToBeProxied)Proxy.newProxyInstance(
                        this.getClass().getClassLoader(),
                        new Class[] { ClassToBeProxied.class },
                        this);
    }

    public Object invoke(Object proxy, Method method, Object[] args)
    throws Throwable
    {
        // if-else chain for expected methods

        throw new UnsupportedOperationException(method.getName());
    }
}

It's a rare proxy that doesn't need to be configured in some way. I generally use setter methods to do this, rather than a constructor, because they allow more flexibility: some tests may not need particular values, and I don't want to write lots of constructors. In addition, explicit setter methods let you change the configuration during the course of a test, for example having next() return false. My configuration methods generally follow the “builder” pattern, in which each setter returns the object, allowing calls to be chained together (you'll see this later).

These configuration methods are followed by proxy-specific assertions — the “expectations” of mock object testing. Unlike a typical third-party mock package, you must explicitly invoke these assertions as part of the test, rather than configuring the mock beforehand.

The last piece of interest is the toStub() method: this is boilerplate code, but it's tied to the particular proxy. While you could simply copy the newProxyInstance() call into every test method, keeping it in the handler means there's only one piece of code to maintain.

Example: closeQuietly()

OK, so let's see this in action. Imagine yourself in a world that doesn't have the Jakarta Commons DbUtils, and you are tasked with writing an equivalent of the closeQuietly() method. You would probably write test cases that exercise the three possible outcomes (success, SQLException, NullPointerException), and you'd like to verify that your method is actually closing the ResultSet. Using a proxy object, you might end up with test methods like these:

public void testCloseQuietly() throws Exception
{
    ResultSetClosingProxy proxy = new ResultSetClosingProxy();
    closeQuietly(proxy.toStub());
    proxy.assertCloseCalled();
}


public void testCloseQuietlyWithException() throws Exception
{
    ResultSetClosingProxy proxy = new ResultSetClosingProxy()
                                  .setThrowOnClose(true);
    closeQuietly(proxy.toStub());
    proxy.assertCloseCalled();
}


public void testCloseQuietlyWhenNull() throws Exception
{
    closeQuietly(null);
}

This example shows all of the pieces that I described above: builder-style configuration (setThrowOnClose()), an explicit assertion (assertCloseCalled()), and the toStub() method that creates the actual proxy.

Here's the proxy class that makes this work:

public class ResultSetClosingProxy
implements InvocationHandler
{
    private boolean _throwOnClose;
    private boolean _closeCalled;

    public ResultSetClosingProxy setThrowOnClose(boolean value)
    {
        _throwOnClose = value;
        return this;
    }

    public void assertCloseCalled()
    {
        assertTrue("close not called", _closeCalled);
    }

    public ResultSet toStub()
    {
        return (ResultSet)Proxy.newProxyInstance(
                        this.getClass().getClassLoader(),
                        new Class[] { ResultSet.class },
                        this);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable
    {
        if (method.getName().equals("close"))
        {
            _closeCalled = true;
            if (_throwOnClose)
                throw new SQLException("proxy threw on close");
            else
                return null;
        }
        throw new UnsupportedOperationException(method.getName());
    }
}

By now, this class shouldn't have any surprises. However, we can expand its scope a bit, to test all variants of closeQuietly(): ResultSet, Statement, and Connection — or, for that matter, anything that has a close() method. To make that happen, parameterize toStub(), and pass in the desired interface type:

public <T> T toStub(Class<T> klass)
{
    return (T)Proxy.newProxyInstance(
                    this.getClass().getClassLoader(),
                    new Class[] { klass },
                    this);
}
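Usage is the same as before, just with the desired interface passed in; a hypothetical test of the Statement variant (not shown in the original article) might look like this:

public void testCloseQuietlyWithStatement() throws Exception
{
    ResultSetClosingProxy proxy = new ResultSetClosingProxy();
    closeQuietly(proxy.toStub(Statement.class));
    proxy.assertCloseCalled();
}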

Some Real-World Examples

Stubbing a ResultSet is all well and good, but unless you're writing a database access library, you probably don't need to do so very often. So here are two examples from the last month or so at my full-time job.

Updating legacy order-processing code

I currently work for an eCommerce firm, and a recent project changed the way we interact with Google Checkout. The code is based on the Strategy pattern, with a pluggable set of objects that take an Order and use it to generate the XML sent to Google. Manual testing would mean sending multiple requests to Google and examining the logs. Unfortunately, Google treats such repeated requests as a denial-of-service attack, even in its development sandbox. If I didn't need a reason for unit testing before, that would have given me one.

Our data model is relatively complex, having grown organically over the past nine years. It also has limited test coverage. Writing tests that used real objects would require extensive setup, taking them out of the realm of “unit” testing. In the past I've used in-container testing when touching the order code, but that adds significant time to the development process.

Fortunately, all of the domain objects are represented by interfaces, and the code being tested only used a half-dozen or so methods. This was an almost perfect situation for reflection proxies, because I only needed to implement the methods that I actually used. More important, because I could configure these proxies on a per-test basis, I was able to test situations such as an order without any items.

Reflection proxies also allow us to gradually add test coverage to the order process: as we need more functionality from the various objects, we can implement additional methods.

Adding functionality to a J2EE servlet filter

One of the problems with a nine year old codebase is that it can be difficult to backport best practices. For example, cross-site scripting (XSS) attacks have gotten more sophisticated over the years, and going through thousands of pages to identify vulnerabilities would not only be cost-prohibitive but could not guarantee complete coverage. So we took the approach of using a servlet filter that would escape request parameters before they were used by the page.

I was recently asked to look at this filter, both because it was not catching some attacks, and because it was blocking parameters used by dynamic includes (for those not familiar with J2EE: a servlet can include content generated by another servlet; this included content is handled by a not-quite-separate request that also goes through the filter chain). As this was legacy code, my first step was to create a test harness that would exercise the existing code and allow me to introduce test conditions that would break it.

Like ResultSet, javax.servlet.http.HttpServletRequest is an interface that defines dozens of methods, and I was only interested in those that retrieved the request parameters. There are also several mock implementations of this interface, such as that provided with the Spring framework.

Ultimately, I decided to go with a reflection proxy. One reason was that I didn't want to add a Spring dependency into the code that I was testing. The other, more important reason was that I wanted to change request parameters after the filter was applied (due to a quirk of request processing, in which dynamic includes reuse the original request object — still astounds me, but both Tomcat and WebLogic do it). Within the test, I wanted to overwrite parameter values, but the Spring object only provides addParameter().

Thoughts on Testing with Mock Objects

Are unit tests supposed to test an object's interface or its implementation? Most testing aficionados would say the former: one of the touted benefits of testing is that you can change the implementation and use existing tests as a safety net. However, mock objects far too often drive you to testing implementation: you build up a chain of expectations that can be met by one and only one implementation.

If you find yourself breaking mock-based tests whenever you touch the implementation, you need to stop and ask yourself exactly what your test is trying to accomplish. For example, if you're testing code that adds an address to a customer record, what really matters is that the ADDRESS table has a new row, and that row holds the correct values. The order in which you execute database operations is unimportant (unless your testing is to ensure there are no deadlocks).

So how should you test such code? One approach is to decide that mock objects, and unit-level testing in general, is an inappropriate approach, and instead use an integration test with a live database. Alternatively, you might decide that the domain objects are too closely coupled to the persistence mechanism, and separate them, using a mock object to represent the persistence framework.

Personally, I think the latter approach is closer to the spirit of test-driven development, in which individual objects have simple collaborations with other objects. The result is a complex application built from highly decoupled (and therefore replaceable) parts.

For More Information

Compilable examples:

For simple mock testing, where you care only about whether a method was called, you can use the SimpleMock class from the KDG Commons project.

Martin Fowler has a great article that explores the difference between mocks and stubs.

And for a mock-testing anti-pattern, read Oh no, we're testing the Mock!.

Copyright © Keith D Gregory, all rights reserved
