nosewheelie

Technology, mountain biking, politics & music.

Archive for January, 2007

Instinct 0.1.0

with 7 comments

One of the benefits of working as a consultant is that there are occasional downtimes between gigs. The beauty of these is that you get to spend some time learning new stuff or working on interesting projects. I’ve been lucky enough to have spent the last couple of weeks working on my latest pet project, a behaviour driven development (BDD) framework called Instinct. Now that my little proof-of-concept has enough features to be released, I’m announcing it here and on the Instinct discussion list.

So what is it?

From the source:

Instinct is a Behaviour Driven Development (BDD) framework for Java. Inspired by RSpec, Instinct provides flexible annotation of contexts, specifications, mocks, etc. (via Java 1.5 annotations, marker interfaces or naming conventions); automatic creation of test doubles (dummies, mocks & stubs) & test subjects; a verification API (similar to JUnit’s Assert); and JUnit test runner integration.

What? Another testing framework?

While Instinct isn’t intended as a “testing” framework, it can indeed be used as one in place of JUnit, TestNG, JTiger, etc. Instinct was developed for performing behaviour driven development, which has a slightly different emphasis from testing, focusing more on specifying behaviour and exploring the design of code. I won’t go into the psychology of why names are important; suffice it to say that a new framework allows me to explore behaviour driven development (which the projects I’ve been working on have been doing for a while) while also offering features not found in current testing frameworks.

Also, Instinct was developed to overcome some deficiencies in current frameworks and to simplify key ideas such as mocking. I also wanted to be able to standardise on the names of common items used in testing (subject, fixture, mock, stub, dummy, etc.) and provide framework-level support for these items.

Instinct will also offer flexibility in the way things are marked, and hence made available to the framework. TestNG pioneered the use of annotations for marking tests and providing metadata (such as test groups); however, there are times when you may not want to use annotations and would prefer, say, a naming convention (cf. JUnit picking up method names starting with “test”) or a marker interface.
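As an illustrative sketch (not Instinct’s actual implementation), naming-convention marking amounts to a reflective scan for a method-name prefix:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of naming-convention based marking: discover methods
// whose names start with a given prefix, as JUnit does with "test".
final class NamingConventionLocator {
    static List<Method> locate(final Class<?> cls, final String prefix) {
        final List<Method> found = new ArrayList<Method>();
        for (final Method method : cls.getMethods()) {
            if (method.getName().startsWith(prefix)) {
                found.add(method);
            }
        }
        return found;
    }
}
```

A marker-interface or annotation based locator would follow the same shape, just with a different predicate on the class or method.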

My last post ended with a discussion of the problems with simplifying mock object based tests; Instinct aims to address these by providing explicit framework support.

Here’s a complete list of why I created Instinct:

  • Instinct is a BDD framework, so it has a slightly different focus from conventional testing frameworks.
  • It formalises definitions (by including them in the syntax of the framework) of common test objects such as subjects, mocks & stubs.
  • It includes test objects directly in the lifecycle of a specification.
  • It does away with needless infrastructure setup such as stub/mock creation.
  • It offers flexible marking of test objects – specifications, mocks, stubs, dummies, etc. – based on annotations, marker interfaces and naming conventions.
  • Test objects are explicitly marked with their function.
  • It simplifies mocks and controllers: the mocking API only provides access to the mock, doing away with the need to access the controller and manage two objects.
  • It removes the need for concrete class inheritance (which most testing frameworks now support anyway).
  • It makes use of Java 1.5 features, such as annotations and typesafe mock creation, in order to simplify testing.
  • It embodies lots of common code usually created on TDD projects as Open Source Software, making it available outside individual projects.
  • I wanted to explore the state vs. interaction testing debate.
  • Other Java BDD frameworks (such as jBehave) have a different focus.
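To show what typesafe mock creation buys, here’s a hedged sketch using Java 1.5 generics and dynamic proxies; the Mocker class is hypothetical, not Instinct’s API:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical sketch: generics give back a correctly typed double, avoiding
// jMock 1.x's (ConnectionManager) mock.proxy() cast at every call site.
final class Mocker {
    @SuppressWarnings("unchecked")
    static <T> T mock(final Class<T> type) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type},
            new InvocationHandler() {
                public Object invoke(final Object proxy, final Method method, final Object[] args) {
                    return null; // a real mock would record the call and verify expectations
                }
            });
    }
}
```

A call site then reads `Runnable runner = Mocker.mock(Runnable.class);` with no cast.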

If I wanted BDD, I’d use jBehave

True, jBehave is the original BDD framework. However, jBehave’s goals and implementation are different from Instinct’s. jBehave appears to be aimed at a higher level (i.e. not atomic), and may even allow you to specify acceptance/functional style tests in textual form, parsing them into executable statements. I wasn’t able to confirm any of this however, as the jBehave source link is dead and email to the maintainers has not been returned.

Update: Dan North has replied with the correct link: http://jbehave.org/.

So what do I get?

Instinct 0.1.0 provides the ability to run a behaviour context (the equivalent of a JUnit TestCase): it will discover and run all specifications within that context, and will discover and run all before and after specification methods in the correct order. Contexts can only be run all at once (unless you split them across source trees), via a native Ant task or the supplied JUnit test suite.

Specifically, 0.1.0 includes the following features:

  • Support for running behaviour contexts containing specifications (marked using Java 1.5 annotations), effectively providing a testing framework.
  • Marker annotations for specification lifecycle methods (BeforeSpecification, AfterSpecification & Specification) and grouping of specifications (BehaviourContext).
  • An Ant task, providing aggregation of contexts and result output (similar to the JUnit Ant task).
  • An integrated mocking API, built on jMock 1.1.
  • A simple Verification class, similar to JUnit’s Assert. This will probably change in future releases, possibly to use Hamcrest matchers.
  • Support for running Instinct within an IDE via a JUnit test suite (com.googlecode.instinct.integrate.junit.JUnitSuite); however, this runs all contexts at once and doesn’t provide nice test names in the IDE’s test output UI (JUnit finds tests based on reflection, and Class.getDeclaredMethods() cannot be proxied to provide nice names).
  • A sample project showing how to use Instinct.
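Putting the lifecycle annotations together, a toy sketch of the discovery-and-run cycle might look like this. The annotations below are simplified stand-ins for Instinct’s markers, and the runner is illustrative only:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for Instinct's marker annotations.
@Retention(RetentionPolicy.RUNTIME) @interface BeforeSpecification {}
@Retention(RetentionPolicy.RUNTIME) @interface Specification {}
@Retention(RetentionPolicy.RUNTIME) @interface AfterSpecification {}

// A toy behaviour context: one before, one specification, one after.
class CalculatorContext {
    @BeforeSpecification public void setUp() {}
    @Specification public void mustAddTwoNumbers() {}
    @AfterSpecification public void tearDown() {}
}

// For each @Specification method: run the befores, the specification itself,
// then the afters, recording method names in execution order.
final class ToyContextRunner {
    static List<String> run(final Class<?> contextClass) {
        final List<String> order = new ArrayList<String>();
        try {
            for (final Method spec : methodsWith(contextClass, Specification.class)) {
                final Constructor<?> ctor = contextClass.getDeclaredConstructor();
                ctor.setAccessible(true);
                final Object context = ctor.newInstance();
                for (final Method before : methodsWith(contextClass, BeforeSpecification.class)) {
                    before.invoke(context);
                    order.add(before.getName());
                }
                spec.invoke(context);
                order.add(spec.getName());
                for (final Method after : methodsWith(contextClass, AfterSpecification.class)) {
                    after.invoke(context);
                    order.add(after.getName());
                }
            }
        } catch (final Exception e) {
            throw new RuntimeException(e);
        }
        return order;
    }

    private static List<Method> methodsWith(final Class<?> cls, final Class<? extends Annotation> marker) {
        final List<Method> found = new ArrayList<Method>();
        for (final Method method : cls.getDeclaredMethods()) {
            if (method.isAnnotationPresent(marker)) {
                method.setAccessible(true);
                found.add(method);
            }
        }
        return found;
    }
}
```

The real framework does considerably more (result aggregation, naming conventions, marker interfaces), but the before/specification/after ordering is the core of the lifecycle.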

What’s next?

See the roadmap for complete details, but basically:

  • An IntelliJ plugin.
  • Improvements to the verification API, possibly based on Hamcrest.
  • Completion of the auto test-double creation code.
  • Implementation of additional markers – naming conventions & marker interfaces.
  • JUnit XML test output formatters, for integration of JUnit style XSL stylesheets.

Tell me more

Check out the project home page. There is also an introductory tutorial available and the Instinct discussion list.

Written by Tom Adams

January 24th, 2007 at 2:42 pm

Posted in BDD, Instinct, Java, TDD

Simplifying Mock Object Testing

with 2 comments

If you use mock objects in your atomic (unit) testing and you refactor hard, you’ve probably hit up against a problem where the tests become less readable after refactoring than before. The core of this problem is using the same mock in different ways: usually at least correct behaviour (the golden path) and incorrect behaviour (e.g. the mock throws an exception). The readability issues come about because you have all this code lying around initialising your mocks with slightly different values, and the harder you refactor, the more you end up with code like the following:

private ConnectionManager createConnectionManager() {
    Mock mock = new Mock(ConnectionManager.class);
    return (ConnectionManager) mock.proxy();
}

private ConnectionManager createConnectionManagerThatExpectsGetPoolCalls(int seededNumActive, int seededNumIdle) {
    return createConnectionManagerThatExpectsGetPoolCalls(seededNumActive, seededNumIdle,
        seededNumActive, seededNumIdle);
}

private ConnectionManager createConnectionManagerThatExpectsGetPoolCalls(int seededNumActiveA, int seededNumIdleA,
        int seededNumActiveB, int seededNumIdleB) {
    Mock mock = new Mock(ConnectionManager.class);
    mock.expects(once()).method("getPool").with(eq("A")).will(returnValue(createPool(seededNumActiveA, seededNumIdleA)));
    mock.expects(once()).method("getPool").with(eq("B")).will(returnValue(createPool(seededNumActiveB, seededNumIdleB)));
    return (ConnectionManager) mock.proxy();
}

The sure sign of a smell is the setting of expectations and the returning of a mock across multiple methods. In this example, the only important lines are the setting of expectations:

mock.expects(once()).method("getPool").with(eq("A")).will(returnValue(createPool(seededNumActiveA, seededNumIdleA)));
mock.expects(once()).method("getPool").with(eq("B")).will(returnValue(createPool(seededNumActiveB, seededNumIdleB)));

The rest is plumbing.

Let’s show this in practice using an example. Unfortunately, the codebase I’m currently working on isn’t as complicated as the one the above example is pulled from, so the differences won’t be as stark.

Starting the wrong way around, here are the classes we’ll be testing. To give some context, MarkedFieldLocator aggregates the results of other locators (in this instance only AnnotatedFieldLocator). In the tests, we’ll be checking that the delegation works correctly.

public interface MarkedFieldLocator {
    <A extends Annotation, T> Field[] locate(final Class<T> cls, final Class<A> annotationType, final NamingConvention namingConvention);
    <A extends Annotation, T> Field[] locateAll(final Class<T> cls, final Class<A> annotationType, final NamingConvention namingConvention);
}

public final class MarkedFieldLocatorImpl implements MarkedFieldLocator {
    private final AnnotatedFieldLocator annotatedFieldLocator;

    public MarkedFieldLocatorImpl(final AnnotatedFieldLocator annotatedFieldLocator) {
        this.annotatedFieldLocator = annotatedFieldLocator;
    }

    public <A extends Annotation, T> Field[] locate(final Class<T> cls, final Class<A> annotationType,
            final NamingConvention namingConvention) {
        return annotatedFieldLocator.locate(cls, annotationType);
    }

    public <A extends Annotation, T> Field[] locateAll(final Class<T> cls, final Class<A> annotationType,
            final NamingConvention namingConvention) {
        return annotatedFieldLocator.locateAll(cls, annotationType);
    }
}

public interface AnnotatedFieldLocator {
    <A extends Annotation, T> Field[] locate(Class<T> cls, Class<A> annotationType);
    <A extends Annotation, T> Field[] locateAll(Class<T> cls, Class<A> annotationType);
}

public final class AnnotatedFieldLocatorImpl implements AnnotatedFieldLocator {
    public <A extends Annotation, T> Field[] locate(final Class<T> cls, final Class<A> annotationType) {
        return new Field[]{};
    }

    public <A extends Annotation, T> Field[] locateAll(final Class<T> cls, final Class<A> annotationType) {
        return new Field[]{};
    }
}

Here is a simplified JMock test case for the above class.


import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import com.googlecode.instinct.core.annotate.Dummy;
import com.googlecode.instinct.core.naming.DummyNamingConvention;
import com.googlecode.instinct.core.naming.NamingConvention;
import org.jmock.Mock;
import org.jmock.MockObjectTestCase;

public final class MarkedFieldLocatorAtomicTest extends MockObjectTestCase {
    private static final Class<WithRuntimeAnnotations> CLASS_WITH_ANNOTATIONS = WithRuntimeAnnotations.class;
    private static final Class<Dummy> ANNOTATION_TO_LOCATE = Dummy.class;
    private static final Field[] ANNOTATED_FIELDS = {};

    public void testLocate() {
        final Mock mock = new Mock(AnnotatedFieldLocator.class);
        mock.expects(once()).method("locate").with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(returnValue(ANNOTATED_FIELDS));
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl((AnnotatedFieldLocator) mock.proxy());
        final Field[] fields = fieldLocator.locate(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }

    public void testLocateAll() {
        final Mock mock = new Mock(AnnotatedFieldLocator.class);
        mock.expects(once()).method("locateAll").with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(returnValue(ANNOTATED_FIELDS));
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl((AnnotatedFieldLocator) mock.proxy());
        final Field[] fields = fieldLocator.locateAll(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }
}

As we are setting multiple expectations on the AnnotatedFieldLocator mock, we could refactor the above to something like the following.


import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import com.googlecode.instinct.core.annotate.Dummy;
import com.googlecode.instinct.core.naming.DummyNamingConvention;
import com.googlecode.instinct.core.naming.NamingConvention;
import org.jmock.Mock;
import org.jmock.MockObjectTestCase;

public final class MarkedFieldLocatorAtomicTest extends MockObjectTestCase {
    private static final Class<WithRuntimeAnnotations> CLASS_WITH_ANNOTATIONS = WithRuntimeAnnotations.class;
    private static final Class<Dummy> ANNOTATION_TO_LOCATE = Dummy.class;
    private static final Field[] ANNOTATED_FIELDS = {};

    public void testLocate() {
        final AnnotatedFieldLocator annotatedFieldLocator = createAnnotatedFieldLocator("locate");
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl(annotatedFieldLocator);
        final Field[] fields = fieldLocator.locate(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }

    public void testLocateAll() {
        final AnnotatedFieldLocator annotatedFieldLocator = createAnnotatedFieldLocator("locateAll");
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl(annotatedFieldLocator);
        final Field[] fields = fieldLocator.locateAll(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }

    private AnnotatedFieldLocator createAnnotatedFieldLocator(final String methodName) {
        final Mock mock = new Mock(AnnotatedFieldLocator.class);
        mock.expects(once()).method(methodName).with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(returnValue(ANNOTATED_FIELDS));
        return (AnnotatedFieldLocator) mock.proxy();
    }
}

We could probably refactor a bit harder and clean up some of the duplicate checks, and with closures we could also clean up the duplication in the locate() and locateAll() calls. We’ve now pulled out the duplication between the two test methods, but the mock plumbing is still called from both places. The more mocks we have (we only have one here) the more verbose this kind of plumbing becomes.
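To illustrate the closure point: Java 1.5 has no closures, so an anonymous class is the nearest stand-in. A simplified, hypothetical sketch of factoring out the duplicated call-and-check step (using Callable as the closure type, with the locator call reduced to a stand-in):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;

// One helper owns the duplicated "invoke, then check identity" step;
// each test supplies its own anonymous-class "closure" for the actual call.
final class CallAndCheck {
    static <T> T callAndCheck(final T expected, final Callable<T> call) {
        final T actual;
        try {
            actual = call.call();
        } catch (final Exception e) {
            throw new RuntimeException(e);
        }
        if (actual != expected) { // assertSame-style identity check
            throw new AssertionError("expected the same instance");
        }
        return actual;
    }

    public static void main(final String[] args) {
        final List<String> fields = Arrays.asList("aField");
        // Stands in for "return fieldLocator.locate(...)" in testLocate():
        callAndCheck(fields, new Callable<List<String>>() {
            public List<String> call() {
                return fields;
            }
        });
    }
}
```

The anonymous-class boilerplate is itself noisy, which is part of why this refactoring only pays off once the duplicated body is non-trivial.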

We can clean this up a little if we pull out all our mocks as fields, and do the mock creation in JUnit’s setUp() method.

import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import com.googlecode.instinct.core.annotate.Dummy;
import com.googlecode.instinct.core.naming.DummyNamingConvention;
import com.googlecode.instinct.core.naming.NamingConvention;
import org.jmock.Mock;
import org.jmock.MockObjectTestCase;

public final class MarkedFieldLocatorRefactorAtomicTest extends MockObjectTestCase {
    private static final Class<WithRuntimeAnnotations> CLASS_WITH_ANNOTATIONS = WithRuntimeAnnotations.class;
    private static final Class<Dummy> ANNOTATION_TO_LOCATE = Dummy.class;
    private static final Field[] ANNOTATED_FIELDS = {};
    private Mock mockFieldLocator;

    @Override
    public void setUp() {
        mockFieldLocator = new Mock(AnnotatedFieldLocator.class);
    }

    public void testLocate() {
        mockFieldLocator.expects(once()).method("locate").with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(
                returnValue(ANNOTATED_FIELDS));
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl((AnnotatedFieldLocator) mockFieldLocator.proxy());
        final Field[] fields = fieldLocator.locate(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }

    public void testLocateAll() {
        mockFieldLocator.expects(once()).method("locateAll").with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(
                returnValue(ANNOTATED_FIELDS));
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl((AnnotatedFieldLocator) mockFieldLocator.proxy());
        final Field[] fields = fieldLocator.locateAll(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }
}

With some creative inlining, we’ve now reduced the code by a method per mock. You could of course argue that my initial refactoring was a setup for this step (which it was); however, I’ve now seen this same style of refactoring on two projects. We’ve also pulled the expectations back into the test methods, making the tests easier to read. This style of mock creation is especially useful if you have multiple mocks that are reused across multiple tests. JUnit’s test lifecycle ensures that setUp() will be called before each test method, ensuring that all mocks have their expectations reset.

The test is getting better; it’s become simpler and easier to read. However, we can go one step further. The mock creation is all plumbing that we can have automatically generated for us. In order to create a mock, we need the type and a way of knowing which fields need to be mocked; below we choose a naming convention (the field starts with “mock” and has a null value). We then need some magic that inserts mocks for us; in this case it’s a parent test case (AutoMockingTestCase) that reflectively finds all fields marked as mocks and, using their type, automatically mocks them.
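The scan itself can be sketched as follows; this is a purely illustrative stand-in (the real parent test case would create jMock mocks rather than bare dynamic proxies):

```java
import java.lang.reflect.Field;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical sketch of the parent test case's "magic": find null fields
// whose names start with "mock" and inject a dynamic proxy of the field's
// interface type. A real automocker would wire in jMock mocks instead.
final class AutoMocker {
    static void autoMock(final Object testCase) {
        try {
            for (final Field field : testCase.getClass().getDeclaredFields()) {
                field.setAccessible(true);
                if (field.getName().startsWith("mock") && field.get(testCase) == null) {
                    field.set(testCase, newMock(field.getType()));
                }
            }
        } catch (final IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    private static Object newMock(final Class<?> type) {
        return Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type},
            new InvocationHandler() {
                public Object invoke(final Object proxy, final Method method, final Object[] args) {
                    return null; // a real mock would record and verify the call
                }
            });
    }
}

// A test case as the automocker sees it: mockRunner qualifies, notAMock doesn't.
class ExampleTest {
    Runnable mockRunner;
    String notAMock = "keep";
}
```

setUp() in the parent test case would call autoMock(this), so each test method starts with freshly created mocks.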

import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import com.googlecode.instinct.core.annotate.Dummy;
import com.googlecode.instinct.core.naming.DummyNamingConvention;
import com.googlecode.instinct.core.naming.NamingConvention;
import com.magical.AutoMockingTestCase;

public final class MarkedFieldLocatorRefactorAtomicTest extends AutoMockingTestCase {
    private static final Class<WithRuntimeAnnotations> CLASS_WITH_ANNOTATIONS = WithRuntimeAnnotations.class;
    private static final Class<Dummy> ANNOTATION_TO_LOCATE = Dummy.class;
    private static final Field[] ANNOTATED_FIELDS = {};
    private AnnotatedFieldLocator mockFieldLocator;

    public void testLocate() {
        expects(mockFieldLocator, once()).method("locate").with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(
                returnValue(ANNOTATED_FIELDS));
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl(mockFieldLocator);
        final Field[] fields = fieldLocator.locate(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }

    public void testLocateAll() {
        expects(mockFieldLocator, once()).method("locateAll").with(same(CLASS_WITH_ANNOTATIONS), same(ANNOTATION_TO_LOCATE)).will(
                returnValue(ANNOTATED_FIELDS));
        final MarkedFieldLocator fieldLocator = new MarkedFieldLocatorImpl(mockFieldLocator);
        final Field[] fields = fieldLocator.locateAll(CLASS_WITH_ANNOTATIONS, ANNOTATION_TO_LOCATE, new DummyNamingConvention());
        assertSame(ANNOTATED_FIELDS, fields);
    }
}

We’ve now removed all the mock plumbing, so our tests focus only on what is important, making the final code a lot simpler than what we started with.

So what’s still wrong with this situation? For one, in most mocking frameworks you have the notion of the mock controller (JMock calls this org.jmock.Mock; EasyMock, org.easymock.MockControl) and the mocked object (JMock calls this a proxy) that you pass into the class under test. Having to deal with these two things is cumbersome, as witnessed by the code above. An obvious simplification is to hide these behind an API that manages the mapping between the mock and the control (say in a Map, as is done in the expects() method above). EasyMock does this with its EasyMock.expect(<T>) method (though this has limits when generics come into play and cannot always be used), and similar code can be created for JMock. This allows the client code (the test) to access only one thing (the mocked object) at all times.
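A minimal sketch of that simplification, with plain Objects standing in for the proxy and for jMock’s Mock controller (purely illustrative, not Instinct’s actual API):

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical sketch of hiding the controller behind the mock: the framework
// keeps a proxy -> controller map, so client code only ever touches the proxy.
// "Controller" stands in for jMock's Mock / EasyMock's MockControl.
final class MockRegistry {
    private static final Map<Object, Object> CONTROLLERS = new IdentityHashMap<Object, Object>();

    static void register(final Object proxy, final Object controller) {
        CONTROLLERS.put(proxy, controller);
    }

    // expects(mock) looks up the controller, so tests never manage two objects.
    static Object expects(final Object proxy) {
        final Object controller = CONTROLLERS.get(proxy);
        if (controller == null) {
            throw new IllegalStateException("not a registered mock: " + proxy);
        }
        return controller;
    }
}
```

An identity map is used deliberately: proxies routinely override equals()/hashCode(), so the lookup must key on the proxy instance itself.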

Secondly, there is a lot of magic going on behind the scenes in the parent test case. This is bad from an explicitness point of view – there’s nothing denoting that AnnotatedFieldLocator is mocked or how that mocking takes place, and we’re also inheriting from a concrete superclass. It would be much better to have a more explicit way of marking things to be mocked (perhaps with annotations) and have a framework do it for you (more on this later), without the need for a concrete superclass.
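For instance, an annotation-based marker might look like the following; the @Mock annotation and scanner here are hypothetical, not necessarily what Instinct provides:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical: an explicit marker instead of the "mock" field-name prefix.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface Mock {}

// A framework-side scan for fields explicitly marked as mocks.
final class AnnotationScanner {
    static int countMockFields(final Class<?> cls) {
        int count = 0;
        for (final Field field : cls.getDeclaredFields()) {
            if (field.isAnnotationPresent(Mock.class)) {
                count++;
            }
        }
        return count;
    }
}

// Only the annotated field is treated as a mock; naming no longer matters.
class AnnotatedExample {
    @Mock Runnable collaborator;
    String plain;
}
```

The annotation makes the intent visible at the declaration site, and the framework can inject the mocks without any inheritance from a concrete superclass.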

Several projects are already using this approach: a large project at a bank (the guys on that team came up with the automocking idea) and the open source Boost framework.

Written by Tom Adams

January 16th, 2007 at 11:30 am

Posted in Agile, Instinct, Java