Test Implementation

Based on our goals, we can now start to define a test for our component. The test definition involves going through each of our goals and enhancing the test so the goal is satisfied, while ensuring that we do not accidentally get distracted with any of the nongoals.

The first goal is to ensure that a valid XML document is processed correctly and that the appropriate database calls are made. Listing 3-2 shows the test skeleton.

Listing 3-2. Initial attempt at a functional test

@Test
public void componentAShouldUpdateDatabase() throws Exception {
  ComponentA component = new ComponentA();
  component.onMessage(...);
  Connection c = DatabaseHelper.getConnection();
  String verifySQL = ...;
  PreparedStatement ps = c.prepareStatement(verifySQL);
  // set parameters
  ResultSet resultSet = ps.executeQuery();
  // read result set and verify results match expectations
  resultSet.next();
  String someValue = resultSet.getString(1);
  assert "foo".equals(someValue);
}

Testing for Success

As soon as we start to fill in our test code, we run into problems. The first is that the component's only method is onMessage, which takes in a JMS Message. That type is in fact an interface, as is our expected message type, TextMessage. The API does not provide an easy way to create instances of these interfaces (which, incidentally, is a good thing: an API should define contracts, not implementations). So how do we test our component?

There are two options for tackling this hurdle.

  1. Use mock (or stub) objects to create our own implementation of TextMessage, represented by a simple POJO with setters for the message body and properties.
  2. Refactor the component so the business functionality is not coupled to the JMS API.

The first approach is fairly popular, but it runs up against one of our nongoals: not testing external APIs. Strictly speaking, we'd be using mock objects only to stand in for the external API rather than to test it, but in practice we'd end up having to model far too much of it.

We would have to define our own JMS message implementation and, to ensure correctness, verify that it matches the specification contract for TextMessage if we hope to reuse it in any other tests that might expect different (and more compliant!) semantics of TextMessage. This extra code is another source of potential bugs and yet more code to maintain. The mock object approach for external APIs should generally be reserved for black-box testing, where we do not have access or rights to modify the source of the code being tested and so are forced to provide an environment that matches its expectations.

Although using mock or stub objects is the incorrect choice for our test, this is not always the case. For APIs that are complex or have very implementation-specific behavior, mocking of third-party dependencies should be avoided. However, there are times when the external API is trivial and easy to mock, in which case there is no harm in whipping up a quick stub for testing purposes.

The second approach is the correct one for our purposes. Since our goal is not to check whether we can retrieve text from a JMS message, we assume that functionality works and can be relied on. Our component should instead be modified so that the business functionality is decoupled from the incoming message. The decoupling gains us an important benefit: increased testability. We did make an implicit tradeoff in this decision, too. The modification to the code is the result not of a domain-based consideration (no business requirement is satisfied by this change) but of a testability one.

In Listing 3-3, the onMessage method handles all the JMS interaction and then passes the XML document string to the processDocument method, which then does all the work.

Listing 3-3. Refactoring component to decouple extraction from parsing

public void onMessage(Message message) {
  TextMessage tm = (TextMessage)message;
  processDocument(tm.getText());
}

public void processDocument(String xml) {
  // code previously in onMessage that updates DB
  // and calls stored procedure
}

We can now modify our functional test as shown in Listing 3-4 so that it no longer references JMS at all and instead simply passes the XML string to the processDocument method.

Listing 3-4. Refactored test to only test message processing

@Test
public void componentAUpdateDatabase() throws Exception {
  ComponentA component = new ComponentA();
  String xml = IOUtils.readFile(new File("trade.xml"));
  component.processDocument(xml);
  Connection c = DatabaseHelper.getConnection();
  String verifySQL = ...;
  PreparedStatement ps = c.prepareStatement(verifySQL);
  // set parameters
  ResultSet resultSet = ps.executeQuery();
  // read result set and verify that results match expectations
  resultSet.next();
  String someValue = resultSet.getString(1);
  assert "foo".equals(someValue);
}

Note how we load in the sample XML data from a file and then pass it to the component. The fact that the component happens to rely on JMS for message delivery is not relevant in terms of its business functionality, so we restructured the component to allow us to focus on testing the functionality rather than the JMS API.

An interesting side effect of this approach is that we made the processDocument method public. This method could well be an implementation detail that should not be exposed. To restrict its access level, we could make it protected or package protected and ensure that the test case is in the appropriate package. That way it can be invoked from the test but not from other clients.
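A minimal sketch of the package-private variant described above (the package name is hypothetical):

package com.example.trades;   // hypothetical package for ComponentA

public class ComponentA {
  // no access modifier: visible only within com.example.trades, so the test class
  // must be declared in the same package (typically in a separate test source tree)
  void processDocument(String xml) {
    // ...
  }
}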

As a side note, though we've moved the processing into another method, in practice we'd go a bit further than that and move it to another class altogether. That refactoring will result in a more reusable class that is not coupled to JMS at all.

At this point, we have a test that can verify that a sample XML file can be processed and that the database has been updated correctly.

Building Test Data

Now that we can consume a previously recorded XML file, we can easily grow the test input data and support as many files as we want. We can create a test for every file that ensures that all sorts of different input data can be verified.

Unfortunately, this approach very quickly proves itself to be rather cumbersome. The input XML files can vary significantly, and alarm bells should be going off anyway whenever we find ourselves copying and pasting, thus violating the Don't Repeat Yourself (DRY) principle.

As we have discussed previously, this is where it's useful for the testing framework to support Data-Driven Testing. We simply modify our test to use Data Providers, and parameterize the XML data as shown in Listing 3-5.

Listing 3-5. Refactored test using a Data Provider

@Test(dataProvider = "componentA-data-files")
public void componentAUpdateDatabase(String xml) throws Exception {
  ComponentA component = new ComponentA();
  component.processDocument(xml);
  // rest of test code
}

@DataProvider(name = "componentA-data-files")
public Iterator<Object[]> loadXML() throws Exception {
  // return data set
}

Note that our test now takes in a parameter and no longer has to concern itself with sourcing the sample data or determining how to load it in. All it does now is specify its Data Provider. The actual mechanism of loading the XML is now delegated to a separate loader. Our test is cleaner as a result since we have parameterized the variable data and can now invoke it multiple times for each of our sample XML files.

The type of the parameter is String. Because the Data Provider and the test method are linked only by name, there is no compile-time check that their types line up: the Data Provider must return data matching the types declared by the test method. For example, if the loadXML method were to return Document objects, we would get a runtime type mismatch exception.

The Data Provider itself can now deal with loading the XML file, as shown in Listing 3-6. Note that it does not need to have a hardcoded list. Instead it scans a specific directory and feeds all the files found to the test case. So the next time a new sample XML file needs to be added to the test suite, we just have to drop it in a specific directory and it will automatically be included, no coding or recompilation needed.

Listing 3-6. Refactored test to read in all data files from a specific directory

@DataProvider(name = "componentA-data-files")
public Iterator<Object[]> loadXML() throws Exception {
  File[] f = new File("samples/ComponentA/trades").listFiles();
  final Iterator<File> files = Arrays.asList(f).iterator();
  return new Iterator<Object[]>() {

    public boolean hasNext() {
      return files.hasNext();
    }

    public Object[] next() {
      return new Object[]{IOUtils.readFile(files.next())};
    }

    public void remove() {
      throw new UnsupportedOperationException();
    }
  };
}

The provider is fairly simple. It grabs a list of all the XML files in a specific directory and adds the file contents to the parameters. The file contents are added as an array of size 1 since the test method takes in just the one parameter. If we needed to parameterize other variables, that would be reflected in the array returned by the next() iterator method.

The provider method name does not matter at all; it can be whatever is most appropriate for a given case, as long as the @DataProvider annotation name matches what our test expects.

Of course, it is possible to return an array of Object[] from the Data Provider. However, that approach would mean that we would have to load all the file data in memory at once since the array has to be prepopulated. While this will work for a small data set, the memory requirements of the test will keep increasing over time, so the test will not scale with our data. Since this test is designed to grow over time, a little bit of upfront planning will head off this issue early on; we simply use lazy loading for the Data Provider so we only load one file's data at a time.
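For comparison, the eager equivalent would look roughly like the following sketch (the provider name is chosen here for illustration); every file is read up front, which is exactly what the lazy iterator above avoids.

@DataProvider(name = "componentA-data-files-eager")
public Object[][] loadAllXML() throws Exception {
  File[] files = new File("samples/ComponentA/trades").listFiles();
  Object[][] data = new Object[files.length][];
  for (int i = 0; i < files.length; i++) {
    // the contents of every sample file are held in memory for the whole run
    data[i] = new Object[]{IOUtils.readFile(files[i])};
  }
  return data;
}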

Test Setup Issues

Unfortunately, our test is not idempotent. An idempotent test is one where the result of running a test once is the same as running it multiple times. Describing something as idempotent is essentially saying that it does not alter state when it is invoked multiple times. So, for example, a method that reads data from a database is idempotent since calling it again will return the same data. On the other hand, a method that writes to a database may not be idempotent; invoking it again will likely result in an error since the state of the database has changed once the method has been invoked.

While we'll cover specific strategies for handling database setup and management, the concepts apply equally to any external stores we might need to interact with as part of the tests. These range from file systems to WebDAV resources to remote repositories of any format.

Not only is our test required to be idempotent, but the ordering of tests themselves shouldn't matter (assuming we haven't declared dependencies to enforce any ordering). So in addition to being idempotent, tests should not have any impact on other tests in terms of state or data.

Since the test performs a number of write operations, successive runs can easily be polluted from the results of previous runs. Any test that writes to a database will suffer from this problem, and while there is no ideal solution, a number of approaches can help us cope with this problem.

  • Embedded databases
  • Initialized data in the test setup
  • Transaction rollbacks

Each of these approaches has its uses, and which combination of them we end up going with depends on the application and environment; some might not be options, and some might be more cumbersome than others.

Note that it might be tempting to consider using a mock library for the JDBC functionality. Resist that temptation! We discussed mocks earlier, and this is a great example of the urge and the need to resist it. A JDBC mock object would not (could not, even) cope with all the intricacies of database behavior, much less all the issues surrounding transactions or locking.

Embedded Databases

A number of Java-based database engines have been specifically designed with embedding support in mind. These databases can be created and initialized on the fly, from inside the test. They have low overhead in terms of setup costs and often perform very well.

The disadvantage of this approach, however, is that it deviates significantly from the environment the application will actually run in. There are also often significant differences between database features. While this approach is well suited to applications that use a database purely as a data store and restrict themselves to ANSI SQL database calls or use an abstraction layer (such as JPA or any similar object-relational mapping tool), it is not suitable for any applications (such as our example) that have application logic embedded in the database. Stored procedures are not portable across databases, and reimplementing them in our embedded database would be too much effort.
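As a rough sketch of what this looks like in practice, assuming an embedded engine such as H2 is on the test classpath (the table is purely illustrative), the database can be created on the fly in a configuration method:

private Connection embedded;

@BeforeClass
public void startEmbeddedDatabase() throws Exception {
  Class.forName("org.h2.Driver");
  // an in-memory database that lives for the duration of the test run
  embedded = DriverManager.getConnection("jdbc:h2:mem:componentA;DB_CLOSE_DELAY=-1");
  Statement s = embedded.createStatement();
  s.execute("CREATE TABLE trades (id INT PRIMARY KEY, currency VARCHAR(3))");
  s.close();
}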

Initialized Data in the Test Setup

The next approach is to load our test database with a known quantity of test data. This would include all the data we'd like to manipulate, as well as any external references that our component relies on. For some tests, this might not even be sufficient, so in addition to loading data we'd have to ensure that extraneous data is also removed on start-up. While somewhat cumbersome, this approach can be combined with the embedded database engine to satisfy the needs of the component sufficiently for it to run successfully in any environment.

In practice, many tests rely on two different kinds of data. The first is statics. Statics are effectively constants stored in the database. For example, the list of U.S. states is considered a static, as is a list of currencies. If our test does its work against a shared remote database (a test instance, not the production one!), it can reasonably expect that the static data will be in place. After all, this information is constant across all tests, so there's no reason to load it in every run.

However, tests do also rely on data that is specific to their business functionality. For example, we might have a test that asserts that an audit trail for a financial transaction meets certain criteria, or that an invalid audit trail correctly raises the right alarms. In such cases, our test setup needs to load this test data into the database and then clear it out after the test run.
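A rough sketch of what such per-test loading and cleanup might look like, assuming a connection field obtained in a configuration method (as shown later in Listing 3-7) and a purely illustrative audit_trail table:

@BeforeMethod
public void loadTestData() throws SQLException {
  // business-specific rows this test relies on; statics such as currencies
  // are assumed to already exist in the shared test database
  Statement s = connection.createStatement();
  s.execute("INSERT INTO audit_trail (trade_id, status) VALUES (1001, 'NEW')");
  s.close();
}

@AfterMethod
public void removeTestData() throws SQLException {
  Statement s = connection.createStatement();
  s.execute("DELETE FROM audit_trail WHERE trade_id = 1001");
  s.close();
}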

One downside of this approach is the difficulty of maintaining a robust data set that is meaningful enough to test. As the project progresses, there's a strong chance that data structures and schemas will change, and the test data can become stale. Updating the tests constantly in this situation can be quite unsatisfying as it involves duplicating the effort it has taken to implement the changes in the rest of the code base.

Thus, we have a tradeoff between capturing a meaningful data set and locking ourselves into a very specific snapshot of our model that will constantly need updating and modification to keep up to date. There is no right answer for which approach is best; the choice varies depending on the project and how much we expect the model to evolve over time.

Transaction Rollbacks

Another approach is to use Java's transaction support to prevent the data from being written out to the permanent data store. Whether we work with plain JDBC connections or with a transaction manager (both options are discussed below), the general idea is the same: start a transaction, perform all of our write operations, verify that everything works, and then roll back the transaction. The benefit of this approach is that we do not have to worry about cleanup; simply rolling back the transaction ensures that all the work is undone correctly, something that a manual cleanup operation might not do quite as thoroughly.

Manual cleanup is also more brittle, since it means more code to write and thus more code that could go wrong. It becomes even trickier if we're testing against multiple databases, where the hassle of ensuring that the databases are synchronized and the data is correctly cleaned up is too cumbersome for testing.

As with many of these approaches, there are disadvantages. Code that manipulates transactions or starts its own transactions cannot be tested this way without complicated nested transaction setups. For example, any code that calls commit() or rollback() should usually not be tested using this approach unless you're very clear on the semantics of what the code does and how having an external transaction will impact its behavior.

Most applications will communicate with the database either through straight JDBC or through a DataSource implementation. The first approach involves manually working with Driver and Connection objects. Connections obtained through this mechanism are in auto-commit mode by default, so each statement is committed immediately; to prevent any writes from being made permanent, our test simply has to turn off auto-commit via the Connection.setAutoCommit(false) method.

The other option is to perform database access through a DataSource object, which can integrate with a transaction manager and thus can be told to abort a transaction. We'll outline the specifics of this approach in Chapter 4.

Note that it is also important to ensure that the isolation level is set to READ UNCOMMITTED. Some databases (particularly embedded ones) have this as the default. The reason we need this is that we'd like to be able to verify some of the data we've attempted to write, and this isolation level allows us to read uncommitted data. Setting it to anything else means that we'd have to ensure that data is validated in the same transaction as it's being written, or else we'd never get to read it.
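On a plain JDBC connection this is a single call; a minimal sketch, assuming the database and driver support the level:

// let verification queries see rows written by a transaction that has not yet committed
connection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);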

Having said that, it is important to understand the semantics of the isolation level we choose. It's very likely that a different isolation level is in place in the production environment, and this subtle difference between environments could result in some difficult-to-track bugs that do not manifest themselves in the test environment. Furthermore, this isolation level can cause issues when tests run concurrently, as different tests might end up seeing each other's uncommitted data if a single transaction covers a particular set of tests.

Selecting the Right Strategy

For our component, we can go with disabling autocommit on the connection we obtain in the test. An embedded database is not an option, since we rely on a database-specific stored procedure. So the test can expect a correctly set-up database to be available to connect to.

Looking over the test code as we have it now, we currently obtain a database connection within the test method itself. The fact that we expect certain data to be available in the database opens us to the possibility of connecting to a database that does not have this data or, even worse, failing to connect to the database at all. In both cases, we don't get to test the business functionality, so we don't actually know if our business logic is correct or not.

In that case, our test fails not because of an explicit logic error, but because it never got the chance to exercise the logic at all. To distinguish between the two, another refactoring is called for.

We know that our test will be called multiple times, and we also know that it's fairly likely that we will end up with further tests that verify different aspects of our component, all of which are going to need access to the database. The database is an external dependency, so we model it accordingly in Listing 3-7, as part of the environment setup, rather than the test proper.

Listing 3-7. Extract database setup into configuration methods

private Connection connection;

@BeforeMethod
public void connect() throws SQLException {
  connection = DatabaseHelper.getConnection(...);
  connection.setAutoCommit(false);
}

@AfterMethod
public void rollback() throws SQLException {
  connection.rollback();
}

We've refactored the test to move the database connection handling into setup methods. The benefit of this approach is that if we do have an issue connecting to the database, we will get a more helpful error message that makes it clear that the failure is in setup and not in the tests themselves. We also ensure that the connection is rolled back after every test method invocation.

Of course, it might be desirable to perform a number of tests and then roll them back at the end. The rollback method can instead be marked with @AfterClass or @AfterSuite, depending on our needs.
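For example, to let all the test methods in a class share their written data and discard it only at the end, the rollback could be moved to a class-level configuration method:

@AfterClass
public void rollbackClass() throws SQLException {
  // undo everything written by the test methods in this class
  connection.rollback();
}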

An interesting problem we might face is that the code we're testing might explicitly call commit. How would we prevent the transaction from committing in this case?

To deal with this situation, we employ the Decorator pattern. We'll assume that the code has a connection provided to it. In Listing 3-8, we wrap the connection in a decorator that prevents calls to commit and pass that to the component instead of the real connection.

Listing 3-8. Disabling commit by using a wrapped connection

private WrappedConnection wrappedConnection;

@BeforeMethod
public void connect() throws SQLException {
  connection = DatabaseHelper.getConnection();
  connection.setAutoCommit(false);
  wrappedConnection = new WrappedConnection(connection);
  wrappedConnection.setSuppressCommit(true);
}

The WrappedConnection implementation is a decorator around the actual connection. It implements the Connection interface. The relevant parts are shown in Listing 3-9.

Listing 3-9. WrappedConnection implementation

public class WrappedConnection implements Connection {
  private Connection connection;
  private boolean suppressClose;
  private boolean suppressCommit;

  public WrappedConnection(Connection c) {
    this.connection = c;
  }

  public boolean isSuppressClose() {
    return suppressClose;
  }

  public void setSuppressClose(boolean suppressClose) {
    this.suppressClose = suppressClose;
  }

  public boolean isSuppressCommit() {
    return suppressCommit;
  }

  public void setSuppressCommit(boolean suppressCommit) {
    this.suppressCommit = suppressCommit;
  }

  public void commit() throws SQLException {
    if(!suppressCommit)
      connection.commit();
  }

  public void close() throws SQLException {
    if(!suppressClose)
      connection.close();
  }

  // rest of the methods all just delegate to the connection
}

Using the wrapped connection now enables us to prevent any objects we use in our tests from calling commit or close, as needed.

Error Handling

At this point, we've achieved two of our stated goals, while reworking our code to ensure we don't pollute our tests with nongoals.

This test is valuable in that it successfully verifies that our component behaves the way we'd like it to, but an equally important part of testing is capturing boundary and error conditions. Invariably in the real world, things go wrong. They often go wrong in interesting and perplexing ways, and they often do so at fairly inconvenient times. What we'd like is to at least capture some of these failures and know what our code is going to do. It's fine if things blow up, as long as we know exactly what will blow up and how.

Of course, it's tempting to wrap the whole thing in a big try/catch, log the error, and forget about it. In fact, if we look at our component code, that's pretty much what it does. It's equally tempting to think that we can easily figure out all the failure points and account for them. Very, very few people can do this. It's important, in fact, not to get bogged down thinking of every possible thing that can go wrong and check for it. It's crucial that we remain pragmatic and practical and, at this point, handle only likely errors.

Our test will not capture everything that can go wrong. Things will go wrong over time that we did not anticipate. Some will be obvious, but others will be insidious and tricky. The crucial lesson in error handling, then, is to take the feedback and results from a live run and feed them back into our tests. It's less important to have a comprehensive set of failure tests up front than it is to capture actual failures as they happen after the code has been deployed. The value of tests lies in their growth and evolution over time, not in the initial spurt, which in terms of the big picture is insignificant.

When capturing a bug that's found in production code, it's also important to label it correctly. The requirement that should always be satisfied is this: "If someone new joins the project six months after I leave, will he or she be able to look at this test case and know why it's here?" Comments in the test should include a link to the associated bug report. If that's not available, add a brief explanation of what functionality the test verifies beyond just the code.
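For instance, a labeled regression test might look like the following sketch; the issue number, tracker URL, and sample file name are all hypothetical:

/**
 * Reproduces issue TRADE-142 (http://tracker.example.com/TRADE-142): a production
 * document that ComponentA failed to process. The sample file is a scrubbed copy
 * of the document attached to that report.
 */
@Test
public void componentAHandlesTrade142Document() throws Exception {
  String xml = IOUtils.readFile(new File("samples/ComponentA/trades/trade-142.xml"));
  new ComponentA().processDocument(xml);
  // assertions on the resulting database state would follow, as in Listing 3-4
}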

So, what can go wrong with our component? The developer responsible for this component merrily yelled out "Nothing!" when asked. But he wasn't quite as accurate as we'd all hoped.

One interesting error that cropped up time and time again in the log files is the ubiquitous NullPointerException. On further investigation, it turns out that the processor extracts a currency code from the XML. It then looks up some rates associated with that currency. The problem? The currency wasn't listed in the database, hence a dazzling variety of long stack traces in the log files. No problem; the developer adds a check to verify that the currency is valid, and if not, to throw an exception.

Now that we have some tests in place, the first thing we need to do is model the failure before fixing it. Having an easy way to reproduce an error instead of clicking around a UI is a huge timesaver and is a very easy payoff for having gone to the bother of developing a testing strategy.

How do we model this failure? Thanks to our Data-Driven Testing, all we have to do is get the XML file with the invalid currency and drop it into our data directory. Running our test now will correctly show the NullPointerException.

We now have a reproducible error, and we know how to fix the code. The fix involves explicitly checking for invalid currencies and throwing an application-specific exception indicating invalid input data (e.g., InvalidTradeException). Putting that fix in shows that we correctly throw the exception, but, of course, our test will still fail since it does not expect this exception.
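The component-side fix might look roughly like the following sketch; the rate-lookup helper and the exact message are assumptions, since the chapter does not show this part of the code:

// inside processDocument, after the currency code has been extracted from the XML
BigDecimal rate = rateLookup.findRate(currencyCode);   // hypothetical DAO call
if (rate == null) {
  // previously this missing rate flowed onward and caused the NullPointerException
  throw new InvalidTradeException("No rates configured for currency " + currencyCode);
}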

One option shown in Listing 3-10 is to catch the exception in the test.

Listing 3-10. Initial attempt to handle invalid data

@Test(dataProvider = "componentA-data-files")
public void componentAUpdateDatabase(String xml) throws Exception {
  ComponentA component = new ComponentA();
  try {
    component.processDocument(xml);
  }
  catch(InvalidTradeException e) {
    // this is OK
    return;
  }
  // rest of test code
}

As a side note, it's convenient to have tests declare that they throw an exception, as a reasonable catch-all mechanism for "anything that can go wrong." In production code, this is a bad idea, as it does not allow the caller to explicitly handle exceptions. Here we see yet another example of a pattern that is acceptable (and even recommended, for the sake of simplicity) in test code that should always be avoided in production code.

The problem with the approach is that it does not enable us to distinguish between the cases where that failure is expected and those where it isn't. Instead, what we should do is distinguish between expected successes and expected failures. The test as it stands can pass for two situations: It either passes when we have good data, or it passes when we have bad data. In either case, we can't make assertions about what actually happened; did we test the good data path or the bad data one? More importantly, what didn't we test?

The fact that the two paths happen to be "good data" and "bad data" in our case is a specific example. It's equally easy to accidentally write a test that has two or more conditions for passing, and we'd have the same issue with regard to what sort of assertions we can make about the success result.

The guideline to follow here is that a test shouldn't pass in more than one way. It's fine if the test verifies different failure modes, but having one that can pass for both good and bad data invites subtle errors that are tricky and difficult to track down. We therefore define another directory and Data Provider in Listing 3-11 that handle failures, the same way as we do for valid input.

Listing 3-11. Defining a separate provider for invalid data

@DataProvider(name = "componentA-invalid-data-files")
public Iterator<Object[]> loadInvalidXML() throws Exception {
  File dir = new File("samples/ComponentA/trades/invalid");
  File[] f = dir.listFiles();
  final Iterator<File> files = Arrays.asList(f).iterator();
  return new Iterator<Object[]>() {

    public boolean hasNext() {
      return files.hasNext();
    }

    public Object[] next() {
      return new Object[]{IOUtils.readFile(files.next())};
    }

    public void remove() {
      throw new UnsupportedOperationException();
    }
  };
}

@Test(dataProvider = "componentA-invalid-data-files",
      expectedExceptions = InvalidTradeException.class)
public void componentAInvalidInput(String xml) throws Exception {
  ComponentA component = new ComponentA();
  component.processDocument(xml);
  // rest of test code
}

Here we defined another set of inputs with an associated test that always expects an exception to be thrown. If we do end up putting a valid XML file in the invalid trades directory, we will also correctly get a test failure since our processDocument method will not throw InvalidTradeException (since the input is valid).

Emerging Unit Tests

As we receive more bug reports from the QA team, our test sample grows steadily. Each sample will exercise a certain code branch. It won't be long, however, before we find that it's actually rather cumbersome to run the entire functional test just to narrow down an issue with our XML parsing.

The example of invalid data that we highlighted earlier demonstrates this perfectly. In this case, our first step in the functional test fails. We never even get to the database. Given that our problem is restricted to a small part of our functional test, it would be tremendously useful if we could isolate that one part and split it off into a unit test.

This highlights one of the important paths through which we can grow our unit tests; let them evolve naturally through functional tests. As we start to debug failures, unit tests will become more apparent as a quick and easy way to reproduce fast failures. This top-down approach is very useful in isolating bugs and in testing an existing code base. Once we're in the habit of testing, using the bottom-up approach of unit tests followed by functional tests is well suited when developing new functionality.

We have an important principle here. Unit tests do not necessarily have to be written before any other kind of test—they can be derived from functional tests. Particularly in large projects or an existing code base, writing useful unit tests at first can be tricky because, without an understanding of the bigger picture, they're likely to be too trivial or unimportant. They can instead be derived from meaningful functional tests intelligently, as the process of debugging and developing functional and integration tests will reveal their unit test components. Since functional tests are (hopefully!) derived from specifications and requirements, we know that they satisfy a core piece of functionality, whereas in a large project it might be difficult to immediately spot meaningful units that are relevant and in need of testing.

So, for our example, we will go through another round of refactoring. We need to split up the XML validation and database processing into separate methods so that we can invoke and test them separately.

Our component code now becomes something like Listing 3-12.

Listing 3-12. Refactored component to separate processing from validation

public void processDocument(String xml)
  throws InvalidTradeException {
  Document doc = XMLHelper.parseDocument(xml);
  validateDocument(doc);
  // do DB work
}

public void validateDocument(Document doc)
  throws InvalidTradeException {
  // perform constraint checks that can't be captured by XML
}

We separated the document validation from document processing so that we can test them separately. The upshot of this refactoring is that we now have a simple unit test that has very few (and more importantly, light and inexpensive) external dependencies that can be used to validate all our documents. This test does not require a database or much of an environment since all it does is look through our set of XML documents to ensure that any constraints that cannot be expressed via the document's DTD or schema are not violated.

Since we already have a good set of input XML, why not reuse it for our unit test, too? By the very nature of how we derived our unit test, we know that if it fails, the functional test will also fail. This is an important aspect of functional testing; a good functional test can be decomposed into a number of unit tests. And no matter what anyone tells you, the order in which you write them is not important at all. For new code, it's likely easier to start with unit tests and then develop functional tests that likely build on the rest of the unit tests. For existing code, the reverse is true. The principle remains the same in both cases.

Having said that, it is important to note that regardless of what order they're written in, functional and unit tests are complementary. A functional test is a more horizontal test that touches on many different components and exercises many portions of the code. A unit test, on the other hand, is more vertical in that it focuses on a narrow subject and tests far more exhaustively than a functional test would.

How do we express this relationship between functional tests and unit tests? We place them into logical groupings and explicitly specify the dependency. Putting these concepts together gives us the tests shown in Listing 3-13.

Listing 3-13. Dependency between unit and functional tests

@Test(dataProvider = "componentA-data-files", groups="unit-tests")
public void componentAValidateInput(String xml) throws Exception {
  ComponentA component = new ComponentA();
  component.validateDocument(XMLHelper.parseDocument(xml));
  // rest of test code
}

@Test(dataProvider = "componentA.xml", groups = "func-tests",
        dependsOnGroups = "unit-tests")
public void componentAUpdateDatabase(String xml) throws Exception {
  ComponentA component = new ComponentA();
  component.processDocument(xml);
  // rest of test code
}

Here we have two tests, one unit and one functional, both belonging to their respective groups, and the dependency between them is explicitly specified. Our test engine will ensure that they are run in the correct order. We can also use the same approach we did for our functional test to add a unit test that verifies that invalid inputs fail correctly as well.
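That additional unit test could look roughly like the following, reusing the invalid-data provider and assuming the invalid samples are well-formed XML that fails validation:

@Test(dataProvider = "componentA-invalid-data-files",
      groups = "unit-tests",
      expectedExceptions = InvalidTradeException.class)
public void componentAValidateInvalidInput(String xml) throws Exception {
  ComponentA component = new ComponentA();
  component.validateDocument(XMLHelper.parseDocument(xml));
}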

Coping with In-Container Components

One thing we assumed in our test is that the component to be tested can be instantiated easily. Unfortunately, most code out in the real world isn't quite blessed with that convenience. In many cases, the component we dissected earlier would be a Message-Driven Bean (MDB), which runs in an application server. It could also be a servlet or any other managed component that expects a specific environment and cannot be easily instantiated.

We use the term in-container to denote that the code needs to be run and deployed into a container and so requires an expensive and heavy environment. Obviously, this makes testing much trickier and more difficult, and a recurring theme of frameworks like Spring is to always try to abstract away the container dependency, to promote better reuse and code testability.

So, how do we test components in such situations? The answer lies in the same approach we used to get rid of the JMS dependency in our component test. The trick is to refactor the component so that its business functionality is isolated from its environment. The environment is either handled by an external class or injected into the component. For example, if our component were an MDB, we would go through the same steps as we did earlier to get rid of JMS. If it were a servlet, we would use a delegate.

This is not to say that we should always avoid tests that have external dependencies or need to run inside a server. Such tests are complementary to our other unit and functional tests. For example, we do need a test at some point to verify that we don't have a typo in the code that reads a JMS message property, and such a test cannot be written without JMS in place.

The Delegate pattern means that the functionality of the component would have been moved away from the servlet class itself into a POJO that can be easily instantiated and tested. The servlet would act as a delegate, and all it would do is ensure that the actual component receives the correct environment and settings based on the request the servlet receives.
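A hedged sketch of that arrangement for a servlet (class, method, and parameter names are illustrative, not from the book):

public class TradeUploadServlet extends HttpServlet {
  // the POJO holding the real business logic, which the tests exercise directly
  private final ComponentA component = new ComponentA();

  protected void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    try {
      // the servlet's only job: pull what the component needs out of its environment
      component.processDocument(req.getParameter("tradeXml"));
      resp.setStatus(HttpServletResponse.SC_OK);
    } catch (InvalidTradeException e) {
      resp.sendError(HttpServletResponse.SC_BAD_REQUEST, e.getMessage());
    }
  }
}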

Having said that, components are sometimes more intimately tied to their environments and APIs. While it is possible to modify code that, for example, relies on JTA and JNDI, it might be more convenient to simulate that environment in our tests to minimize the testing impact on the code being tested. Chapter 4 will go through a number of Java EE APIs and outline approaches for simulating the correct test environment for them.

Another option worth considering is an in-container test. We will explore this further in Chapter 4. The main concept is that the test engine is embedded in the application server, so we can invoke tests that run in the actual deployment environment and interact with the results remotely.

Putting It All Together

We started out with a transformation component that was written without taking testing into consideration at all. It consisted of a monolithic method where a number of concerns and concepts were intertwined, without any clear separation of responsibility. More informally, it was what's often called messy code.

When we tried to test this code, we immediately ran into hurdles. The test encouraged us to shuffle things about a bit in the component itself to make it more easily testable. The shuffling about (or refactoring, as it's more politely known) resulted in a cleaner code base and a more testable one.

We defined our goals for the test and implemented them one by one. Putting together all our changes, Listing 3-14 shows our modified component.

Listing 3-14. Refactored component

public void onMessage(Message message) {
  TextMessage tm = (TextMessage)message;
  processDocument(tm.getText());
}

public void processDocument(String xml)
                throws InvalidTradeException {
  Document doc = XMLHelper.parseDocument(xml);
  validateDocument(doc);
  // do DB work
}

public void validateDocument(Document doc)
  throws InvalidTradeException {
  // perform constraint checks that can't be captured by XML
}

We also have three test classes in Listing 3-15: one to hold all our functional tests for this component, one to hold the unit tests, and one to act as the Data Provider. Since the providers now live in a separate class, the test annotations reference that class explicitly via the dataProviderClass attribute.

Listing 3-15. Test classes for the component

public class ComponentFunctionalTests {
  private Connection connection;

  @Test(dataProvider = "componentA-data-files",
        dataProviderClass = ComponentDataProvider.class,
        groups = "func-tests",
        dependsOnGroups = "unit-tests")
  public void componentAUpdateDatabase(String xml)
                                  throws Exception {
    ComponentA component = new ComponentA();
    component.processDocument(xml);
    // rest of test code
  }

  @Test(dataProvider = "componentA-invalid-data-files",
        dataProviderClass = ComponentDataProvider.class,
        expectedExceptions = InvalidTradeException.class)
  public void componentAInvalidInput(String xml)
                                 throws Exception {
    ComponentA component = new ComponentA();
    component.processDocument(xml);
    // rest of test code
  }

  @BeforeMethod
  public void connect() throws SQLException {
    connection = DatabaseHelper.getConnection();
    connection.setAutoCommit(false);
  }

  @AfterMethod
  public void rollback() throws SQLException {
    connection.rollback();
  }
}

public class ComponentUnitTests {
  @Test(dataProvider = "componentA-data-files",
        dataProviderClass = ComponentDataProvider.class,
        groups = "unit-tests")
  public void componentAValidateInput(String xml)
                                   throws Exception {
    ComponentA component = new ComponentA();
    component.validateDocument(XMLHelper.parseDocument(xml));
    // rest of test code
  }
}

public class ComponentDataProvider {
  @DataProvider(name = "componentA-invalid-data-files")
  public Iterator<Object[]> loadInvalidXML() throws Exception {
    return getPathContents("samples/ComponentA/trades/invalid");
  }

  @DataProvider(name = "componentA-data-files")
  public Iterator<Object[]> loadXML() throws Exception {
    String path = "samples/ComponentA/trades";
    return getPathContents(path);
  }

  private Iterator<Object[]> getPathContents(String path) {
    File[] files = new File(path).listFiles(new XmlFileFilter());
    final Iterator<File> iter = Arrays.asList(files).iterator();
    return new FileContentsIterator(iter);
  }

  private static class XmlFileFilter implements FileFilter {
    public boolean accept(File file) {
      return !file.isDirectory() &&
               file.getName().endsWith(".xml");
    }
  }

  private static class FileContentsIterator
      implements Iterator<Object[]> {
    private final Iterator<File> iter;

    public FileContentsIterator(Iterator<File> iter) {
      this.iter = iter;
    }

    public boolean hasNext() {
      return iter.hasNext();
    }

    public Object[] next() {
      return new Object[]{IOUtils.readFile(iter.next())};
    }

    public void remove() {
      throw new UnsupportedOperationException();
    }
  }
}

As we can see, the few small refactorings we performed on our component have paid off handsomely, and the code is now clearer and easier to test. Our tests now include a unit test as well as functional tests, both of which will hopefully grow over time.
