Chapter 14. A Roadmap to Effective Test Automation

2. Stateless service objects
• Individual components via component tests
• The entire business logic layer via Layer Tests (page 337)
3. Stateful service objects
• Customer tests via a Service Facade [CJ2EEP] using Subcutaneous Tests (see Layer Test)
• Stateful components via component tests
4. “Hard-to-test” code
• User interface logic exposed via Humble Dialog (see Humble Object on page 695)
• Database logic
• Multi-threaded software
5. Object-oriented legacy software (software built without any tests)
6. Non-object-oriented legacy software
As we move down this list, the software becomes increasingly challenging to test. The irony is that many teams “get their feet wet” by trying to retrofit tests onto an existing application. This puts them in one of the last two categories in this list, which is precisely where the most experience is required. Unfortunately, many teams fail to test the legacy software successfully, which may then prejudice them against trying automated testing, with or without test-driven development. If you find yourself trying to learn test automation by retrofitting tests onto legacy software, I have two pieces of advice for you: First, hire someone who has done it before to help you through this process. Second, read Michael Feathers’ excellent book [WEwLC]; he covers many techniques specifically applicable to retrofitting tests.

Roadmap to Highly Maintainable Automated Tests
Given that some kinds of tests are much harder to write than others, it makes sense to focus on learning to write the easier tests first before we move on to the more difficult kinds of tests. When teaching automated testing to developers, I introduce the techniques in the following sequence. This roadmap is based on Maslow’s hierarchy of needs [HoN], which says that we strive to meet the higher-level needs only after we have satisfied the lower-level needs.


1. Exercise the happy path code
• Set up a simple pre-test state of the SUT
• Exercise the SUT by calling the method being tested
2. Verify direct outputs of the happy path
• Call Assertion Methods (page 362) on the SUT’s responses
• Call Assertion Methods on the post-test state
3. Verify alternative paths
• Vary the SUT method arguments
• Vary the pre-test state of the SUT
• Control indirect inputs of the SUT via a Test Stub (page 529)
4. Verify indirect output behavior
• Use Mock Objects (page 544) or Test Spies (page 538) to intercept and verify outgoing method calls
5. Optimize test execution and maintainability
• Make the tests run faster
• Make the tests easy to understand and maintain
• Design the SUT for testability
• Reduce the risk of missed bugs
This ordering of needs isn’t meant to imply that this is the order in which we might think about implementing any specific test.¹ Rather, it is likely to be the order in which a project team might reasonably expect to learn about the techniques of test automation.

¹ Although it can also be used that way, I find it better to write the assertions first and then work back from there.

Let’s look at each of these points in more detail.

Exercise the Happy Path Code
To run the happy path through the SUT, we must automate one Simple Success Test (see Test Method on page 348) as a simple round-trip test through the SUT’s API. To get this test to pass, we might simply hard-code some of the logic in the SUT, especially where it might call other components to retrieve information it needs to make decisions that would drive the test down the happy path. Before exercising the SUT, we need to set up the test fixture by initializing the SUT to the pre-test state. As long as the SUT executes without raising any errors, we consider the test as having passed; at this level of maturity we don’t check the actual results against the expected results.
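
To make this concrete, here is a minimal JUnit 4 sketch of such a round-trip test. The FlightBookingFacade class and its addFlight and bookFlight methods are hypothetical stand-ins for whatever API the SUT exposes:

import org.junit.Test;

public class FlightBookingHappyPathTest {

    @Test
    public void testBookFlight_happyPath() throws Exception {
        // Fixture setup: initialize the SUT to a simple pre-test state
        FlightBookingFacade sut = new FlightBookingFacade();
        sut.addFlight("YYC", "LAX");   // hypothetical setup call

        // Exercise the SUT by calling the method being tested; at this level
        // of maturity the test passes as long as no error is raised
        sut.bookFlight("YYC", "LAX", "Jane Doe");
    }
}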

Verify Direct Outputs of the Happy Path
Once the happy path is executing successfully, we can add result verification logic to turn our test into a Self-Checking Test (see page 26). This involves adding calls to Assertion Methods to compare the expected results with what actually occurred. We can easily make this change for any objects or values returned to the test by the SUT (e.g., “return values,” “out parameters”). We can also call other methods on the SUT or use public fields to access the post-test state of the SUT; we can then call Assertion Methods on these values as well.
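
Building on the earlier sketch, the same hypothetical facade can now be checked with Assertion Methods on both the returned object and the post-test state; the Booking class and the getBookingCount method are likewise assumed names:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class FlightBookingResultVerificationTest {

    @Test
    public void testBookFlight_confirmsBookingForPassenger() throws Exception {
        FlightBookingFacade sut = new FlightBookingFacade();
        sut.addFlight("YYC", "LAX");

        // Exercise the SUT
        Booking booking = sut.bookFlight("YYC", "LAX", "Jane Doe");

        // Verify the direct output (the returned object) ...
        assertTrue("booking should be confirmed", booking.isConfirmed());
        assertEquals("Jane Doe", booking.getPassengerName());

        // ... and the post-test state of the SUT
        assertEquals(1, sut.getBookingCount());
    }
}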

Verify Alternative Paths
At this point the happy path through the code is reasonably well tested. The alternative paths through the code are still Untested Code (see Production Bugs on page 268), so the next step is to write tests for these paths (whether we have already written the production code or we are striving to automate the tests that would drive us to implement them). The question to ask here is “What causes the alternative paths to be exercised?” The most common causes are as follows:

• Different values passed in by the client as arguments
• Different prior state of the SUT itself
• Different results of invoking methods on components on which the SUT depends

The first case can be tested by varying the logic in our tests that calls the SUT methods we are exercising and passing in different values as arguments. The second case involves initializing the SUT with a different starting state. Neither of these cases requires any “rocket science.” The third case, however, is where things get interesting.
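
To illustrate the first two cases, a sketch might look like the following; the UnknownRouteException and the facade methods are hypothetical assumptions carried over from the earlier examples:

import org.junit.Test;

public class FlightBookingAlternativePathTest {

    // Alternative path driven by a different argument value: an unknown destination
    @Test(expected = UnknownRouteException.class)
    public void testBookFlight_unknownDestination() throws Exception {
        FlightBookingFacade sut = new FlightBookingFacade();
        sut.addFlight("YYC", "LAX");

        sut.bookFlight("YYC", "ZZZ", "Jane Doe");   // no such route exists
    }

    // Alternative path driven by a different pre-test state: no flights defined at all
    @Test(expected = UnknownRouteException.class)
    public void testBookFlight_noFlightsDefined() throws Exception {
        FlightBookingFacade sut = new FlightBookingFacade();   // note: no addFlight call

        sut.bookFlight("YYC", "LAX", "Jane Doe");
    }
}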

Controlling Indirect Inputs
Because the responses from other components are supposed to cause the SUT to exercise the alternative paths through the code, we need to get control over these indirect inputs. We can do so by using a Test Stub that returns the value that should drive the SUT into the desired code path. As part of fixture setup, we must force the SUT to use the stub instead of the real component.

The Test Stub can be built two ways: as a Hard-Coded Test Stub (see Test Stub), which contains hand-written code that returns the specific values, or as a Configurable Test Stub (see Test Stub), which is configured by the test to return the desired values. In both cases, the SUT must use the Test Stub instead of the real component.

Many of these alternative paths result in “successful” outputs from the SUT; these tests are considered Simple Success Tests and use a style of Test Stub called a Responder (see Test Stub). Other paths are expected to raise errors or exceptions; they are considered Expected Exception Tests (see Test Method) and use a style of stub called a Saboteur (see Test Stub).
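
The sketch below shows a hand-coded Responder that drives the SUT down a “flight is full” path and a hand-coded Saboteur that raises an error. It assumes the hypothetical facade accepts its depended-on SeatAvailabilityService through its constructor and that ServiceUnavailableException is an unchecked exception:

import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class IndirectInputTest {

    // Hard-Coded Test Stub acting as a Responder: always reports the flight as full
    private static class FullFlightStub implements SeatAvailabilityService {
        public int seatsAvailable(String flightNumber) {
            return 0;
        }
    }

    // Hard-Coded Test Stub acting as a Saboteur: simulates a failing component
    private static class UnreachableServiceStub implements SeatAvailabilityService {
        public int seatsAvailable(String flightNumber) {
            throw new ServiceUnavailableException("simulated outage");
        }
    }

    @Test
    public void testBookFlight_flightFull() throws Exception {
        // Force the SUT to use the stub instead of the real component
        FlightBookingFacade sut = new FlightBookingFacade(new FullFlightStub());
        Booking booking = sut.bookFlight("YYC", "LAX", "Jane Doe");
        assertFalse("booking should not be confirmed", booking.isConfirmed());
    }

    @Test(expected = ServiceUnavailableException.class)
    public void testBookFlight_availabilityServiceDown() throws Exception {
        FlightBookingFacade sut = new FlightBookingFacade(new UnreachableServiceStub());
        sut.bookFlight("YYC", "LAX", "Jane Doe");
    }
}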

Making Tests Repeatable and Robust
The act of replacing a real depended-on component (DOC) with a Test Stub has a very desirable side effect: It makes our tests both more robust and more repeatable.² By using a Test Stub, we replace a possibly nondeterministic component with one that is completely deterministic and under test control. This is a good example of the Isolate the SUT principle (see page 43).

² See Robust Test (see page 29) and Repeatable Test (see page 26) for a more detailed description.

Verify Indirect Output Behavior
Thus far we have focused on getting control of the indirect inputs of the SUT and verifying readily visible direct outputs by inspecting the post-test state of the SUT. This kind of result verification is known as State Verification (page 462). Sometimes, however, we cannot confirm that the SUT has behaved correctly simply by looking at the post-test state. That is, we may still have some Untested Requirements (see Production Bugs) that can only be verified by doing Behavior Verification (page 468).

We can build on what we already know how to do by using one of the close relatives of the Test Stub to intercept the outgoing method calls from our SUT. A Test Spy “remembers” how it was called so that the test can later retrieve the usage information and use Assertion Method calls to compare it to the expected usage. A Mock Object can be loaded with expectations during fixture setup, which it subsequently compares with the actual calls as they occur while the SUT is being exercised.
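
A hand-rolled Test Spy for a hypothetical MailService collaborator might look like this; the constructor injection and the method names are assumptions made for the sake of illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;

public class IndirectOutputTest {

    // Test Spy: records the outgoing calls so the test can verify them afterward
    private static class MailServiceSpy implements MailService {
        final List<String> confirmationsSentTo = new ArrayList<String>();

        public void sendConfirmation(String passengerName) {
            confirmationsSentTo.add(passengerName);
        }
    }

    @Test
    public void testBookFlight_sendsExactlyOneConfirmation() throws Exception {
        MailServiceSpy mailSpy = new MailServiceSpy();
        FlightBookingFacade sut = new FlightBookingFacade(mailSpy);   // inject the spy
        sut.addFlight("YYC", "LAX");

        sut.bookFlight("YYC", "LAX", "Jane Doe");

        // Behavior Verification: assert on the calls the SUT made to its collaborator
        assertEquals(1, mailSpy.confirmationsSentTo.size());
        assertEquals("Jane Doe", mailSpy.confirmationsSentTo.get(0));
    }
}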

Optimize Test Execution and Maintenance
At this point we should have automated tests for all the paths through our code. We may, however, have less than optimal tests:

• We may have Slow Tests (page 253).
• The tests may contain Test Code Duplication (page 213) that makes them hard to understand.
• We may have Obscure Tests (page 186) that are hard to understand and maintain.
• We may have Buggy Tests (page 260) that are caused by unreliable Test Utility Methods (page 599) or Conditional Test Logic (page 200).

Make the Tests Run Faster
Slow Tests is often the first behavior smell we need to address. To make tests run faster, we can reuse the test fixture across many tests, for example by using some form of Shared Fixture (page 317). Unfortunately, this tactic typically produces its own share of problems. Replacing a DOC with a Fake Object (page 551) that is functionally equivalent but executes much faster is almost always a better solution. Use of a Fake Object builds on the techniques we learned for verifying indirect inputs and outputs.
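
For example, a slow database DOC might be replaced by an in-memory Fake Object that is functionally equivalent for the purposes of the tests; the BookingRepository interface and the methods on Booking are hypothetical:

import java.util.HashMap;
import java.util.Map;

// Fake Object: a lightweight, functionally equivalent stand-in for a slow database DOC
public class InMemoryBookingRepository implements BookingRepository {
    private final Map<String, Booking> bookingsById = new HashMap<String, Booking>();

    public void save(Booking booking) {
        bookingsById.put(booking.getId(), booking);
    }

    public Booking findById(String id) {
        return bookingsById.get(id);
    }
}

Tests would install the fake during fixture setup in the same way they install a Test Stub, so no further changes to the test logic are needed.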

Make the Tests Easy to Understand and Maintain
We can make Obscure Tests easier to understand and remove a lot of Test Code Duplication by refactoring our Test Methods to call Test Utility Methods that contain any frequently used logic, instead of doing everything in-line. Creation Methods (page 415), Custom Assertions (page 474), Finder Methods (see Test Utility Method), and Parameterized Tests (page 607) are all examples of this approach.

If our Testcase Classes (page 373) are getting too big to understand, we can reorganize these classes around fixtures or features. We can also better communicate our intent by using a systematic way of naming Testcase Classes and Test Methods that exposes the test conditions we are verifying in them.
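
The following sketch applies a Creation Method and a Custom Assertion to the earlier hypothetical booking example; the helper names are illustrative rather than prescribed:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BookingMaintainabilityTest {

    @Test
    public void testBookFlight_confirmsBooking() throws Exception {
        FlightBookingFacade sut = createFacadeWithOneFlight("YYC", "LAX");   // Creation Method

        Booking booking = sut.bookFlight("YYC", "LAX", "Jane Doe");

        assertBookingConfirmedFor(booking, "Jane Doe");                      // Custom Assertion
    }

    // Creation Method: hides the fixture-construction detail every test would otherwise repeat
    private FlightBookingFacade createFacadeWithOneFlight(String from, String to) {
        FlightBookingFacade facade = new FlightBookingFacade();
        facade.addFlight(from, to);
        return facade;
    }

    // Custom Assertion: gives the expected outcome an intent-revealing name
    private void assertBookingConfirmedFor(Booking booking, String passengerName) {
        assertTrue("booking should be confirmed", booking.isConfirmed());
        assertEquals(passengerName, booking.getPassengerName());
    }
}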

Reduce the Risk of Missed Bugs
If we are having problems with Buggy Tests or Production Bugs, we can reduce the risk of false negatives (tests that pass when they shouldn’t) by encapsulating complex test logic. When doing so, we should use intent-revealing names for our Test Utility Methods. We should verify the behavior of nontrivial Test Utility Methods using Test Utility Tests (see Test Utility Method).
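
As an illustration, a Test Utility Test for the Custom Assertion above might look like the following, assuming the assertion has been extracted into a hypothetical BookingAssertions utility class and that Booking has a constructor taking a passenger name and a confirmed flag:

import static org.junit.Assert.fail;
import org.junit.Test;

public class BookingAssertionsTest {

    // Test Utility Test: proves the Custom Assertion fails when it should,
    // reducing the risk that a buggy assertion lets real defects slip through
    @Test
    public void testAssertBookingConfirmedFor_failsForUnconfirmedBooking() {
        Booking unconfirmed = new Booking("Jane Doe", false);   // hypothetical constructor

        try {
            BookingAssertions.assertBookingConfirmedFor(unconfirmed, "Jane Doe");
            fail("Custom Assertion should have failed for an unconfirmed booking");
        } catch (AssertionError expected) {
            // the Custom Assertion correctly detected the problem
        }
    }
}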

What’s Next?
This chapter concludes Part I, The Narratives. Chapters 1–14 have provided an overview of the goals, principles, philosophies, patterns, smells, and coding idioms related to writing effective automated tests. Part II, The Test Smells, and Part III, The Patterns, contain detailed descriptions of each of the smells and patterns introduced in these narrative chapters, complete with code samples.

PART II

The Test Smells

Chapter 15. Code Smells

Smells in This Chapter
Obscure Test (page 186)
Conditional Test Logic (page 200)
Hard-to-Test Code (page 209)
Test Code Duplication (page 213)
Test Logic in Production (page 217)


Obscure Test
It is difficult to understand the test at a glance.

Also known as: Long Test, Complex Test, Verbose Test

Automated tests should serve at least two purposes. First, they should act as documentation of how the system under test (SUT) should behave; we call this Tests as Documentation (see page 23). Second, they should be a self-verifying executable specification. These two goals are often contradictory because the level of detail needed for tests to be executable may make the test so verbose as to be difficult to understand.

Symptoms
We are having trouble understanding what behavior a test is verifying.

Impact
The first issue with an Obscure Test is that it makes the test harder to understand and therefore maintain. It will almost certainly preclude achieving Tests as Documentation, which in turn can lead to High Test Maintenance Cost (page 265). The second issue with an Obscure Test is that it may allow bugs to slip through because of test coding errors hidden in the Obscure Test. This can result in Buggy Tests (page 260). Furthermore, a failure of one assertion in an Eager Test may hide many more errors because the remaining assertions are simply never run, leading to a loss of test debugging data.

Causes
Paradoxically, an Obscure Test can be caused by either too much information in the Test Method (page 348) or too little information. Mystery Guest is an example of too little information; Eager Test and Irrelevant Information are examples of too much information.

The root cause of an Obscure Test is typically a lack of attention to keeping the test code clean and simple. Test code is just as important as the production code, and it needs to be refactored just as often. A major contributor to an Obscure Test is a “just do it in-line” mentality when writing tests. Putting code in-line results in large, complex Test Methods because some things just take a lot of code to do.

The first few causes of Obscure Test discussed here relate to having the wrong information in the test: