
6. Why do most software tests suck?

This lesson points out a couple of important but seldom-discussed reasons why most collections of software tests in the world today leave a great deal to be desired.

Software tests repeat themselves far more than they need to.

  • Imagine someone standing at one end of a minefield who absolutely has to walk through it. Following in the footsteps of someone who had already made it safely across would be a sound strategy.
  • As James Bach has pointed out, in software testing the opposite is almost always true. Following "in the footsteps" of test scripts that have already been executed is usually an absolutely terrible way to find defects and an equally terrible way to learn more about whatever it is you're testing.
  • Despite this, software tests repeat one another far more than they need to, often with disastrous effects on both the efficiency and the effectiveness of test execution. The sketch below shows how much coverage that repetition costs.
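To make the cost of repetition concrete, here is a minimal Python sketch. The parameter names and test cases are hypothetical, invented purely for illustration; the point is that two suites of identical size can differ sharply in how many distinct combinations of values they actually exercise.

    from itertools import combinations

    def pairs_covered(tests):
        """Return every distinct pair of parameter-value settings that
        appears together in at least one test in the suite."""
        covered = set()
        for test in tests:
            covered.update(combinations(sorted(test.items()), 2))
        return covered

    # A repetitive suite: every script re-walks the same "safe path"
    # (Chrome + Visa) and varies only one parameter at a time.
    repetitive = [
        {"Browser": "Chrome",  "Payment": "Visa",   "User": "New"},
        {"Browser": "Chrome",  "Payment": "Visa",   "User": "Returning"},
        {"Browser": "Firefox", "Payment": "Visa",   "User": "New"},
        {"Browser": "Chrome",  "Payment": "PayPal", "User": "New"},
    ]

    # A varied suite of the same size: each script deliberately combines
    # values that have not yet appeared together.
    varied = [
        {"Browser": "Chrome",  "Payment": "Visa",   "User": "New"},
        {"Browser": "Chrome",  "Payment": "PayPal", "User": "Returning"},
        {"Browser": "Firefox", "Payment": "Visa",   "User": "Returning"},
        {"Browser": "Firefox", "Payment": "PayPal", "User": "New"},
    ]

    print(len(pairs_covered(repetitive)))  # 9 of 12 possible pairs
    print(len(pairs_covered(varied)))      # 12 of 12 possible pairs

Same number of test scripts, same execution cost; the repetitive suite simply learns less per script.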

Gut feel and guesswork are used to decide which tests get executed and which potential tests never get executed. More effective prioritization approaches are ignored.

  • You can't test everything, so what SHOULD you test? When deciding which tests will be executed and which potential tests will not be, test designers almost always miss the opportunity to use a scientific, well-reasoned approach to guide their decisions. On most testing projects, "gut feel" plays a large role in what actually gets tested.
  • Highly effective methods of prioritizing software tests (i.e., deciding which test scripts get executed and which potential test scripts do not) are completely ignored.
  • This is largely because relatively few software testers are aware of these well-proven prioritization methods. The sketch after this list shows one such method in miniature.
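The lesson doesn't name a specific method at this point, so the following is only one well-proven example: coverage-based prioritization, where each successive test is chosen because it exercises the most parameter-value pairs not yet covered. This is a minimal Python sketch; the parameters, values, and the prioritize helper are all hypothetical illustrations, not an API from any particular tool.

    from itertools import combinations

    def new_pairs(test, covered):
        """Parameter-value pairs in this test not yet covered by earlier picks."""
        return set(combinations(sorted(test.items()), 2)) - covered

    def prioritize(candidates):
        """Greedy coverage-based prioritization: repeatedly pick the test
        that adds the most not-yet-covered parameter-value pairs."""
        remaining = list(candidates)
        covered, ordered = set(), []
        while remaining:
            best = max(remaining, key=lambda t: len(new_pairs(t, covered)))
            covered |= new_pairs(best, covered)
            ordered.append(best)
            remaining.remove(best)
        return ordered

    # Hypothetical candidate pool; in practice this might be hundreds of scripts.
    candidates = [
        {"Browser": "Chrome",  "Payment": "Visa",   "User": "New"},
        {"Browser": "Chrome",  "Payment": "Visa",   "User": "Returning"},
        {"Browser": "Chrome",  "Payment": "PayPal", "User": "New"},
        {"Browser": "Firefox", "Payment": "PayPal", "User": "Returning"},
        {"Browser": "Firefox", "Payment": "Visa",   "User": "New"},
    ]

    for rank, test in enumerate(prioritize(candidates), start=1):
        print(rank, test)

If the schedule only allows executing the first few scripts, running them in this order means the subset you do execute covers far more distinct combinations than an arbitrarily ordered list typically would.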