The initial statement of Anko Tijman (@agiletesternl) in his talks was that agile testing is about mitigating risk; therefore a test strategy should focus on risks in:
- code
- interfaces
- requirements
- architecture
- business acceptance
He also raised the point that "potentially shippable software" does not necessarily mean "maintainable software", and that we should aim for the latter as well.
For him, risk is what the tester thinks about, which leads to the question: do confirmative tests (like unit tests and acceptance tests) mitigate the product risk?
The conclusion is that risks should be captured in different test types and test levels; a balanced test strategy should consider the following items:
- Test cases with user stories
- Unit & Integration Tests
- Non-functional tests
- Exploratory Testing
- Customer’s acceptance
Mitigating risk in requirements
The value of user stories should be evaluated by questioning the story: What? Who? Why?
Why should you create test cases for your user stories?
- Beginning with the end in mind defines the desired end state
- "Limiting" the story leads you to what is "good enough"
- A joint understanding lets everybody know what the feature is about
Techniques and Tools
- Acceptance Test Driven Development (ATDD), e.g. FitNesse
- Behaviour Driven Development (BDD), e.g. Cucumber
- Specification by Example
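These tools all formalize the same Given/When/Then structure. As a minimal sketch in plain Python (the shopping-cart domain here is invented for illustration, not from the talk):

```python
# Hypothetical acceptance test showing the Given/When/Then structure that
# tools like Cucumber and FitNesse formalize. ShoppingCart is an invented
# example domain object.

class ShoppingCart:
    """Minimal example domain object for the illustration."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total_reflects_added_items():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    # Then the total is the sum of both prices
    assert cart.total == 14.00


test_cart_total_reflects_added_items()
```

Written in a tool like Cucumber, the Given/When/Then comments would become the executable specification itself, readable by the whole team.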
A solid foundation
A solid foundation enables a potentially shippable product that is easy to change. Therefore tests should be run frequently and cover most of the business logic, so that most defects are found early.
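Frequent feedback comes from tests that are cheap to run on every build. A sketch of such a fast unit test (the discount rule is invented for illustration):

```python
# A hypothetical fast unit test for a piece of business logic. The bulk
# discount rule is invented for illustration; tests this cheap can run on
# every build and cover most of the business logic.

def bulk_discount(quantity, unit_price):
    """10% off when ordering 10 or more units (example rule)."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total


def test_bulk_discount():
    assert bulk_discount(5, 2.0) == 10.0   # below threshold: no discount
    assert bulk_discount(10, 2.0) == 18.0  # at threshold: 10% off


test_bulk_discount()
```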
Architecture and Maintainability
Risk in architecture and maintainability should be mitigated as well. To reach that goal, maintainability should be "built in" during the iteration, not at the end of the project!
All non-functional testing should likewise be done within the iteration. If that's not completely feasible, try applying the Pareto principle: spend 20% of the effort to cover 80% of the test area; this already creates a good feedback loop.
In addition: it is better to measure "something" reasonable (!) than not to measure at all. For example, when the production database has 2,000,000 records and your normal test database has only 2,000, performance testing with 100,000 records is still better than doing no performance testing at all.
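A sketch of such a scaled-down measurement: the record counts come from the example above, while the schema and query are invented for illustration.

```python
# Scaled-down performance check: 100,000 records instead of the 2,000,000
# in production -- still enough to expose a missing index or a slow query.
# Table schema and query are invented for this sketch.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Load 100,000 records (far more than the 2,000 in the normal test DB).
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    ((float(i % 500),) for i in range(100_000)),
)
conn.commit()

start = time.perf_counter()
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
elapsed = time.perf_counter() - start

print(f"aggregated 100,000 records in {elapsed:.3f}s")
```

The absolute numbers will not match production, but the trend (and any regression between builds) is already a reasonable measurement.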
Unexpected Behaviour
One methodology to mitigate the risk of unexpected behaviour is Exploratory Testing: it enables simultaneous learning, test design, and test execution, and reveals new information along the way. The difference between "testing" and "checking" (cf. @michaelbolton) should be kept in mind.
This is useful for things that you can't or don't want to automate, or when (a piece of) software must not remain untested, e.g. quality attributes, specifications, etc.
What does Exploratory Testing look like? Decisions to be made are the area, the attribute, and a single or paired setup. For the charter you need to define scope, purpose, length (which corresponds to the coverage), and the actions to be taken (what is the feedback loop?).
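The charter elements listed above can be sketched as a simple data structure; the field names follow the text, while the example values are invented.

```python
# Sketch of an exploratory-testing charter with the elements named in the
# text (scope, purpose, length, actions). All example values are invented.
from dataclasses import dataclass, field

@dataclass
class Charter:
    scope: str                 # area and attribute under exploration
    purpose: str               # the question the session should answer
    length_minutes: int        # session length, corresponding to coverage
    actions: list = field(default_factory=list)  # the feedback loop

session = Charter(
    scope="checkout flow, error handling",
    purpose="find unexpected behaviour around invalid payment data",
    length_minutes=60,
    actions=["take notes", "log defects", "debrief with the pair"],
)
print(session.scope)
```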
Stakeholder Expectations
Mitigating risks in stakeholder expectations is about acceptance. There are often two kinds of acceptance: the informal feedback given by the users of the software, and the formal confirmation given by the stakeholders. Acceptance tests very often focus only on formal confirmation.
Informal feedback is about building trust, understanding, and responding to change, but also about learning, since the team’s mental model is growing.
Formal confirmation is about acceptance criteria, production readiness, and maintainability.
Senior users give informal feedback, user groups do business acceptance, and stakeholders give formal confirmation.
Both are needed, and often the informal feedback is needed to get the formal confirmation.