The idea (as well as a draft) for this blog post has been around for quite a while now. What finally pushed me to finish and publish it was a talk with Lasse Koskela (@lassekoskela) at the Agile Testing Days 2011, where I told him about our team's "group testing" sessions, and he insisted that I had to write about them. So thanks, Lasse, for making me finish and finally publish this article.
In our team, the manual regression tests consist of a list of business use cases which are executed manually by all team members once before every four-weekly release.
Each use case is written down on an index card, and all the cards are kept in a box. When the regression test starts, everybody pulls a card from the box and executes the scenario described on it; once finished, they pull the next card, and regression testing is done when the box is empty. Afterwards we hold a debrief meeting where the findings are described and classified, and we decide which ones need to be fixed before we can release.
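Just to illustrate the mechanics of the session (this is a toy sketch, not real tooling; all names and cards are made up, and "executing" a card is of course a manual step in reality):

```python
import random

# Hypothetical stub: in reality a person works through the scenario by hand;
# here we simply flag one known-broken scenario as a finding.
def execute(card):
    return "login fails" if card == "Login" else None

def run_regression_session(box, testers):
    """Each tester pulls a card, executes it, then pulls the next one;
    the session ends when the box is empty."""
    findings = []
    box = list(box)
    random.shuffle(box)
    while box:
        for tester in testers:
            if not box:
                break
            card = box.pop()              # pull a card from the box
            result = execute(card)
            if result:                    # a finding for the debrief
                findings.append((tester, card, result))
    return findings

findings = run_regression_session(
    ["Login", "Checkout", "Search"], ["Anna", "Ben"])
print(findings)  # the findings are then classified in the debrief meeting
```

The box acts as a simple work queue: nobody is assigned specific cards up front, so the work distributes itself across whoever is free.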
New scenarios are added to the box in parallel with the development of new features. Of course we're trying to automate as much as possible; we have a lot of unit tests and FitNesse tests, executed hourly by our continuous build system. But since it isn't possible to automate everything, and human testing (or exploratory testing) is still needed in any case, the team takes up this challenge and deals with it.
I think some things are worth mentioning separately:
- This all sounds quite similar to an approach called "Session-Based Testing", which I first heard details about, also at the Agile Testing Days 2011, in a talk by Mason Womack. One difference is that we don't have a fixed timebox: we're done when the box is empty. So our test-end criterion is not a timebox but a pre-defined list of tests to be executed. We also don't communicate much during the session; Mason pointed out that either all the people are sitting in one room (which is the case for us), or at least they're talking via chat (Skype or whatever) about their findings.
- The team came up with the practice of running these test sessions long before a dedicated tester (or better: a team member with a strong background in software testing) joined the team. In other words: all the team members have a strong programming background and a strong sense for quality, and when they saw the need for these manual regression test sessions, they took up the challenge and added them to the software development and release process.
- Although some people strongly dislike this manual testing (it's dull, it's boring, it's repeating the same tasks over and over), they are fully aware of the need for it and therefore participate in it. And as far as I can see, they're not "just" clicking through the tests to get them done; they really want to make findings and learn about the state of quality of the product, in order to know whether it is "shippable" or not.
I'm really proud and lucky to work in such a great team!