Testing

Acceptance Test Phase
Testing is probably the part that will vary most between different organizations.

In the ideal Scrum world, a sprint results in a potentially deployable version of your system. So just deploy it, right? Wrong! Usually this doesn't work. If quality has any sort of value to you, some kind of manual acceptance testing phase is required.

These testers are not part of the team and will come up with tests that the Scrum team couldn't think of, or didn't have time to do, or didn't have the hardware to do. The testers access the system in exactly the same way as the end users, which means the tests must be done manually (assuming your system is for human users).

The test team will find bugs, the Scrum team will have to do bug-fix releases, and sooner or later you will be able to release a bug-fixed version 1.0.1 to the end users, rather than the shaky version 1.0.0.

Acceptance test phase refers to the whole period of testing, debugging, and re-releasing until there's a version good enough for production release.

Minimize the Acceptance Test Phase
Although we can't get rid of the acceptance test phase, we can (and do) try to minimize it. More specifically, we try to minimize the amount of time needed for the acceptance test phase. This is done by:
 * Maximizing the quality of the code delivered from the Scrum team.
 * Maximizing the efficiency of the manual test work (i.e. finding the best testers, giving them the best tools, and making sure they report time-wasting tasks that could be automated).

So how do we maximize the quality of the code delivered from the Scrum team? Here are two ways that we find work very well:
 * Put testers in the Scrum team.
 * Do less per sprint.

Increase quality by putting testers in the Scrum team
Scrum teams are supposed to be role-less, so what is meant by "tester" in this case is a guy whose primary skill is testing, not a guy whose role is to do only testing.

Developers are often quite lousy testers. Especially developers testing their own code. This tester has an important job as a signoff guy. Nothing is considered "done" in a sprint until he says it's done.

Once the tester has tested a feature, he should go through the "done" checklist (if you have one) with the developer. A nice side effect of this is that the team now has a guy who is perfectly suited to organize the sprint demo.

When there's nothing to test, the tester should be preparing for tests. That is, writing test specs, preparing a test environment, etc. So when a developer has something that is ready to test, there should be no waiting.

If the team is doing TDD then people spend time writing test code from day 1. The tester should pair-program with developers that are writing test code. A good tester usually comes up with different types of tests than a good developer does, so they complement each other.
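As a sketch of how tester-written and developer-written tests tend to complement each other, consider a hypothetical `parse_amount` function (the function and all test names below are invented for illustration; Python is used only as an example language):

```python
# Hypothetical example: a developer and a tester writing tests for
# the same function. None of these names come from a real codebase.

def parse_amount(text):
    """Parse a monetary amount like '1,234.50' into a float."""
    return float(text.replace(",", ""))

# Developer-style test: the happy path.
def test_parses_plain_amount():
    assert parse_amount("1234.50") == 1234.50

# Tester-style tests: input variations and edge cases that a
# developer testing his own code often skips.
def test_parses_amount_with_thousands_separator():
    assert parse_amount("1,234.50") == 1234.50

def test_blank_input_is_rejected():
    try:
        parse_amount(" ")
        assert False, "expected a ValueError"
    except ValueError:
        pass  # blank input correctly rejected

test_parses_plain_amount()
test_parses_amount_with_thousands_separator()
test_blank_input_is_rejected()
```

The point is not the particular function but the division of attention: the developer verifies that the feature works, while the tester probes the inputs nobody planned for.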

If the team is not doing TDD, he should simply do whatever he can to help the team achieve the sprint goal. When breaking down stories into tasks during the sprint planning meeting, the team tends to focus on programming tasks. Spend time trying to identify the non-programming tasks during the sprint planning phase.

Examples of non-programming tasks that often need to be done in a sprint:
 * Set up a test environment.
 * Clarify requirements.
 * Discuss deployment details with operations.
 * Write deployment documents (release notes, RFC, or whatever your organization does).
 * Contact with external resources (GUI designers for example).
 * Improve build scripts.
 * Further breakdown of stories into tasks.
 * Identify key questions from the developers and get them answered.

If the tester becomes the bottleneck, make everybody in the team into the tester's assistants. He decides which stuff he needs to do himself, and delegates grunt testing to the rest of the team. That's what cross-functional teams are all about!

Increase quality by doing less per sprint
If you have quality problems, or long acceptance test cycles, do less per sprint. This will almost automatically lead to higher quality, shorter acceptance test cycles, fewer bugs affecting end users, and higher productivity in the long run. This is because the team can focus on new stuff all the time rather than fixing old stuff that keeps breaking.

It's almost always cheaper to build less, but build it stable, rather than to build lots of stuff and then have to do panic hot-fixes.

Should Acceptance Testing Be Part of the Sprint?
A sprint is time-boxed. Acceptance testing is very difficult to time-box. What if time runs out and you still have a critical bug? Are you going to release to production with a critical bug? Are you going to wait until next sprint? In most cases both solutions are unacceptable. So we leave manual acceptance testing outside.

Sprint Cycles vs. Acceptance Test Cycles
In a perfect Scrum world you don't need acceptance test phases since each Scrum team releases a new production-ready version of your system after each sprint.

A more realistic view is that after sprint 1, a buggy version 1.0.0 is released. During sprint 2, bug reports start pouring in and the team spends most of its time debugging and is forced to do a mid-sprint bug-fix release 1.0.1. Then at the end of sprint 2 they release a new feature-version 1.1.0, which of course is even buggier since they had even less time to get it right this time due to all the disturbances from the last release, etc, etc...

The sad thing is that the problem remains even if you have an acceptance test team. The only difference is that most of the bug reports will come from the test team instead of from angry end users. That's a huge difference from a business perspective, but for developers it amounts to almost the same thing.

First of all, again, maximize the quality of the code that the Scrum team releases. The cost of finding and fixing bugs early, within a sprint, is just so extremely low compared to the cost of finding and fixing bugs afterwards.

Still there will be bug reports coming after a sprint is complete. How to deal with that?

Approach 1: "Don't start building new stuff until the old stuff is in production"
This will lead to a non time-boxed release period between sprints, where we do only testing and debugging until we can make a production release.

Approach 2: "OK to start building new stuff, but prioritize getting the old stuff into production"
This is our preferred approach. Right now at least.

Basically, when we finish a sprint we move on to the next one. But we expect to be spending some time in the next sprint fixing bugs from the last sprint. If the next sprint gets severely damaged because we had to spend so much time fixing bugs from the previous sprint, we evaluate why this happened and how we can improve quality. We make sure sprints are long enough to survive a fair amount of bug fixing from the previous sprint.

Gradually, over a period of many months, the amount of time spent fixing bugs from previous sprints decreased. In addition, we were able to get fewer people involved when bugs did happen, so that the whole team didn't need to get disturbed each time.

During sprint planning meetings we set the focus factor low enough to account for the time we expect to spend fixing bugs from the last sprint. With time, the teams have gotten quite good at estimating this. The velocity metric helps a lot.
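As a rough sketch of what that planning arithmetic can look like (the team size, sprint length, and focus factor below are invented numbers, purely for illustration):

```python
# Illustrative sprint-capacity arithmetic. All numbers are made up.

team_size = 6
sprint_days = 15                               # three-week sprint
available_man_days = team_size * sprint_days   # 90 man-days

# Focus factor deliberately set low, leaving slack for bug
# fixing carried over from the previous sprint.
focus_factor = 0.5

estimated_velocity = available_man_days * focus_factor

print(estimated_velocity)  # 45.0 story points
```

Comparing `estimated_velocity` against the actual velocity measured at the end of the sprint is what lets the team tune the focus factor over time.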

Don't Outrun the Slowest Link in Your Chain
Let's say that the tester is the slowest link. It is always tempting for managers or product owners to schedule development of, say, 6 new features per week.

Don't! Reality will catch up to you one way or another, and it will hurt. Instead, schedule 3 new features per week and spend the rest of the time alleviating the testing bottleneck. For example:
 * Have a few developers work as testers instead.
 * Implement tools and scripts that make testing easier.
 * Add more automated test code.
 * Increase sprint length and include acceptance testing in the sprint.
 * Define some sprints as "test sprints" where the whole team works as an acceptance test team.
 * Hire more testers (even if that means removing developers).
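For the "tools and scripts that make testing easier" option, even something as small as a smoke-test script can save the testers a round of repetitive clicking. A minimal sketch, assuming a web system with a handful of key pages (the URLs below are hypothetical):

```python
# Hypothetical smoke-test script: hit a few key URLs and report any
# that fail to respond with HTTP 200. The endpoints are invented.

import urllib.request

SMOKE_URLS = [
    "http://localhost:8080/health",
    "http://localhost:8080/login",
]

def smoke_test(urls):
    """Return a list of (url, problem) pairs for failing endpoints."""
    failures = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status != 200:
                    failures.append((url, resp.status))
        except OSError as err:   # connection refused, timeout, etc.
            failures.append((url, err))
    return failures

if __name__ == "__main__":
    for url, problem in smoke_test(SMOKE_URLS):
        print(f"FAIL {url}: {problem}")
```

A script like this doesn't replace the manual acceptance testing; it just catches the "the build doesn't even start" class of problems before a tester wastes an afternoon on them.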

Retrospectives are a good forum for identifying the slowest link in the chain.

Back to Reality
I’ve probably given you the impression that we have testers in all Scrum teams, that we have a huge acceptance test team for each product, that we release after each sprint, etc, etc.

Well, we don’t.

We’ve sometimes managed to do this stuff, and we’ve seen the positive effects of it. But we are still far from an acceptable quality assurance process, and we still have a lot to learn there.