FAQ - Agile Test Planning Primer

In talking to companies about potentially adopting Agile, there is a lot of confusion about the idea of cross functional teams.  In particular, there is much confusion around how testing works in an agile company.  Bigger organizations, and organizations under regulatory scrutiny, have a lower risk tolerance than companies in other industries.  As a result of that lower risk tolerance, they place a premium on quality and testing.  A lot of the Agile literature talks about testing in very high level terms (“use automation”), but does not talk specifically about how to go about testing with Agile while still keeping quality concerns at bay.

Why Test Planning is Important, Even with Agile

Cross functional teams are important.  Short iterations are important.  Close collaboration is important, and demonstrations of working product at the end of every sprint are important.  However, quality and testing are also important, and companies need to understand how to address their quality concerns using Agile frameworks.  Additionally, most companies do not have QA teams that can start coding on day 1, either as part of a cross functional team or as part of an automated testing team.  The transition to truly cross functional teams can take years.  In the meantime, you still have to deliver projects.  As such, companies need a stopgap/interim solution to the test planning problem.

Step 1, Deputize QA to Inject Quality Throughout the Process

Quality Injection, Painful But Worth It

Quality starts as a mindset.  As such, QA needs to think of themselves not just as “testers”, but as the voice of quality throughout the process.  This means they need to be involved early and often, rather than waiting for code to be tossed over the wall to test.

This is sometimes difficult for QA organizations that are used to operating in a certain way.  You will get the usual concerns about “productivity” (What is QA doing at the beginning of the sprint?  They can’t just sit around!)  Of course not.  At the beginning of the sprint, they should be:

  • Helping the Product Owner Create Acceptance Criteria
  • Thinking about Testability of New Features
  • Creating Positive, Negative, and Boundary Cases
  • Working with Developers on Unit Tests

This serves to turn your “testers” into “Quality Advisers” for the rest of the team.  Finding issues in the process during story authorship and development planning is much more effective than finding issues after code has been written.

Step 2, Use The Acceptance Criteria As A Guide to Create Test Plans

Use Acceptance Criteria As A Guide

When the Product Owner creates the initial set of acceptance criteria, it is usually centered around 3-5 items the Product Owner will be able to inspect herself to see if the user story is complete to her satisfaction.  However, there is usually much more going on under the hood than what the Product Owner sees, and this is where QA can really help the team shine.

QA should be able to help the Product Owner author additional acceptance criteria as part of story authorship by asking a few powerful questions like, “How will we know that works?” or “How will we test that?”

Product Owners sometimes fall short in creating verifiable acceptance criteria for their user stories, and QA can help make the acceptance criteria more verifiable and robust.
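As a sketch of what “verifiable” can look like in practice, here is a hypothetical “order discount” story with each acceptance criterion expressed as a concrete check. The rule, the thresholds, and the function names are all invented for illustration; the point is that every criterion answers the question “How will we test that?”

```python
# Hypothetical story: orders of $100 or more get 10% off.
# Each acceptance criterion becomes a check anyone on the team can run.

def apply_discount(order_total):
    """Orders of $100 or more get 10% off; smaller orders are unchanged."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total

# "How will we know that works?" -- one concrete check per criterion.
def criterion_large_order_discounted():
    # Given a $200 order, the customer pays $180.
    return apply_discount(200.00) == 180.00

def criterion_small_order_unchanged():
    # Given a $50 order, no discount is applied.
    return apply_discount(50.00) == 50.00

def criterion_boundary_exactly_100():
    # Boundary case the Product Owner might not have stated: exactly $100.
    return apply_discount(100.00) == 90.00
```

Notice that the boundary criterion is exactly the kind of item QA tends to surface during story authorship and the Product Owner tends to miss.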

Step 3, Expand on the Acceptance Criteria with Positive and Negative Test Cases

QA Engineer Thinking Up Negative Test Cases

Most Acceptance Criteria are written as positive cases (when I do this, and do that, I should see this).  One of the first places to start in test planning is to create a negative test case for every positive case.

Next, have a discussion with the Product Owner to gather more cases of acceptable behavior for the system, and ensure each of those positive cases has a negative test case.  Once you have around 20 or so positive and negative cases, you have the beginning of something you can automate in order to create a smoke test and mini regression test.  Do this for every story, and now you have a regression suite from the beginning of the project.
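A minimal sketch of the positive/negative pairing, using a made-up funds-transfer rule (the rule and all values are assumptions for illustration). Each positive case the Product Owner describes gets a mirrored negative case, and the pairs together form the seed of an automatable smoke test:

```python
# Hypothetical rule: a transfer is allowed when the amount is positive
# and does not exceed the current balance.

def can_transfer(balance, amount):
    return 0 < amount <= balance

# Positive cases: acceptable behavior gathered from the Product Owner.
positive_cases = [(100, 50), (100, 100), (100, 0.01)]

# Negative cases: one mirror for each positive case above.
negative_cases = [(100, -50), (100, 100.01), (100, 0)]

def smoke_test():
    """Run every positive and negative case; True when all pass."""
    all_allowed = all(can_transfer(b, a) for b, a in positive_cases)
    all_rejected = not any(can_transfer(b, a) for b, a in negative_cases)
    return all_allowed and all_rejected
```

Grow this list story by story and the smoke test becomes the mini regression suite described above.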

Step 4, Automate, starting at the Unit Level

Unit Level Automation is Deep Within the Code

The first place to start in terms of automation is with the positive and negative test cases.  Have QA work with developers to create the initial set of automated unit tests based on these cases.

QA can provide the blueprint in the form of input-process-output similar to how acceptance criteria are created.  This will give development an easy way to create the tests according to the plan.  For companies who want to be more aggressive in creating cross functional skillsets, have QA create the automated unit tests under the guidance of a developer.
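Here is a sketch of what that input-process-output blueprint can look like once it is handed to development (or written by QA under a developer's guidance). The tax function, the rate, and the rows of the blueprint are all invented for illustration:

```python
# QA writes the blueprint as (input, expected output) rows, much like
# acceptance criteria; development turns each row into a unit test.
import unittest

def sales_tax(subtotal, rate=0.08):
    """Process under test: apply a flat tax rate to the subtotal."""
    return round(subtotal * (1 + rate), 2)

# QA's input-process-output blueprint for this hypothetical function.
BLUEPRINT = [
    (100.00, 108.00),  # typical input
    (0.00, 0.00),      # boundary: empty cart
    (19.99, 21.59),    # rounding behavior
]

class SalesTaxTest(unittest.TestCase):
    def test_blueprint_rows(self):
        for given, expected in BLUEPRINT:
            self.assertEqual(sales_tax(given), expected)
```

Because the blueprint is just data, QA can keep extending it with new positive, negative, and boundary rows without touching the test harness itself.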

Step 5, Automate at the Service Level

Not Automating at the Service Layer Makes People Sad

A lot of software today is based loosely on a 3 tier structure, where you have a front end, a service layer, and a back end.  Unfortunately, most of the time when I ask companies about test automation, it is exclusively at the front end, with nothing at the service level or at the unit level.

You can use tools like TestComplete to automate at the API level, or create a custom harness to exercise the API.  Use your positive and negative test cases as a guide to exercise the business logic at the API level.
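As one possible shape for such a custom harness, here is a self-contained sketch that stands up a tiny in-process HTTP API (a stand-in for your real service layer, since TestComplete itself is out of scope here) and drives it with a positive and a negative case. The `/discount` endpoint and its business rule are invented for illustration:

```python
# A minimal custom harness for service-level testing: start a stand-in
# service, then exercise its API with positive and negative cases.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from urllib.request import urlopen

class DiscountHandler(BaseHTTPRequestHandler):
    """Stand-in service: GET /discount?total=N applies a 10% discount at $100+."""
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        try:
            total = float(qs["total"][0])
        except (KeyError, ValueError):
            self.send_response(400)   # malformed input rejected at the boundary
            self.end_headers()
            return
        result = total * 0.9 if total >= 100 else total
        body = json.dumps({"total": round(result, 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep harness output quiet
        pass

def run_service_tests():
    server = HTTPServer(("127.0.0.1", 0), DiscountHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_port}"
    try:
        # Positive case: business logic applies the discount at the API level.
        with urlopen(f"{base}/discount?total=200") as resp:
            discounted = json.load(resp)["total"] == 180.0
        # Negative case: the API rejects malformed input with an error status.
        try:
            urlopen(f"{base}/discount?total=abc")
            rejected = False
        except Exception:
            rejected = True
        return discounted and rejected
    finally:
        server.shutdown()
```

Against a real system you would point the same positive/negative case list at the actual service URL instead of the stand-in.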

Step 6, Automate at the UI Level

Typically, I am not a big fan of UI level automation, at least at first.  This is because UI automation tends to be fragile, and even when the UI controls themselves are being instrumented for test, if there are major changes to the front end, that work tends to be wasted.

One exception to this is input validation.  When software relies on input validation (such as 0-9 or A-Z), the UI is usually the first place the input is validated.  It makes sense to validate at this level if you already have unit tests and API/Service Layer tests in place.

Use the input validation rules as a guide to create negative and positive test cases based on acceptable and unacceptable inputs.
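A small sketch of deriving those cases directly from a validation rule. The rule matches the 0-9/A-Z example above; the field name and the specific inputs are assumptions for illustration:

```python
# Hypothetical UI field: a part number that must be digits 0-9 or
# uppercase letters A-Z only. The rule itself generates the test cases.
import re

PART_NUMBER_RULE = re.compile(r"[0-9A-Z]+")

def accepts(value):
    """True when the whole value satisfies the validation rule."""
    return bool(PART_NUMBER_RULE.fullmatch(value))

def validation_cases_pass():
    acceptable = ["ABC123", "0", "Z"]               # inside the rule
    unacceptable = ["abc123", "AB-12", "", "A B"]   # outside the rule
    return (all(accepts(v) for v in acceptable)
            and not any(accepts(v) for v in unacceptable))
```

Each unacceptable input here is a ready-made negative UI test: type it into the field and assert the form refuses it.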

Step 7, Explore the Boundaries

The great tragedy of QA in most large organizations is that we rely on armies of people to run rote test plans that were written months ago as a way to say “Yes, it’s tested.”  Even worse, QA almost never has enough time to manually regress everything, so they do the best they can with the time they are given.  However, rote work is mind-numbing and wastes the creativity of the people you have hired.  If you have test plans that need to be run the same way every time, those are great candidates for automation, either at the unit level, the service layer, or the UI level.

This frees up your humans to do the work that humans excel at and computers can’t do well: creative work.  You will get a much higher quality product by automating the rote work and using people to “try to break it” by doing things you didn’t think of, but your users certainly will, like funky edge cases, improper input, exiting a form before it’s complete, partial saves, bad characters, and other items like that.

Step 8, Add Bugs to Automation

When QA finds these strange edge cases, and they will, they need to collaborate with development to figure out the best place to automate a test that guards against this regression.  In some cases it will be at the unit level, in some cases the service layer, and in some cases the UI layer.  In some cases, it will not be able to be tested automatically at all, which makes it a good candidate for the manual regression suite.
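One common pattern for capturing a found bug in automation is a regression test named after the ticket, pinned to the exact input that failed. The bug, the function, and the ticket number here are all invented for illustration:

```python
# Hypothetical bug BUG-1042: the quantity parser crashed when users typed
# surrounding spaces. The fix and its regression test live together.

def parse_quantity(text):
    """Parse a quantity field; BUG-1042 was a failure on surrounding spaces."""
    text = text.strip()  # the fix: tolerate leading/trailing whitespace
    if not text.isdigit():
        raise ValueError(f"invalid quantity: {text!r}")
    return int(text)

def test_regression_bug_1042():
    # The exact input that broke in the field, kept in the suite forever.
    return parse_quantity(" 3 ") == 3
```

Naming the test after the ticket makes it obvious, months later, why that oddly specific input matters.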

Hopefully this primer will give QA Managers, Agile Change Agents, and Technology Leaders a better understanding of how cross functional teams, QA, testing, requirements and development work hand in hand when using Agile frameworks.  Just because you’re moving faster and working differently does not mean you need to sacrifice quality.

Stay Agile, My Friends


Hi Jayne,

“How do other teams handle/manage these more detailed testing documents?”

Often, we spend a lot of time thinking about testing and testing documentation when we would be better served thinking about QUALITY. Quality is not something that can be easily added at the end of a development cycle. It is better added as part of the development cycle itself.

Most projects’ testing hierarchy looks like this, with mostly manual testing and some integration testing.

The un-automated testing triangle

You can remove a lot of the overhead associated with needing to create detailed test plan documentation by engaging in aggressive Test Driven Development/Unit Tests and Automated Acceptance Test Driven Development.

When projects save their testing to the end of the development cycle (dev->test), it has a couple of negative side effects:
1. It tends to make developers have the mindset that they are not responsible for testing.
2. It defers the quality risk to the end of the development cycle, where it costs the most to fix.

Test Driven Development/Unit Testing forces your developers to think about testability up front, in terms of code as well as design, and makes them responsible for ensuring that the individual units/functions/methods are verified working as part of the build.

Acceptance Test Driven Development, with a tool such as Cucumber, is a wonderful way to test the software in aggregate, automatically. Automated ATDD tests can be written by ANYBODY (PO, QA, Devs, SM, Stakeholders, etc.) and the automation of the acceptance test criteria can be decoupled from the code, meaning they can be written in parallel.
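To make the decoupling concrete, here is a plain-Python sketch of an acceptance test in given/when/then form, talking to the system only through a small driver. Real teams would typically use Cucumber or a similar tool; the "cart" behavior and the driver interface are assumptions for illustration:

```python
# The driver is the only thing that knows about the implementation, so
# the acceptance test can be written before (or in parallel with) the code.

class CartDriver:
    """Thin layer between the acceptance test and the system under test."""
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def acceptance_cart_total():
    cart = CartDriver()           # Given an empty cart
    cart.add_item("book", 12.50)  # When I add a book at 12.50
    cart.add_item("pen", 1.25)    # And a pen at 1.25
    return cart.total() == 13.75  # Then the total is 13.75
```

Swap the driver's internals for calls into the real application and the given/when/then steps stay exactly as written.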

As a result, an Agile testing hierarchy looks more like this:

The Automated Testing Triangle

With unit tests as a foundation and manual testing at the very top.

If you go down the road of automating unit tests and automating acceptance tests in parallel with the code being developed, you will gain a few things:

– Quality as part of the process, as part of the design, as a shared responsibility
– Quality not saved as a step near the end of the development cycle
– Less of a need for rote test plans, as most of the rote stuff will be automated, meaning that your human testers can do the kind of testing humans are good at, exploratory testing.
– Less of a need for a large manual regression cycle, since every build and every change can be automatically regressed at both the unit level and the acceptance level multiple times a day.

If you move toward unit test->acceptance test automation then the documentation becomes a lot more lightweight and less of a chore to “manage”.
