Test and Verification in Scrum

In a perfect Scrum world, the team tests everything themselves. I think that misses an important point – developers have a code-centric view of the domain, while good testing requires a user- or business-centric view of it. I think it is impossible to both have a deep understanding of the code and be a good tester.

That doesn’t relieve the developers of testing – developers with any pride in what they do of course unit test all their code. Unit tests (whether automated or not – I’ll leave that discussion outside this post) are important for high quality software, but not sufficient on their own. There have to be system level tests and user/acceptance tests too.

I was recently introduced to the V model of testing by Kentor's test manager Maria Larsson. It emphasizes the different levels of tests that are required in a project.

A simple V model for a scrum project could have three levels of tests.

  • Unit tests performed by developers, against the sprint backlog item descriptions.
  • System tests performed by a tester in the team, against the product backlog item user stories.
  • Acceptance tests performed by the customer, against the project goals.

Testers are Requirements Stakeholders

All of the tests above are performed against a specification. This means that not only the developers, but also the testers are stakeholders of the requirements in the project.

For the sprint backlog items and product backlog items it is quite simple, because those are produced anyway. For the acceptance tests it is harder, because in scrum no complete specification of the system is produced. The developers don’t need it. For us, it is enough to have product backlog items describing what to do in each sprint.

Changing Requirements

One of the fundamentals of agile software development is to embrace change. With scrum, that is handled through the product backlog, which can contain both new features and changes to existing features. Consider for example a price calculation engine for a cab company. In each sprint, a new product backlog item that evolves the calculation is implemented.

  1. The fee is €2 per kilometer, plus a starting fee of €2 for each drive.
  2. On weekends and during nights the fees are 50% higher.
  3. There is no fixed starting fee, but a €2/minute fee when an ordered cab has to wait.
  4. There is a €1/minute fee during drive as well. The kilometer fee is €1.

For each of the four sprints, the requirements are clear. It is possible to do the system tests after each sprint against each of these product backlog items. When doing the acceptance tests on the complete system there is a problem: What requirements should the tests verify? Each of the product backlog items describes what to do in the sprint, but none of them holds the final, complete requirements.
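To make the deltas concrete, here is a minimal sketch of the first backlog item on its own (the function name and signature are my own illustration, not from the post):

```python
def fare_sprint_1(distance_km: float) -> float:
    """Backlog item 1: €2 per kilometer plus a €2 starting fee per drive."""
    return 2.0 * distance_km + 2.0

# A 10 km drive under the sprint 1 rule costs 2*10 + 2 = €22.
```

Each later sprint changes this calculation rather than merely adding to it – sprint 4, for example, lowers the kilometer fee to €1 and adds a €1/minute driving fee – which is exactly why no single backlog item holds the final requirements.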

A Complete Specification is Needed

To do the acceptance tests we have to know the final, complete requirements. Even though no complete specification is produced for the developers, it is still needed for the tests. My experience is that the need for the complete specification grows over time.

When the deltas of the product backlog items become too many – or span too long a time frame – to be remembered, the complete specification has to be there. Especially when handing over a project to maintenance and getting the delivery accepted by the customer, there has to be a specification of what the system does.

For an ordered cab, there is a €2/minute waiting fee until the cab starts driving. During driving the fee is €1/minute + €1/kilometer. On weekends and during nights the fees are 50% higher.

It doesn’t have to be more complicated than that, but the description has to exist somewhere, or there will inevitably be discussions about the order of the product backlog items and how they override each other.
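As an illustration of why the consolidated description is what the acceptance tests need, the complete rule can be written down directly. This is a sketch under assumptions: the function shape is my own, and it deliberately takes a night_or_weekend flag instead of defining when night begins, since the description leaves that open.

```python
def fare(waiting_minutes: float, driving_minutes: float,
         distance_km: float, night_or_weekend: bool = False) -> float:
    """Consolidated fare rule: €2/minute while an ordered cab waits,
    €1/minute plus €1/kilometer while driving; all fees are 50%
    higher on weekends and during nights."""
    base = 2.0 * waiting_minutes + 1.0 * driving_minutes + 1.0 * distance_km
    return base * 1.5 if night_or_weekend else base

# fare(5, 20, 15) -> 45.0 (10 waiting + 20 driving + 15 distance)
# fare(5, 20, 15, night_or_weekend=True) -> 67.5
```

An acceptance test can verify this function against the consolidated description directly; there is no way to derive the same expected values from any single backlog item delta.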

When to do tests in Scrum?

The three levels of tests are performed at different times in a project. Starting at the bottom of the V model, unit testing should be part of the definition of done for sprint backlog items.
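As a sketch of what "unit testing as part of the definition of done" can look like, assuming a hypothetical driving-fee sprint backlog item from the cab example (names and values are illustrative, not from the post):

```python
def driving_fee(minutes: float, km: float) -> float:
    """Hypothetical sprint backlog item: €1/minute plus €1/kilometer during drive."""
    return 1.0 * minutes + 1.0 * km

def test_combined_time_and_distance_fee():
    # The expected value follows the sprint backlog item description exactly.
    assert driving_fee(10, 8) == 18.0

def test_zero_drive_costs_nothing():
    assert driving_fee(0, 0) == 0.0

# Run the checks directly to keep the sketch free of any test framework.
test_combined_time_and_distance_fee()
test_zero_drive_costs_nothing()
```

The point is that these tests verify the sprint backlog item description, not the user story or the project goals – those are covered by the higher levels of the V.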

The system tests should be performed once the complete product backlog item is finished. I think a good approach is to perform the system tests in the sprint right after the functionality was implemented. This means the development and testing of a feature will be spread over several sprints.

  • Sprint 1: The original PBI (product backlog item) is implemented and unit tested
  • Sprint 2: The original PBI is system tested, and additional requirements are detailed.
  • Sprint 3: An adjustment PBI based on the test results and the additional requirements is included in the sprint, implemented and unit tested
  • Sprint 4: The adjustment PBI is system tested and the final specification is updated for the acceptance tests.
  • At the end of the project the final result is tested against the final specification

Testing the features in the sprint right after they were implemented incorporates the testing into the iterative approach of scrum. With testing done continuously during the project, the overall risk of the project is reduced. The earlier the testing starts, the earlier any problems can be addressed.

 

This post is part of the Scrum series.

  • Maria Larsson on 2012-04-12

    Really good Anders. I am happy when I read sentences like this:
    “developers having any pride in what they do of course unit test all their code.” /…/ “There have to be system level tests and user/acceptance tests too”.

    When I come to Stockholm next time we must continue this chat; give me 30 minutes to talk about test – and I will change your life… :-) Maybe not your life, but your view of Coding – Test – Quality, … and in the happy end, we get satisfied users.

  • Jakob Nisses on 2012-04-13

    Very interesting reading Anders.
    We really would have needed to read this before our last “scrum” project!

  • Albin Sunnanbo on 2012-04-14

    My experience from having dedicated testers working during the sprint is that you get testing done as close to implementation as possible. Typically you get most bug reports on a sprint backlog item within a few days when the developer still has the case fresh in memory. It really speeds up fixing the bugs.

    Another social benefit of having the testers in the scrum team is that you get better team spirit. Both developers and testers are working in the same team with the same goal instead of working in opposite teams fighting each other.

  • Jim Rush on 2012-04-14

    Thank you for the posting, but I take a bit of an issue with, “I think that it is impossible to both have a deep understanding of the code and to be a good tester.” Testing, or validation, is the inverse of building. If you don’t have the mindset to be able to define and implement tests, how can you build correct systems that work correctly in all but the simplest implementations? In the V model, tests are derived from requirements, just as code is derived from the requirements.

    In practice, requirements are simple, but when applied to a complex system become very difficult to understand. For example, given some of the requirements above, what to do if the cab gets a flat? Simplistic V style requirements to test scenarios/acceptance criteria tend to go poorly.

    That “special quality” people tend to assign to good testers is usually just the ability to step out of the defined requirements box and look at the complex system for scenarios that are likely to be forgotten and fail. Subject matter expertise can also play a large factor.

    Developers tend to be focused on implementing requirements, not looking for limitations in the requirements or ways a complex system could fail. “Just enough” and similar statements are common within Agile. If you are counting on a tester to figure out you didn’t do enough, that seems a bit risky unless the tester knows the system, requirements and code architecture well enough to know where problems will arise. And if you have a person who knows that much, they probably should be part of the requirements analysis and design, not wasting their time running mundane scenarios.

    One other item: you call out who should do testing, which I think is a bit misleading. All levels of tests should be performed by development and done in an automated manner when it makes sense, to reduce long-term regression testing. System tests, in particular, tend to have high value when automated. I admit that costs for system tests can still be a factor, but this seems to be going down every year. Leave testers for running regression tests that weren’t cost effective to automate. Issues that get detected by the customer should be limited to requirement errors.

    There’s no clarification here between exploratory testing and defined tests derived from acceptance criteria. The former is best done by subject matter and user experts. The latter should be done by the team as part of a TDD/BDD type of structure.

  • Henrik Andersson on 2012-04-15

    Anders,

    I’m not sure if you work as a tester, but I get the impression that you don’t. So I assume that you have little or no experience of what you are writing about and are just playing back what someone else has told you.

    You should be very careful about whom you take advice from; not everyone who claims to know something about what they are advising on actually does. So I expect you to do the same with me and look up my track record and credibility when you evaluate my thoughts.

    My first advice to you, and to everyone else who believes what you describe is a good way of working: STOP what you are doing, don’t move an inch! If you do, there is a great risk that you are just making things worse than they already are!

    You can always come up with a context in which your way of doing things will work, but since you have not described any context I read this as general advice. However, when I try to figure out why anyone would suggest this way of doing things, the only thing I can come up with is a project that suffers from a very harmful disease, where all you are interested in doing is putting some make-up on it so it looks good instead of curing the problems. But still, this is not a way you build credibility as a tester.

    Now that I hopefully got your attention that you might be on the wrong track. Let me pin this down to you by extracting some parts of your post and commenting on it and hopefully it will all become clearer. This is not everything but just some highlights.

    “In a perfect Scrum world, the team tests everything themselves”

    Even in a not so perfect scrum (whatever a “perfect scrum” is) the team can do the testing themselves and have that be part of “Done”. As you point out, developers might not be best suited to do the testing. Instead they focus on “checking the software” and by that enable the tester who is part of the team to do the testing (http://www.developsense.com/blog/2009/08/testing-vs-checking/ and http://blog.houseoftest.se/2012/03/26/checking-as-an-enabler-for-testing/)
    This has nothing to do with a V-model; it is related to the principle that every team is set up to be self-sufficient to get the work done. The team consists of the skills and experiences it needs to accomplish its tasks. Notice that this does not mean that everyone needs to do everything equally well.

    Now to your picture:
    “A simple V model for a scrum project could have three levels of tests.”

    To understand this you need to do some homework on both scrum and waterfall, because the mix of the V-model and scrum backlogs is really confusing.
    What we do not want to do is build a waterfall model into our iterations. If you see value in working in waterfall then stick to that process and do not mess it up with scrum, and vice versa. And messing it up is just what you are doing!
    Let’s start at the bottom of the V. Here I find the Sprint Backlog. A Sprint Backlog is what a team commits to deliver over the next sprint. In your view you only need to do unit testing of what is developed; no further testing needs to be done!
    Then the next level you have is the Product Backlog. A Product Backlog is a “wish list” that the Product Owner manages. This list consists of prioritized stories of what hopefully will be developed in future sprints. This you suggest that we do system testing on! On something that has not been developed and might not make it into the product – you totally lost me here!
    On top of this you have Project Goals. Well, scrum is a product-centric management model and not a project-centric model. Usually you have sprint goals and release goals that put the product and its releases in focus, so why is it important to have project goals and to acceptance test those? Isn’t it more reasonable to acceptance test against the values and expectations that the stakeholders have, and why can’t that be partly included when testing any backlog item?
    You see the whole mix of these two models does not connect or make any sense.

    “All of the tests above are performed against a specification”
    “What requirements should the tests verify?”

    These are two really scary statements! Are you not allowed to do a test if you can’t relate it to a requirement? You are really constraining your testers to do poor testing. If every test comes from a specification, are you then testing your product or a bunch of papers? Your testing will never be better than how well your specification is written. It does not matter how much time you spend writing your specification; you will never cover every single aspect of how your system could perform. Every tester worth his/her name knows that great testing does not come ONLY from specifications. Testing is about finding new information about the product through exploration, learning, investigation and evaluation. By following your statement you will never, in an effective and efficient way, succeed in finding valuable information that your stakeholders can use as part of the foundation they base decisions on.

    “A Complete Specification is Needed”

    So I guess the agile manifesto principle “Working software over comprehensive documentation” goes out the window.
    Let me ask you, what is a complete specification, and why on earth is it more important for a tester to have than for a developer?
    I truly don’t understand this. Are developers mind-readers who automatically understand what is left out and fill in the blanks with the one correct interpretation? And are testers, on the other hand, stupid brain-dead zombies that can’t do a thing without a detailed written piece of paper? Let me tell you my friend, those things are called computers, not testers!

    Your example of a complete specification. Do you call that complete?

    So when I call the cab company and place an order for a cab two days ahead, the meter starts ticking if the cab is not moving, and when it starts moving the meter switches to another fee, and this goes on for two days until my travel is complete? I think you have a great business case here!
    Also, when does night begin? It might be nighttime for you but not for me. What about bank holidays?
    I can go on for a while here, but I think you see my point. You are not even close to complete!

    You use a user story as a base for further discussion to get everyone involved to the same understanding of what is needed.

    When to do tests in Scrum?
    And we are back into the waterfall model instead of keeping our sprints.
    You talk about testing early and then you suggest a model that tests late. This does not make sense to me!
    Why would you recommend a model that pollutes every sprint with bugs from previous sprints? Why have a model where it takes four (4) sprints to get a story done? If you like this, then use a process that fits you better than scrum.
    There are very few reasons why you can’t do the great majority of testing within a sprint. One point of having a sprint is to focus your attention on a set of stories and get those as close as reasonably possible to a finished state. This is so you can leave those stories behind and then focus on new stories without having to context switch and have multiple tasks running.
    What you are trying to cover up by doing this is that your product backlog items are too big and do not fit in one sprint. So instead of working on the real problem – why do the stories not fit in a sprint? – you create a side process to cover up your mess.

    My conclusion:
    There is nothing good or valuable about the process you suggest. Instead you will end up in a big mess.

    But my intention is not just to inform you why this is bad; I will also offer you better and more valuable ways of doing testing in a scrum project.

    You can read my article, SCRUM & SBTM: Let´s get married, that was published in STP Magazine.
    It will cost you your e-mail address to register for a membership but there is a lot of stuff worth reading.
    http://www.softwaretestpro.com/Item/5399/SCRUM–SBTM-Let%27s-Get-Married/STQA-Magazine

    You can also attend Agile Testing Days in November where I will do a presentation ”Excelling as an agile tester”
    http://www.agiletestingdays.com/program.php?p=41

    Or I offer you free skype coaching to get you on track. Hook me up on henrik,andersson (comma not dot)


    Henrik Andersson
    House of Test Consulting
    +46 702 76 88 81
    henrik.andersson@houseoftest.se
    http://www.houseoftest.se
    http://twitter.com/henkeandersson
    http://www.linkedin.com/in/henkman

    • Anders Abel on 2012-05-01

      Thanks for your feedback Henrik.

      In your comment you asked about context, and from reading your opinions I see that I should have provided more context. I don’t argue against doing tests during the sprint. Neither do I argue against having a tester in the team (“System tests performed by a tester in the team…”).

      What I’ve found hard in scrum projects is to handle tests that are dependent on people outside the team within the sprints. When doing customer specific systems, there is always a customer approval required. Unfortunately, I’ve found it incredibly hard to get the customer involved to the level where the customer’s testing can be done within the sprint. What I’m suggesting is that all testing that can be done within the sprint, by the team should be done there.

      The problem I’m trying to address is the tests that require participation of people outside the team. If I were to have those customer tests as part of the definition of done, I would hardly ever be able to finish my sprints. Of course it can be argued that the customer isn’t dedicated enough for scrum, but I don’t want to let a great, productive model fail just because of that. Instead I try to work around it.

      Another way to view the outside-sprint testing is to term it “requirements refinement”. Working with scrum, I think it’s very hard to draw the exact line between testing and iterative requirements refinement.

      The complete specification part is based on experience from past projects. The projects have delivered working software, so the developers have had all the specifications they need. The “sum” of the different product backlog items was never needed for the developers to succeed. Later on, however, the lack of a complete specification (the result of all changed PBIs) has become a problem.

      Last, the V model. I think that having different levels of testing still makes sense. When developing a single task (a sprint backlog item) it is tested as part of the definition of done. At that point all sprint backlog items derived from the PBI user story might not be completed, so the entire user story cannot yet be tested. When all sprint backlog items derived from the user story are completed, the entire story needs to be tested. It’s not enough to just test each part. That’s what I tried to illustrate.

      The highest level – project goals vs. acceptance testing can be thought of more as a stop condition for the requirements refinement – and the entire project. The steering committee has to decide when the project has reached its goals (the goals have probably changed during the project) and should be terminated. With an iterative process the steering committee isn’t locked to an original specification, instead they can evaluate the progress of the project, embrace change and make an informed decision based on what the project has produced.

  • Søren Harder on 2012-04-18

    Thanks, Henrik. You are writing many of the objections I had to this blog.

Software Development is a Job – Coding is a Passion

I'm Anders Abel, a systems architect and developer working for Kentor in Stockholm, Sweden.