Wednesday 15 July 2009

User stories: making them small enough

In terms of our company's evolution as we adopt Agile, we've moved toward having the work we need to do expressed as "user stories." But (in addition to learning how to write them) we've had a problem figuring out how long it takes to do a story (its "size") and then figuring out how many stories we can do in a sprint. So the question is - what size is the right size? Several people have been trying to answer that question within my company, and they've come up with this answer (which has been distributed around the team). I've edited it, but I didn't write it.
_____________________________________________________________

Sizeable should be Small.

And the Small is quite important. You want a number of stories in each sprint because:

· It increases the chance of getting as much work “done-done” as possible.
· The more stories you have, the smaller they are, which means more detailed acceptance criteria, which in turn means more accurate estimates and easier management.

Many of the teams working in my company have a problem breaking things down. Part of this is getting the user stories right so that it is clear what the Product Owner wants to achieve. The other part is having the skill to break things into smaller pieces. Some techniques (for web-based work) are:

1. Separate user stories per site. Make the change on one site first, then have a separate story for each additional site.
2. Progressive enhancement. Make a very simple feature in the first story and then add extra sub-features in subsequent stories. (Stories are meant to be independent, but in the case of an enhancement that builds on another story, clearly you have to be less rigid.) For example, you may choose to first implement a new workflow in one story and then add validation in another story.
3. CRUD (Create Read Update Delete) may not need to be done at the same time. Consider releasing an input form first and then later adding the edit or delete functionality.
4. Data partitions – Some complex projects require the migration or cleaning of large amounts of existing data. One approach is to focus on the most valuable data partition first and then add additional stories to deal with less important cases and edge cases. This avoids projects stalling while the team gets hung up on low-value edge cases. For example, my company had a partner ask us to de-dupe all of the data in our Billing and Contracts DB. We knew that a project had stalled the previous year when trying to de-dupe the data, so we took the approach of focusing only on the Billing Contacts. This was the highest-value group to the PO and was easy for us to de-dupe.
5. Evolution. Sometimes it’s better to start from scratch. But most of the time at my company, we are enhancing an existing piece of functionality. When creating new workflows, there is an option to “evolve” over a number of sprints - basically, take an existing workflow and tackle small stories one at a time.
6. Iterate. If your acceptance criteria sound like they will require a lot of work, sometimes all you need to do is take the criteria and turn them into child stories. When you do this, you then realize how big a single acceptance criterion is and you can add more criteria to it to help with the estimates and everyone’s understanding of the project. Sometimes you might find that you need to break these stories down even further. You can use record cards on a table in the backlog workshop and arrange them into a hierarchy if it helps. Visualizing the problem in this way can really help.
7. Draw pictures. Draw pictures of how new pages/controls/reports might look. You can then use a red marker pen to outline sub-elements and ask yourselves “can this bit of the page be split off into a separate user story?”
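To make the data-partitions idea (technique 4) concrete, here is a minimal sketch of what "de-dupe only the highest-value partition first" might look like. All of the names here (the `"billing"` partition, the record shape, the `email` key) are hypothetical illustrations, not the actual Billing and Contracts schema:

```python
def dedupe_partition(records, partition, key):
    """De-dupe only records in the chosen partition, keeping the first
    occurrence of each key. Records outside the partition pass through
    untouched - they are out of scope for this story and can be handled
    in later stories."""
    seen = set()
    result = []
    for rec in records:
        if rec["type"] != partition:
            result.append(rec)  # different partition: leave for a later story
            continue
        k = rec[key]
        if k not in seen:
            seen.add(k)
            result.append(rec)
    return result

contacts = [
    {"type": "billing", "email": "a@example.com"},
    {"type": "billing", "email": "a@example.com"},  # duplicate, dropped
    {"type": "support", "email": "a@example.com"},  # other partition, kept
]
deduped = dedupe_partition(contacts, partition="billing", key="email")
```

The design point is that the scoping lives in one parameter: a follow-up story for the next partition is the same call with a different `partition` value, rather than a rewrite.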

Slicing isn’t easy. It takes time to rotate the problem in your mind and come up with different solutions. Sometimes, setting aside some time in a team’s backlog session to explicitly brainstorm and discuss different options helps; sometimes taking it away and thinking about it on your own is the way to go. Some good rules of thumb when slicing are:

1. Have we sliced it small enough so that we can fit more than one story in a sprint (3+ is ideal)?
2. Can this story be released on its own and deliver value to the Product Owner?

A generally brilliant user-story presentation (not really about slicing):

http://www.mountaingoatsoftware.com/system/presentation/file/97/Cohn_SDWest2009_EUS.pdf

In-depth article on splitting:

http://radio.javaranch.com/lasse/2008/06/13/1213375107328.html

Wednesday 1 July 2009

Agile pretty, agile ugly

In this last week I've seen some stunning examples of Agile working really well, just like I'd expect it to - but also really badly.

First, the good. We've got developers writing automation, which I was reviewing with them alongside an automation QA specialist (who's been using our automation framework for 20 hours a week versus my 5 hours a month) providing direct guidance on our automation standards (i.e. "add 'verify negative' tests here"). We spotted a link that was hard-coded where it should have been dynamically generated, and the developer who'd been coding the link was able to start changing it immediately based on our feedback. The automation was also changed to remove a kludge that had been put in to make it work with the hard-coded link (which didn't work on our uniquely-addressed test servers). Five minutes of cross-team conversation leading to a host of changes and fixes with low overhead - totally cool.
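For what the hard-coded-link fix amounts to, here is a minimal sketch assuming a `BASE_URL` environment variable identifies each uniquely-addressed test server; the variable name, the default, and `page_url` are all my hypothetical illustrations, not our actual framework's API:

```python
import os
from urllib.parse import urljoin

def page_url(path, base=None):
    """Resolve a page link against the current test server's base URL,
    rather than hard-coding a full URL into the test.
    BASE_URL is an assumed environment variable; the default is illustrative."""
    base = base or os.environ.get("BASE_URL", "http://test-server-01.example.com/")
    return urljoin(base, path)

# Instead of asserting against a hard-coded "http://www.example.com/help.asp",
# the same test passes on any test server:
assert page_url("help.asp", base="http://test-server-07.example.com/") == \
    "http://test-server-07.example.com/help.asp"
```

Because the base comes from the environment, the same automation runs unchanged against any test server, which is exactly what the hard-coded link (and the kludge around it) prevented.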

Now the bad. It's my belief that the fetish for documenting things dynamically and on index cards really shortchanges the value of solid research into the work we are supposed to be doing. Our project for this sprint specified that we make a certain change on two pages, which can be expressed as either .asp or .html pages, so that one of them is the canonical version. The problem is that we actually have four different ways of accessing these pages (including through redirects from old versions of the pages and via links sent in emails), so there are actually a host of different versions of these essentially identical pages. But the team had only a narrow set of them identified/known to exist when we agreed to do this work, and now there are time considerations arguing against doing it for the other pages - which dilutes the effectiveness of the project. When you've got people working on areas they don't know very well, this is bound to happen - but I think it's an inherent problem of Agile: under-defining what we need to do in the name of speed. Speed and mediocrity - as a QA person, it's not really a happy place to be, releasing something that meets what was asked for but fails to achieve the spirit of it.