Tuesday, 23 February 2016
Back in the Saddle Again
Well, it's seven years and five jobs later. I'm now head of QA at a media company, and I'm fluent in Agile and in managing offshore teams. Watch this space ...
Friday, 7 August 2009
QA/testing in scrum team
This was the original guidance I gave the team on what to do after we migrated from waterfall to Scrum. I'll try to get back to this and see how things have changed in a month or so. FYI ... I'm giving up on this position and moving on as I don't really feel challenged as a QA specialist (who's not an automation specialist but rather a manager) in this way of working. I'm moving on to a position as a Test Strategy Manager at another company. More updates later!
QA in the Scrum teams
Release planning
1. Estimate risk areas for code
2. Are there any special QA considerations?
3. Is there an order in which the chosen tasks should be done to make a more meaningful release?
4. Are there items that should be done sooner because they are more risky?
5. Are there things that need to be developed to help QA, e.g. a method of accessing an algorithm that would otherwise require a GUI? (See the sketch after this list.)
6. Lightweight test planning, identifying assumptions, risk, priority, scope. Does QA need time to learn things?
7. Test data sets? Define/create
8. Is there a test environment that isn’t a developer sandbox?
9. What sort of reporting is happening?
10. Be alert to a need for load or security testing.
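For item 5, a testability hook can be as simple as exposing the algorithm behind a plain function so QA can drive it without the GUI. A minimal sketch, assuming a hypothetical scoring algorithm (the name score_listing and the rules are made up for illustration, not from any real system):

```python
# Hypothetical example: an algorithm normally reachable only through
# the GUI, exposed as a plain function QA can call directly.

def score_listing(views: int, clicks: int, age_days: int) -> float:
    """Relevance score used by a results page (illustrative rules only)."""
    if age_days < 0:
        raise ValueError("age_days must be non-negative")
    freshness = max(0.0, 1.0 - age_days / 30.0)
    ctr = clicks / views if views else 0.0
    return round(0.7 * ctr + 0.3 * freshness, 4)

# QA can now probe boundary values in seconds, no browser required:
assert score_listing(views=0, clicks=0, age_days=0) == 0.3
assert score_listing(views=100, clicks=10, age_days=30) == 0.07
```

The point isn't the rules themselves - it's that QA can exercise the logic directly instead of clicking through screens.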
Beginning of Sprint
1. Prioritization (planning poker) – include QA time.
2. Are there special QA items to be added into the sprint?
3. Think about “happy path” test cases and get them to developers ASAP.
4. Be alert to items where the testing severely outweighs the development time
5. Create or obtain necessary test data (see the data-generation sketch after this list).
6. Be alert to the testability of the work items. Ask how to test if it’s not clear
7. Verify environment is ready for testing
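For item 5, test data rarely needs heavyweight tooling; a small generator script checked in alongside the tests is often enough. A sketch with made-up field names, covering valid, boundary, and deliberately bad rows:

```python
# Hypothetical sketch: generate a CSV of candidate records covering the
# partitions we care about (valid, boundary, and deliberately bad data).
import csv

ROWS = [
    # (name, email, age) - one row per equivalence class
    ("Alice Example", "alice@example.com", 34),    # valid
    ("Bob Boundary",  "bob@example.com",   18),    # minimum allowed age
    ("",              "no-name@example.com", 25),  # missing name
    ("Carol Badmail", "not-an-email",      40),    # malformed email
]

with open("test_candidates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email", "age"])
    writer.writerows(ROWS)
```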
Sprint Zero
1. Set up server and get it stable.
2. Investigate the items being considered for the backlog from a “what does it need to do” aspect – prepare for the product planning/prioritization meeting.
3. Get automation running on the team server.
4. Meet daily to discuss progress, etc. (start Scrum meetings).
5. Do you need something deployed that’s not part of the standard setup, such as Multimailer or Jobs Police? Work with the WSEs to get this created.
6. Spend time learning Selenium (a starter script follows this list).
7. Attend workshops on the product backlog.
8. Investigate development/testing options for items in product backlog in preparation for the product planning meeting.
9. Work on integration with your team.
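For item 6, a useful first exercise is a single happy-path smoke test. A minimal sketch using Selenium WebDriver syntax - the URL, element IDs, and credentials are placeholders, not our real setup:

```python
# Minimal happy-path smoke test; URL and element IDs are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://test-server.example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The happy path: a known user lands on the dashboard.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```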
During Sprint
1. Lightweight scripts for “happy path” – execute as soon as done.
2. Daily code drops into the test environment - you should get a report of what's been finished (or this should go on the information radiator) - then test.
3. Work with the developer while testing? At the least, show defects right away and get them fixed ASAP.
4. Script out more complex cases
5. Run automation against build daily
6. Daily standup – any blocking issues? Find out what is ready to test today and tomorrow.
7. Get information on areas that aren’t defined well to make sure you’re testing things as intended, not as designed
8. Develop scripts for the non-happy path (bad data, etc.) and run these when the developer says it's ready (see the sketch after this list).
9. Do exploratory testing regularly
10. Bugs that can’t be fixed in the time left should be put into the sprint backlog – bugs fixed right away don’t need to be documented
11. Define what automated tests should be added to the regression and work to get them created.
12. Keep track of bugs/code failures on information radiator
13. Make sure team is aware of QA progress in general
14. Work to improve quality of code in general
15. Keep an eye on the next iteration
16. Participate in product backlog definition if necessary
17. Be mindful of testing from other points of view
18. Be alert to estimates that didn’t include enough time for bug fixes
19. Review plans for testing with developers so they know what’s coming. Work to resolve discrepancies.
20. Ask developers for feedback on high risk areas of code.
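For item 8, non-happy-path cases lend themselves to table-driven tests: enumerate the bad inputs once and assert the system rejects each. A sketch using pytest, where create_user is a hypothetical stand-in for whatever the team is actually building:

```python
import pytest

# Hypothetical stand-in for the code under test.
def create_user(name: str, email: str) -> dict:
    if not name or "@" not in email:
        raise ValueError("invalid user data")
    return {"name": name, "email": email}

@pytest.mark.parametrize("name,email", [
    ("", "alice@example.com"),   # empty name
    ("Bob", "not-an-email"),     # malformed email
    ("Carol", ""),               # empty email
])
def test_create_user_rejects_bad_data(name, email):
    with pytest.raises(ValueError):
        create_user(name, email)
```

Adding a new bad-data case is then one line in the table, which keeps these scripts cheap to extend as defects are found.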
End of Sprint
1. Participate in review of code with owner
2. Make sure carryover bugs have made it into product backlog
3. Execute end-to-end and system tests against a stable system
Pre-release
1. Define acceptance tests
2. Pass on information to the Ops team about the condition of the code
3. Execute integration testing with other code likely to be released with this code
4. Verify automation has been created to capture new functionality
5. Identify any tests that can only be executed in the live environment, if this hasn't been done already
Taking project through release
1. Verify pre-release environment is ready for testing
2. Execute automation as required
3. Execute acceptance tests
4. Perform integration, system, end to end, and exploratory testing against pre-release environment with new code
5. Work with developers to get bugs fixed ASAP
6. Work with release manager/scrum masters to make sure code quality is appropriate for release
7. Write up bugs so they can be monitored by all projects in release
8. Participate in release of product to live
9. Participate in testing of fixes to any errors found after release
Wednesday, 15 July 2009
User stories: making them small enough
In terms of our company's evolution as we adopt Agile, we've moved toward having the work we need to do expressed as "user stories." But (in addition to learning how to write them) we've had a problem figuring out how long it takes to do a story (its "size") and then figuring out how many "stories" we can do in a sprint. So the question is - what size is the right size? Several people have been trying to answer that question within my company, and they've come up with this answer (which has been distributed around the team). I've edited it, but I didn't write it.
_____________________________________________________________
Sizeable should be Small.
And the Small is quite important. You want a number of stories in each sprint:
· It increases the chance of getting as much work “done-done” as possible.
· The more stories you have, the smaller they are, which means more detailed acceptance criteria, which means more accurate estimates and better manageability.
Many of the teams working in my company have a problem breaking things down. Part of this is getting the user stories right so that it is clear what the Product Owner wants to achieve. The other part is having skill in being able to break things into smaller pieces. Some techniques (for web based work) are:
1. Separate user stories per site. Make the changes on one site first, then have a separate story for each additional site.
2. Progressive enhancement. Make a very simple feature in the first story and then add extra sub-features in subsequent stories. (Stories are meant to be independent, but in the case of an enhancement that builds on another story, clearly you have to be less rigid.) For example, you may choose to implement a new workflow in one story and then add validation in another story.
3. CRUD (Create Read Update Delete) may not need to be done at the same time. Consider releasing an input form first and then later adding the edit or delete functionality (see the sketch after this list).
4. Data partitions – Some complex projects require the migration or cleaning of large amounts of existing data. One approach is to focus on the most valuable data partition first and then add additional stories to deal with less important cases and edge cases. This avoids projects stalling while the team gets hung up dealing with low-value edge cases. For example, my company had a partner ask us to de-dupe all of our data in our Billing and Contracts DB. We knew that a project had stalled the previous year when trying to de-dupe the data, so we took the approach of just focusing on the Billing Contacts. This was the highest-value group to the PO and was easy for us to de-dupe.
5. Evolution. Sometimes it’s better to start from scratch. But most of the time at my company, we are enhancing an existing piece of functionality. When creating new workflows, there is an option to “evolve” over a number of sprints - basically, take an existing workflow and tackle small stories one at a time.
6. Iterate. If your acceptance criteria sound like they will require a lot of work, sometimes all you need to do is take the criteria and turn them into child stories. When you do this, you then realize how big a single acceptance criterion is and you can add more criteria to it to help with the estimates and everyone’s understanding of the project. Sometimes you might find that you need to break these stories down even further. You can use record cards on a table in the backlog workshop and arrange them into a hierarchy if it helps. Visualizing the problem in this way can really help.
7. Draw pictures. Draw pictures of how new pages/controls/reports might look. You can then use a red marker pen to outline sub-elements and ask yourselves “can this bit of the page be split off into a separate user story?”
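To make technique 3 concrete: the first story ships only the create path, and the deferred operations stay visible as explicit gaps rather than half-built code. A sketch - the ListingStore class is invented for illustration:

```python
# Hypothetical sketch of CRUD sliced across stories: story 1 ships only
# "create"; read and delete are separate, later stories.
class ListingStore:
    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, title: str) -> int:  # Story 1: ships this sprint
        row_id = self._next_id
        self._rows[row_id] = {"title": title}
        self._next_id += 1
        return row_id

    def read(self, row_id: int) -> dict:  # Story 2: a later sprint
        raise NotImplementedError("planned as a separate story")

    def delete(self, row_id: int) -> None:  # Story 3: a later sprint
        raise NotImplementedError("planned as a separate story")
```

Each later story is then independently estimable and releasable, which is exactly what the slicing is for.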
Slicing isn’t easy. It takes time to rotate the problem in your mind and come up with different solutions. Sometimes, setting aside some time in a team’s backlog session to explicitly brainstorm and discuss different options helps; sometimes taking it away and thinking about it on your own is the way to go. Some good rules of thumb when slicing are:
1. Have we sliced it small enough so that we can fit more than one story in a sprint (3+ is ideal)?
2. Can this story be released on its own and give value for the Product Owner?
Generally brilliant user story presentation (not really about slicing):
http://www.mountaingoatsoftware.com/system/presentation/file/97/Cohn_SDWest2009_EUS.pdf
In depth article on splitting:
http://radio.javaranch.com/lasse/2008/06/13/1213375107328.html
_____________________________________________________________
Labels: agile, sizing, sizing agile stories, writing good user stories
Wednesday, 1 July 2009
Agile pretty, agile ugly
In this last week I've seen some stunning examples of Agile working really well, just like I'd expect it to - but also really badly.
First, the good. We've got developers writing automation, which I was reviewing with them in conjunction with an automation QA specialist (who's been using our automation framework for 20 hours a week versus my 5 hours a month) who was providing direct information about our automation standards (e.g. "add in 'verify negative' tests here"). We saw where a link was hard-coded where it should have been dynamically generated - and the developer who'd been coding the link was able to start changing it immediately from our feedback. And the automation was also changed to remove a kludge that had been put in to make it work with the hardcoded link (which didn't work on our uniquely-addressed test servers). Five minutes of cross-team conversation leading to a host of changes and fixes with low overhead - totally cool.
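The fix is the standard one: derive links from a configurable base URL instead of hardcoding them, so the same code (and the same automation) resolves correctly on uniquely-addressed test servers and in production. A hedged sketch of the idea, not our actual framework code - APP_BASE_URL and listing_url are invented names:

```python
import os
from urllib.parse import urljoin

# The base URL comes from the environment (and should end with a slash),
# so each uniquely-addressed test server - and production - builds
# correct links without any code changes.
BASE_URL = os.environ.get("APP_BASE_URL", "https://www.example.com/")

def listing_url(listing_id: int) -> str:
    return urljoin(BASE_URL, f"listings/{listing_id}")

print(listing_url(42))  # e.g. https://www.example.com/listings/42
```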
Now the bad. It's my belief that the fetish for documenting things dynamically and on index cards really shortchanges the value of solid research about the work we are supposed to be doing. Our project for this sprint specified that we make a certain change on two pages, which can be expressed either as .asp or .html pages, so that one of them is the canonical version - but the problem is that we actually have four different ways of accessing these pages (including through redirects from old versions of the pages and via links sent in emails), so there are actually a host of different versions of these essentially identical pages. But the team only had a narrow set of them identified/known to exist when we agreed to do this work, and now there are time considerations preventing us from doing it for the other pages - which dilutes the effectiveness of the project. When you've got people working on areas they don't know very well this is bound to happen - but I think it's an inherent problem with Agile: underdefining what we need to do in order to achieve speed. Speed and mediocrity - as a QA person, it's not really a happy place to be in, releasing something that meets what was asked for but fails to achieve the spirit of it.
Wednesday, 24 June 2009
Amazing differences between Agile teams
I've been moved to a new team as a part of a small reshuffle. I'm very happy about this - it's like being adopted by a family of people that all get along.
Note from today's planning session: it is SO MUCH NICER to come to a planning session when you've already worked out what the tasks are (mostly) for the stories you're estimating - it really reduces the effort in getting the time estimates done because you understand the work so much better.
Friday, 22 May 2009
Four months into Agile: a retrospective
Our scrum master has quit, my best friend has been laid off (because the company doesn't need product analysts anymore), and I'm on the team that's considered the worst in the company.
My morale is low.
Friday, 13 March 2009
Improving your chances of successfully rolling out Agile by being better prepared at the start
Over the course of the last month plus, I've been very frustrated by how things have gone within my new team - that has been newly made Agile - in a company that's just gone Agile (as of January 6th, a rolling rollout across seven or so teams). Working through this, I see several books in the making - because there is so much information that people need about how to really make this work and I'm not sure if there's enough literature out there to cover it. Yeah, sure, you can hire a coach to help your company make the transition, but it's really expensive, and how do you know you're going to get the right person for the job?
So ... our training, across the department, pretty much consisted of some two days that made us "Scrum Masters." Now, this is about the most useless training I've ever had for the money, and you aren't actually properly trained to be a "scrum master" (as in "the person who runs interference for a scrum team") and you're sure as hell not a master of Scrum practice just because you've spent two days learning about how to properly scope the amount of work you can do in a sprint. Worse yet, people come out of it thinking they know how to do Agile when in fact they only know how to do the tiniest bit of Agile. Scrum Master training is basically aimed at developers. It ignores the needs for requirements gathering, documentation, testing, and releasing a product - and if that's where you are involved in the software lifecycle, you will come out of the training thinking there is no role for you in the new world. But this is Not So.
So, if you're looking to switch your company to Agile, what can you do to best ensure success? I expect there is a lot more to say about this than I can cover in just this post, but what I would like to encourage people to do is learn how to build a product backlog. The moment when the team comes together for the first sprint estimation meeting is a month or two too late to be starting this work. The product owners need to learn how to start expressing what they need in a way that can help a team make good estimates, rather than just producing a list of tasks that all require further investigation, and someone needs to make sure the results of this research are somewhere the team can access, so that (for example) the QA folks can work on developing extensive acceptance criteria during their down time in the sprint rather than waiting until the actual point where the work for a sprint is being chosen to decide what the verification points are going to be. (These meetings move fast, and it's difficult to spend the time you need to develop these tests fully when it's holding up moving on to the next item for estimation.)
I think this is where we've critically failed. We didn't have enough in the backlog to start with, which meant we didn't have enough work to prioritize the first day (we ran out of work rather than running out of time), we didn't spend time during the sprint working to further flesh out our backlog, and ultimately our team probably operated at 60% of what our capacity should have been for the sprint. We didn't know where to go to get work and we didn't have the work prioritized so we could just "grab the next card." Result: a very frustrating first sprint, and, I'm sorry to say, a second sprint that is about to start without a sufficient definition of the work we are going to be doing.
Next up I expect I'll be talking about how to create user stories.