Friday 7 August 2009

QA/testing in scrum team

This was the original guidance I gave the team on what to do after we migrated from waterfall to Scrum. I'll try to get back to this and see how things have changed in a month or so. FYI ... I'm giving up on this position as I don't really feel challenged as a QA specialist (one who's not an automation specialist but rather a manager) in this way of working. I'm moving on to a position as a Test Strategy Manager at another company. More updates later!

QA in the Scrum teams

Release planning

1. Estimate risk areas for code
2. Are there any special QA considerations?
3. Is there an order in which the chosen tasks should be done to make a more meaningful release?
4. Are there items that should be done sooner because they are more risky?
5. Are there things that need to be developed to help QA, i.e. a method of accessing an algorithm that would otherwise require a GUI? (See the sketch after this list.)
6. Lightweight test planning, identifying assumptions, risk, priority, scope. Does QA need time to learn things?
7. Test data sets? Define/create
8. Is there a test environment that isn’t a developer sandbox?
9. What sort of reporting is happening?
10. Be alert to a need for load or security testing.
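
To make item 5 concrete, here's a tiny sketch of what "a method of accessing an algorithm that would otherwise require a GUI" might look like. The names and the discount logic are invented for illustration, not anything from our code base; the point is that once the developers expose the business rule as a plain function, QA can cover it exhaustively without driving a browser through a form for every combination.

    # Hypothetical example: pure business logic, callable directly from a test
    # instead of only through a web form.
    def calculate_discount(order_total, customer_tier):
        rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
        return round(order_total * rates.get(customer_tier, 0.0), 2)

    # QA can now check the algorithm with quick assertions like these, leaving
    # the GUI tests to verify wiring rather than every rate combination.
    assert calculate_discount(100.00, "gold") == 10.00
    assert calculate_discount(100.00, "unknown-tier") == 0.00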

Beginning of Sprint
1. Prioritization (planning poker) – include QA time.
2. Are there special QA items to be added into the sprint?
3. Think about “happy path” test cases and get to developers ASAP.
4. Be alert to items where the testing severely outweighs the development time
5. Create or obtain necessary test data
6. Be alert to the testability of the work items. Ask how to test if it’s not clear
7. Verify environment is ready for testing

Sprint Zero
1. Set up server and get it stable.
2. Investigate the items being considered for the backlog from a “what does it need to do” aspect – prepare for the product planning/prioritization meeting.
3. Get automation running on the team server.
4. Meet daily to discuss progress, etc. (start Scrum meetings).
5. Do you need something deployed that’s not part of the standard setup, such as Multimailer or Jobs Police? Work with the WSEs to get this created.
6. Spend time learning Selenium.
7. Attend workshops on the product backlog.
8. Investigate development/testing options for items in product backlog in preparation for the product planning meeting.
9. Work on integration with your team.

During Sprint
1. Lightweight scripts for the "happy path" – execute as soon as the code is done (see the sketch after this list).
2. Daily code drops into the test environment (you should get a report of what's been finished, or this should go on the information radiator), then test.
3. Work with developer when testing? At the least show defects right away and get them fixed ASAP.
4. Script out more complex cases
5. Run automation against build daily
6. Daily standup – any blocking issues? Find out what is ready to test today and tomorrow.
7. Get information on areas that aren’t defined well to make sure you’re testing things as intended, not as designed
8. Develop scripts for non-happy path (bad data etc.) and run these when the developer says it’s ready.
9. Do exploratory testing regularly
10. Bugs that can’t be fixed in the time left should be put into the sprint backlog – bugs fixed right away don’t need to be documented
11. Define what automated tests should be added to the regression and work to get them created.
12. Keep track of bugs/code failures on information radiator
13. Make sure team is aware of QA progress in general
14. Work to improve quality of code in general
15. Keep an eye on the next iteration
16. Participate in product backlog definition if necessary
17. Be mindful of testing from other points of view
18. Be alert to estimates that didn’t include enough time for bug fixes
19. Review plans for testing with developers so they know what’s coming. Work to resolve discrepancies.
20. Ask developers for feedback on high risk areas of code.
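
Since item 1 keeps coming up, here's the sort of "lightweight script" I mean - a minimal Python sketch with made-up URLs and strings, nothing from our real suite. Each check costs a couple of minutes to write and can run the moment a developer says a page is ready.

    # Hypothetical happy-path smoke checks: fetch a page, look for a string.
    import urllib.request

    CHECKS = [
        ("http://team1.test.example.com/login", "Sign in"),
        ("http://team1.test.example.com/search?q=widgets", "results for"),
    ]

    for url, expected in CHECKS:
        try:
            body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
            status = "PASS" if expected in body else "FAIL"
        except OSError as err:  # connection refused, timeouts, HTTP errors
            status = f"FAIL ({err})"
        print(f"{status}  {url}")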

End of Sprint
1. Participate in review of code with owner
2. Make sure carryover bugs have made it into product backlog
3. Execute end to end and system tests in a stable system

Pre-release
1. Define acceptance tests
2. Pass on information to Ops team about condition of code
3. Execute integration testing with other code likely to be released with this code
4. Verify automation has been created to capture new functionality
5. Identify any tests that can only be run in the live environment, if this hasn't already been done.

Taking project through release
1. Verify pre-release environment is ready for testing
2. Execute automation as required
3. Execute acceptance tests
4. Perform integration, system, end to end, and exploratory testing against pre-release environment with new code
5. Work with developers to get bugs fixed ASAP
6. Work with release manager/scrum masters to make sure code quality is appropriate for release
7. Write up bugs so they can be monitored by all projects in release
8. Participate in release of product to live
9. Participate in testing of fixes to any errors found after release

Wednesday 15 July 2009

User stories: making them small enough

In terms of our company's evolution as we adopt Agile, we've moved toward having the work we need to do expressed as "user stories." But (in addition to learning how to write them) we've had a problem figuring out how long it takes to do a story (its "size") and then figuring out how many stories we can do in a sprint. So the question is - what size is the right size? Several people have been trying to answer that question within my company, and they've come up with this answer (which has been distributed around the team). I've edited it, but I didn't write it.
_____________________________________________________________

Sizeable should be Small.

And the Small is quite important. You want a number of stories in each sprint because:

· It increases the chance of getting as much work "done-done" as possible.
· The more stories you have, the smaller they are, which means more detailed acceptance criteria, which in turn means more accurate estimates and better manageability.

Many of the teams working in my company have a problem breaking things down. Part of this is getting the user stories right so that it is clear what the Product Owner wants to achieve. The other part is having skill in being able to break things into smaller pieces. Some techniques (for web based work) are:

1. Separate user stories per site. Make the changes on one site first, then have a separate story for each of the others.
2. Progressive enhancement. Make a very simple feature in the first story and then add extra sub-features in subsequent stories. (Stories are meant to be independent, but in the case of an enhancement that builds on another story, clearly you have to be less rigid.) For example, you may choose to first implement a new workflow in one story and then make adding validation another story.
3. CRUD (Create Read Update Delete) may not need to be done at the same time. Consider releasing an input form first and then later adding the edit or delete functionality.
4. Data partitions – Some complex projects require the migration or cleaning of large amounts of existing data. One approach is to focus on the most valuable data partition first and then add additional stories to deal with less important cases and edge cases. This avoids projects stalling while the team gets hung up dealing with low-value edge cases. For example, my company had a partner ask us to de-dupe all of the data in our Billing and Contracts DB. We knew that a project had stalled the previous year when trying to de-dupe the data, so we took the approach of just focusing on the Billing Contacts. This was the highest-value group to the PO and was easy for us to de-dupe.
5. Evolution. Sometimes it’s better to start from scratch. But most of the time at my company, we are enhancing an existing piece of functionality. When creating new workflows, there is an option to “evolve” over a number of sprints - basically, take an existing workflow and tackle small stories one at a time.
6. Iterate. If your acceptance criteria sound like they will require a lot of work, sometimes all you need to do is take the criteria and turn them into child stories. When you do this, you then realize how big a single acceptance criterion is and you can add more criteria to it to help with the estimates and everyone’s understanding of the project. Sometimes you might find that you need to break these stories down even further. You can use record cards on a table in the backlog workshop and arrange them into a hierarchy if it helps. Visualizing the problem in this way can really help.
7. Draw pictures. Draw pictures of how new pages/controls/reports might look. You can then use a red marker pen to outline sub-elements and ask yourselves "can this bit of the page be split off into a separate user story?"

Slicing isn’t easy. It takes time to rotate the problem in your mind and come up with different solutions. Sometimes, setting aside some time in a team’s backlog session to explicitly brainstorm and discuss different options helps; sometimes taking it away and thinking about it on your own is the way to go. Some good rules of thumb when slicing are:

1. Have we sliced it small enough so that we can fit more than one story in a sprint (3+ is ideal)?
2. Can this story be released on its own and deliver value to the Product Owner?

Generally brilliant user story presentation (not really about slicing):

http://www.mountaingoatsoftware.com/system/presentation/file/97/Cohn_SDWest2009_EUS.pdf

In depth article on splitting:

http://radio.javaranch.com/lasse/2008/06/13/1213375107328.html

Wednesday 1 July 2009

Agile pretty, agile ugly

In this last week I've seen some stunning examples of Agile working really well, just like I'd expect it to - but also really badly.

First, the good. We've got developers writing automation, which I was reviewing with them in conjunction with an automation QA specialist (who's been using our automation framework for 20 hours a week versus my 5 hours a month) who was providing direct information about our automation standards (i.e. "add in 'verify negative' tests here"). We saw where a link was hard-coded where it should have been dynamically generated - and the developer who's been coding the link was able to work on changing it immediately from our feedback. And the automation was also changed to remove a kludge that had been put in to make it work with the hardcoded link (which didn't work on our uniquely-addressed test servers). Five minutes of cross-team conversation leading to a host of changes and fixes with low overhead - totally cool.
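
For the curious, the hard-coded link problem looks roughly like this (illustrative names, not our actual code). Once the application builds the link from whatever base URL its environment is configured with, the same automation runs unchanged on every uniquely-addressed test server, and the kludge can go away.

    import os

    # Before: breaks on any server that isn't production.
    # link = "http://www.example.com/account/settings"

    # After: the base URL comes from the environment the code is deployed to.
    BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost")

    def settings_link():
        return f"{BASE_URL}/account/settings"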

Now the bad. It's my belief that the fetish for documenting things dynamically and on index cards really shortchanges the value of solid research about the work we are supposed to be doing. Our project for this sprint specified that we make a certain change on two pages, which can be expressed either as .asp or .html pages, so that one of them is the canonical version - but the problem is that we actually have four different ways of accessing these pages (including through redirects from old versions of the pages and via links sent in emails), so there are actually a host of different versions of these essentially identical pages. But the team only had a narrow set of them identified/known to exist when we agreed to do this work, and now there are time considerations for not doing it for the other pages - which dilutes the effectiveness of the project. When you've got people working on areas they don't know very well, this is bound to happen - but I think it's an inherent problem of Agile: underdefining what we need to do in order to achieve speed. Speed and mediocrity - as a QA person, it's not really a happy place to be in, releasing something that meets what was asked for but fails to achieve the spirit of it.

Wednesday 24 June 2009

Amazing differences between Agile teams

I've been moved to a new team as a part of a small reshuffle. I'm very happy about this - it's like being adopted by a family of people that all get along.

Note from today's planning session: it is SO MUCH NICER to come to a planning session when you've already worked out what the tasks are (mostly) for the stories you're estimating - it really reduces the effort in getting the time estimates done because you understand the work so much better.

Friday 22 May 2009

Four months into Agile: a retrospective

Our scrum master has quit, my best friend has been laid off (because the company doesn't need product analysts anymore), and I'm on the team that's considered the worst in the company.

My morale is low.

Friday 13 March 2009

Improving your chances of successfully rolling out Agile by being better prepared at the start

Over the course of the last month plus, I've been very frustrated by how things have gone within my new team - that has been newly made Agile - in a company that's just gone Agile (as of January 6th, a rolling rollout across seven or so teams). Working through this, I see several books in the making - because there is so much information that people need about how to really make this work and I'm not sure if there's enough literature out there to cover it. Yeah, sure, you can hire a coach to help your company make the transition, but it's really expensive, and how do you know you're going to get the right person for the job?

So ... our training, across the department, pretty much consisted of a two-day course that made us "Scrum Masters." Now, this is about the most useless training I've ever had for the money: you aren't actually properly trained to be a "scrum master" (as in "the person who runs interference for a scrum team"), and you're sure as hell not a master of Scrum practice just because you've spent two days learning how to properly scope the amount of work you can do in a sprint. Worse yet, people come out of it thinking they know how to do Agile when in fact they only know how to do the tiniest bit of Agile. Scrum Master training is basically aimed at developers. It ignores requirements gathering, documentation, testing, and releasing a product - and if that's where you are involved in the software lifecycle, you will come out of the training thinking there is no role for you in the new world. But this is Not So.

So, if you're looking to switch your company to Agile, what can you do to best ensure success? I expect there is a lot more to say about this than I will say in just this post, but what I would like to encourage people to do is learn how to build a product backlog. The moment when the team comes together for the first sprint estimation meeting is a month or two too late to be starting this work. The product owners need to learn how to start expressing what they need in a way that can help a team make good estimates, rather than just producing a list of tasks that all require further investigation, and someone needs to make sure the results of this research are somewhere the team can access, so that (for example) the QA folks can work on developing extensive acceptance criteria during their down time in the sprint rather than waiting until the actual point where the work for a sprint is being chosen to decide what the verification points are going to be. (These meetings move fast, and it's difficult to spend the time you need to develop these tests fully when it's holding up moving on to the next item for estimation.)

I think this is where we've critically failed. We didn't have enough in the backlog to start with, which meant we didn't have enough work to prioritize the first day (and we ran out of time rather than running out of work to do), we didn't spend time during the sprint working to further flesh out our backlog, and ultimately our team probably operated at 60% of what our capacity should have been for the sprint. We didn't know where to go to get work, and we didn't have the work prioritized so we could just "grab the next card." Result: a very frustrating first sprint, and, I'm sorry to say, a second sprint that is about to start without a sufficient definition of the work we are going to be doing.

Next up I expect I'll be talking about how to create user stories.

Thursday 5 March 2009

The Eternal Questions of the Spotless Scrum

Much as people question, "Why does bread fall butter side down?" and "Why is something always in the last place you look for it?" I find that Agile has its own eternal questions. The one that is the most problematic is, "Why is it that it's so hard to get business owners to commit enough time to get the work done?"

Thinking about it, there are two things that probably ought to change. The first is in properly training the business owners about the change in their responsibilities. This should be part of the agile ramp-up in an organization (and was a step we skipped), and should include information like how to work the product backlog and how to develop user stories.

The second thing is more difficult to manage and has to do with freeing up their time. It's one thing for the development organization to say they're going Agile; it's another thing for a business owner - who, say, has responsibility for managing a team of (let's imagine) customer service agents - to suddenly make all of his other work go away and sit around "being available" for the team that needs his input rather than being there for the team he actually needs to manage. This is a much more difficult thing to work around, though colocation helps (not that I think a Dev team really wants to sit in a call center); anticipating this by reducing a business owner's other duties is another possibility.

How has your company handled this?

Similarly, I'm curious about initial velocity calculation, for when you're trying to figure out how much work you can get done in your first sprint. We had no estimates of the number of actual hours our team could work during a week, so we just made it up based on what we felt comfortable saying yes to - and wound up 25% under our actual capacity, judging by when we ran out of work. Ideas on how to manage this?

Wednesday 25 February 2009

It's all gone very Lord of the Flies

We're at the end of our second month of the Agile conversion at work and things are ... rough. Many of the senior non-development people who work here are talking about leaving in private discussions. Excluding coding, the work to be done seems excessively trivial. Too many people feel like the skills and specialisms they've honed are no longer getting used, and instead they're frittering away their time on projects that are small in scope mentally as well as physically (if that makes sense). The conclusion is that if the economy were better we'd have already seen people leaving, but they're hanging on - though the resumes are already going out. Even the person who sits next to me and is consistently chipper and "let's look on the bright side of things" is saying that her work life on a daily basis is rotten.

Meanwhile, the atmosphere of "everyone can do any task" has led to an environment I liken to Lord of the Flies - the savages have painted their cheeks and declared themselves to be Product Managers (or UI developers, or Business Analysts) because they went to training where they were Told They Could and no one is stepping in to stop them. So in Sprint Planning meetings I have to listen to people spend two hours arguing over what "done" is because they are all now Agile experts, and in another team all seven people are having to work out what the requirements are for a project because They Are Empowered Now and no one bothered to do the research beforehand (not even enough to do an estimate) because to plan what you're doing before a sprint start is Crypto-Waterfall Apologism, don't you know. Too many people are running out of work to do long before their sprints are over (again with the lack of planning), but we're stuck with overly-long sprints created by executive fiat and fixed resources "because in training they said sharing people is a risk."

I am really hoping that a lot of these things will get ironed out in the next month, but being stuck in the middle of things makes it very difficult to have a good attitude about it. I do feel that for a lot of senior specialists (like myself) Agile is not really offering the kind of career growth and challenges that I want, and I fear that between the abysmal morale and the change in what's required of us, we will be having a substantial amount of staff turnover.

Lesson here: two days of training is not enough (everyone got the same training, from the developers to the scrum masters to the UI developers and QA), and a lot more planning should have been done before we started this process so that we could have hit the ground running and gotten a lot more out of our first sprints, rather than burning so many people out at the very beginning. I would have especially looked at something really intensive for the scrum masters so they knew how to get maximum use out of their teams at the very start, and they could have percolated expectations (and work) back down to their groups so that people knew what to do at the start rather than flailing around like we have been.

Monday 23 February 2009

How hard do people work in an Agile environment?

I have been thinking about some of the basic premises of Agile lately. One of them seems to be that people will work hard if they are empowered to do so, and that you don't need people to monitor anyone's work/boss anyone around under an Agile aegis - people will pick up work on their own.

However, this assumption runs completely counter to another common belief about work: that people are lazy and will do as little as possible to get by.

Now that I'm smack in the middle of it, the question is: which is true?

Well, oddly, what I'm seeing is a mixture. For the developers, apparently the scuttlebutt is that they feel they're now working harder than they ever have before! There's a great pressure to make sure that at the daily standup they're able to show that they've accomplished something and that they can "move a card" from the In Progress to the Ready to Review column, or from the Review to the Done column. I'm not sure how this is different from before. Perhaps it is the small scale of the tasks that they are needing to do that makes this possible - or maybe it's because they're actually being monitored on a daily basis! (Which would support the lazy theory, I think.)

But for the people whose roles have been poorly defined - or for the people on teams where the work to be done has been poorly defined or incorrectly scoped (and finished halfway through our month-long sprints) - I've instead been seeing a kind of lethargy and unwillingness to find work. I think to some extent this is indicative of some morale problems at this point in the game, but I also think that what we're seeing is a complete lack of clarity about how things are supposed to be done right now. We're not all jumping into each other's roles, because we can't; we're not cross-trained appropriately. We don't know how we're supposed to do the planning work we probably should be doing, and because this work wasn't done beforehand, the people who normally execute the planned work are adrift. Many of us are way out of our comfort zones, and we're not all turning into radical go-getters just because there's a vacuum of power and control and we're "supposed" to do everything now. I see the scrum masters not providing guidance, and I see the other team members wasting their hours away waiting for something to happen. Is the business owner's highest priority work happening? I think not.

I wonder sometimes if Agile was created under a certain kind of American optimism about the workplace. Me, I have been seeing a lot of struggling people - but I'm amused to see that the QA team, the one that was consistently worked into the ground before the transition, is actually getting more time to breathe these days, whereas the developers, around whom this process seems to revolve, are the ones who find themselves struggling under the painful searchlight of constant attention.

Wednesday 4 February 2009

QA "practice" evolution - as we approach the end of the first sprint

The first team is approaching the end of its first sprint - their release is scheduled for next week - and the QA team is polishing up what it's doing as we continue our field trials and report back to each other on what's worked and what hasn't. Of the three teams "in the field" right now, two are keeping lists of their verification tests on a shared drive, while the third team has been writing their verification tests on the backs of their index cards. Guess who lost all of their data for one of their work items, including the business requirements? Yep, it was the team that only had a hard copy. We'll now be keeping the verification tests on the shared drive for all groups. This also ensures that if someone has to step in and do the work, there is a record of what has been done. With this week's snow keeping most of the company at home for a day or two, having this kind of record would have been useful; with my being asked to take over a project that hadn't even bothered with writing things up on cards (and yet called itself "Agile," presumably in the Dilbert way, as there was zero documentation produced at all), it would have been invaluable. A record must be kept somewhere.

This also brings up the question of how to handle losing people mid-sprint. Conceptually anyone would take over the work, but our teams aren't cross-functional enough for that to be done. Another concept I've heard is that the entire sprint might be cancelled. I don't think we will want to do this. Currently we don't have any solution for this problem at all. I'll report back as we try to fix it.

As we come close to the release, we're needing to deal with the new process for getting builds out the door. Before, we'd have a test plan for the whole project and a set of test cases flagged to be executed in the pre-staging/integration environment; now we aren't writing any test plans at all, and we're not using the old list of test cases that included the flag for testing in "Int." However, the Operations team had, as a requirement for getting into Int, that a test plan be created! (This is in addition to their being provided a list of the changes to automation - a good plan, I think - and "the code has been passed by the team in the Dev environment," which is a bit vague.) They didn't specify what they wanted, though, and said it was up to us to figure it out.

After some chatting, this is what we decided to do for a test plan. First, we'd have a list of the acceptance tests, which would be built from the list created for testing in the dev environment. Second, we agreed that all bugs found in the Integration environment would actually be written up formally - something we've gotten away from with the Agile testing, but which we all agreed would be important for this environment, in which multiple projects would be sharing the same code and needed to be kept aware of what was going wrong in case of possible overlaps (and also so the Ops team could be made aware of any quality issues). Third, we would identify who would be doing the testing and how long we thought it would take. Fourth, we would communicate the risk level the project posed to the overall functionality of our site. Fifth, we needed to define what counts as a "pass" - even though we only have X number of days to do our testing (this is now strictly limited per the new release process), it's vital for us to be able to say if we think it's not, in fact, ready to be released. This was a standard part of the old release process, and it seems wise not to let it slip by the wayside.

All of this brought up some interesting issues with our release process. Although the development team is now controlling releases into their environment, it doesn't appear that they've actually been refreshing the various environments with the code that's been released to live. This means that when the first team goes into the Integration environment on Monday, it will be testing for the first time with the code from three different releases that have happened since their servers were originally built. As I see it, "integration" should be for integration with other NEW projects that have been created simultaneously, NOT with pre-existing code. It's been a problem since I've been here that we don't do this until just before a release, and it's the cause of most of our major pre-release headaches. I want to see ALL of the Agile development environments updated the day after every release so that we can end this schedule-blowing fault once and for all, but with three days to go before a release, we decided we probably couldn't get much done - especially as the most critical of the three releases is only happening on Saturday. So this is something I'm going to try to push forward at the next all-teams meeting, as a change to our overall process: that part of the acceptance criteria be that the release has been merged and tested against the LATEST release BEFORE going into the integration environment.

Finally, we looked at the question of how to update our automation. We've still got some developers creating automation in Selenium, which won't run in our Integration environment and isn't integrated into our main set of automation (all done with WebInject). While this is its own problem, the real issues are that 1) we need to provide a list of automated tests and 2) there may be conflicts between tests created in different branches. We decided to handle this by having the QA manager and his automation assistant review all automation created before it is added to the set of tests to be run against a release. The goal is to control the possibility of overlapping/conflicting tests, but a nice side effect is that it will give them a chance to winnow the tests down to an appropriately sized set. (With luck this review might also help them see which tests may have been invalidated by changes to the code of a certain project, but this will most likely be revealed through the course of running automation against that project in the development environments.)

Now what we need to do is get the developers trained up on WebInject AND actually get these things written BEFORE we go to live (much less to the integration environment), and then our automation will really be rocking along!

An Agile/Scrum coach

I'm pleased to say that upon returning from my vacation last week, I found out we'd made the decision to get a Scrum coach into the company to help us with our transition to Agile. Sadly, getting one in should have happened before we made the transition, but I get the feeling that management thought our two days of training was going to be adequate. It's become painfully clear that we're not really "there" with what we need to be doing to make this a success - too many people weren't trained in what they needed to do before we started doing it, and even though we've been (desperately) sharing with each other to improve our practice as quickly as possible, things are going wrong and we don't really have the skills to improve what we're doing on our own. It's not quite Lord of the Flies, but there is far too much anarchy and uncertainty, and to me it seems obvious some train wrecks are on the way. (Hopefully these won't involve too many pointy sticks and tribal rituals, but there's no telling right now.)

Of the two interviews I took part in, I was most impressed by the second candidate, a youngish woman I'll call L. While both candidates understood the need for flexibility across a variety of areas, L seemed to have a clear vision of exactly what she'd do to help us get up to speed as fast as possible, including running classes across a variety of work functions and doing personal coaching (and attending meetings such as sprint planning sessions and retrospectives). I also really liked her answer to my question about what business owners should do in preparation for a sprint - a real sticking point for me because, thanks to the way we were trained, people are convinced you are supposed to do NO work for a sprint before the sprint starts, except when you have a sprint zero. There is work to do before a sprint starts, but the business owners were not coached in what they needed to provide prior to their sprint planning session, and we're struggling because of this. Anyway, in addition to the ongoing sprint backlog maintenance, she said we need to be

1) getting the user stories to the right size (no point in having them all be half-a-year-sized - I think by "right" she means they can be broken down into work items by the team and are close to one sprint in size, though perhaps only the first is true)

2) doing work with the product owners (not sure exactly what kind, probably in terms of defining the user stories and rechecking the prioritization of the backlog)

3) making sure the acceptance criteria for the user stories are testable, and getting the product owner to agree to the defined acceptance criteria.



I'm sure there's actually a lot more work that ought to be done in sprint preparation, but I've had very little experience in this area - anyone want to add to the list?

Thursday 29 January 2009

Second week learnings - what is Test Driven Development?

This is about a week and a half late, but still useful - last week was too busy to really say much.

At our review of the evolution of our Agile process, it came up that we're suffering from not having the developers do unit testing. Now, this was a problem before - they were always "too busy" to add it in - but based on my experience with an Agile roll-out at my last company, unit testing was a vital part of the process. This, however, didn't come up in our training. In fact, little was said at all about test driven development and what it means.

Just like last time, it looks like people are misreading TDD. The developers in one team have decided that by creating automation in Selenium - basically, automation that exercises the UI - they will have satisfied the requirement of having development that is tested with automation. It's kind of hard to explain the logic here, but my understanding is that TDD requires setting up unit tests - which are almost always below the layer of the UI - and creating these tests before you start coding (I'm not sure if you have to define them or actually script them beforehand, as I haven't done them myself). UI tests usually can't be written until well after the supporting code is created, and Selenium, like most tools of this sort, is so fragile that just tiny changes to the UI will break its scripts, so those tests can't really be written until after the coding is done. So what we get is not coding driven by well-written unit tests; we're getting coding, and then a pass at creating super-fragile automation that tests the GUI after the real work has been done.
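
To show the distinction as I understand it (with invented names - I haven't practiced TDD myself, so treat this as a sketch, not gospel): the unit tests are written first, would fail if run at that point because the function doesn't exist yet, and only then is the code written to make them pass. Note that no browser appears anywhere; this all sits below the Selenium layer.

    # Step 1 (written first): unit tests for a function that doesn't exist yet.
    # Run at this point, they'd fail - the "red" in red/green/refactor.
    import unittest

    class TestPostcodeValidation(unittest.TestCase):
        def test_accepts_standard_format(self):
            self.assertTrue(validate_postcode("SW1A 1AA"))

        def test_rejects_garbage(self):
            self.assertFalse(validate_postcode("not a postcode"))

    # Step 2 (written second): just enough code to make the tests pass.
    import re

    def validate_postcode(value):
        return bool(re.match(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$", value))

    if __name__ == "__main__":
        unittest.main()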

A second problem that came up is that we have a huge body of test code written (using WebInject) that covers our current code base, and this is what runs when we do our automated regression passes. The developers are now supposed to be adding to our automation to cover the new areas they develop; but somehow some of them have decided that they can use a different program for the automation than the one we have been using in a standardized fashion for a few years. While we might migrate to a new platform someday, we would have to convert all of our other tests for that to happen, and the thought of running two different automation platforms at the same time just seems rather silly. It was determined that, to better handle this, we needed to train the developers in how to use WebInject. While we may change platforms in the future, we need to make sure we're actually meeting our current automation needs until a choice is made to change.

Finally, there were some issues with where the QA piece fits into the user stories. One of the QA team members' Scrum groups had decided to have the QA story be just one card that lasted the whole sprint. We generally agreed that was a bad plan, as it meant you could never tell when the QA was done, or whether the QA was done for an individual item. Better choices were thought to be having a separate QA story for each development story; using colored dots/magnets on a story to indicate when "QA script written/Coding Done/QA complete" had happened; or moving a coded story into a whiteboard column to show that it was ready for QA.

Next up: cross-functional team communication and developing communities of practice.

Tuesday 20 January 2009

The Agile workstyle

Today I am finding the Agile style of working, in which people are constantly interrupting me to talk to me, very irritating. I am operating under deadline but now that we've changed regimes people want to talk about stuff all of the time. I just want to get this work finished, and I have a hard deadline!

Friday 16 January 2009

Creating a new deployment process

The new deployment process has been made public at last. It's difficult to describe what we're doing now: it involves a lot of manual work, and as our source control isn't done very well, we often wind up having files forgotten or overwritten with old versions of code. While for a QA person this is a guarantee of long-term employment, in fact finding the same bugs over and over again is quite dispiriting, as it takes away any feeling of accomplishment with releases: it's almost guaranteed that some issue fixed before - and usually several - will crop up in the new release. We have automated testing, but our code base is so complicated we don't test every little thing with the automation, and the growth of our automated test suite is oh-so-frequently driven by the discovery of the latest new thing we've managed to break and the subsequent addition of an automated check for said bit of code.

The new plan kind of goes like this: first, we get our servers set up; then we have daily automated code drops with some testing and a possible second test server if the code is really unstable; then, in the week that is the release sprint, we merge with any other code waiting to go live and everyone in the team tests like mad (or otherwise works on the release) for the remaining days until we go out.

My problem is this: I don't really see a point at which any time is scheduled to do end-to-end testing of the code before we merge with the rest of the code. Second, there just isn't any contingency built in; it's assumed we won't need more time for testing before the merge, and that dumping a pile of bodies on the work after the merge will always be enough to guarantee we go live on schedule. Third, we're sticking to our silly, rigid deployment timeline. The teams are being told they pretty much have to release every cycle; the release team is anticipating that the code will be ready like clockwork. There's no wiggle room. And on top of all of this, the attitude is that if you point out any possible holes in this process, you're being negative about the conversion to Agile.

Really, this is quite a time we're having here. I do really hope that the automated build deployment gets up to speed quickly, and that it serves to reduce the number of problems we have and the time we spend working on builds that have old or missing code in them. I still think we'll need to learn how to fit testing into the process better and come up with some more flexibility for our release schedules, but we can only fix one thing at a time, and this one is a biggie. Over time, it's my belief that we won't have to have the whole team drop out during the release: release tasks can just be part of the next sprint, and one or two developers can be on them as necessary and then move on to regular development work along with everything else. But in the beginning it makes sense to have the whole team work on the release while we're ironing out the process.

Wednesday 14 January 2009

Learnings from first Product Review meeting

The first team starting Agile actually started their Sprint 1 this week with a day long prioritization meeting. Apparently this went a bit ... interestingly. A Tweeter in the meeting was sending out requests for help during it, and all indications were that around 4 PM things had become quite fractious and unpleasant.

According to a QA member at this meeting, there were three learnings. First, the meetings need to be a lot more structured, with time set to do X and time set to do Y. The user stories should have been prioritized, the estimates needed to be given faster without too much faffing, and the work items should have been broken into tasks (to enable estimation) before the meeting started.

Second, he said people needed to not get bogged down in conversations about trying to solve technical issues during the meeting. Sure, if a business owner has proposed something, and option A will take 4 days while option B will take 1, then coming up with options A and B, presenting them to the owner, and getting her instant approval for one or the other is vital. But these kinds of conversations need to be tightly monitored and focused, and the minute they go beyond the bare minimum you need for business feedback, you've probably gone too far.

(My experience, FYI, was that the first meeting took three to four times as long as it should have, so this kind of painful experience seems not atypical of a first time through.)

Finally, the scrum team would now like to not have the product owners there for the entire meeting. This is because one particular product owner kept pushing back on the estimates, saying they should all be smaller, and told them that the various things they needed to do as part of the process were unnecessary. The team members considered this very disruptive and felt it put so much pressure on them that they couldn't give honest answers. I also heard that this led to the creation of an "us versus them" mentality - or perhaps reflected a pre-existing one.

The people who taught the Scrum course we all took would probably see this as a "pigs versus chickens" thing, where the chickens (business owners) want to run things in the farmyard but it's the pigs who have "skin in the game." I found this a very negative simplification that ignored the fact that the chickens DO have skin in the game, as they will be the ones who get canned if their brilliant ideas don't bear fruit. What's unfortunate is that I see the Agile estimation process as a really vital way for the development team to communicate to the business owners the real costs, in effort, of the choices they make; it gives people the opportunity to say, "Hey, why implement that as a text box when we can do a drop-down box in a third of the time and take care of a bunch of error scenarios while we're at it?"

Unfortunately now this team feels alienated from dealing with the business owners and wants to exclude them from estimation and task list creation, and I think this is a damned shame. Anyone out there have any ideas about how to handle this situation now that the damage has been done?

Monday 12 January 2009

Meeting "52 Pickup"

We're having a bit of trouble, in only our second week, sticking to our community of practice meetings - two of the Agile teams are having meetings today, and it seemed that if we were to share information on how Agile is going, we would do a lot better if the people who were already ramping up with Agile were there to share their experiences with the other members of the group. So I've moved it to tomorrow. Tuesday is probably going to have to be a permanent thing: with 7 teams, at least one person will be at an all-day sprint planning meeting every Monday (even though these are only supposed to happen once a month).

I'm sure there are comments to make about how the first sprint planning meeting went today, but I wasn't at the meeting. The feedback I got was that far more research should have been done to flesh out the ideas before everybody got into a room, and that nobody really knew what they were doing. It does really make me want to switch to Scrum-mastering - I'm sure I could help us get to a better state in our process if I were in the middle of things rather than on the sidelines doing mop-up work and just trying to keep the site going during this big transition.

Also, we're running short on QA resource and we are really getting to "squeezing blood from a stone" as people attempt to find non-existent "free" time in the schedule of the QA team members who've started with the Agile. I am not sure how this is coming as a surprise to the PMs in question, but it is!

Friday 9 January 2009

Sticking to Scrum if it kills us

I just went to the weekly meeting where we discuss the Agile implementation and how things are going with it (the one in which I brought up the need for QA to get DB access again).

I am unhappy that the dogmatic tone that the trainers took about how to do Scrum has filtered down to some fairly senior and influential people. Basically, to do anything in a manner that is contrary to what was recommended by these trainers is being interpreted as "Scrum, but."

The effect of this is that instead of liberating people to find their own ways within their product teams, we're shackling them with executive fiats simply on the basis of "this isn't what the trainers said." And what is frustrating about that is that the trainers were just giving one flavor of Agile, in which every work item and every sprint MUST result in visible, executable code ... and be one month long ... among other pronouncements (my favorite being the one that everyone on the scrum team does ALL work, even though our company is not made up of 100% developers but has project managers, quality assurance analysts, graphic designers, and business analysts among other sorts of specialists - as well as .NET and HTML developers).

Now the team we have that's been doing weekly deployments for some ten months or so is getting pushback that they MUST be on a monthly release cycle, lest we become "Scrum, BUT." And the team that does invisible infrastructure work is being told to rejigger all of their work so that their aim is to produce visible "slices" of product for every release. In essence, we're trying to fix things that aren't broken and make teams blindly adhere to process when there is no reason to do so.

This, to me, seems as profoundly anti-Agile as we could be. It's not about valuing people over process; it's about setting up process as a deity with commandments we must follow. And it's creating even more control from above rather than empowerment for those who are doing the work. It grates on every bone in my body.

In other work areas, the PMs are fighting like mad dogs to not let any of the QA people roll over to 100% Agile until the leftover projects get out the door. I want people to be doing their Agile work without interruption and finding their own rhythm, but I'm really unhappy about the state some of this code is in, and to me, while it's bad not to get Agile going on the schedule it's supposed to be on, it seems worse to get things started by releasing code to live that's going to cause us to lose customers and set off a never-ending series of emergencies for two weeks. Part of me wants to run around and say, "No! How dare you take away the QA person's time from the Agile team! How are we going to make things work if we keep having our time nickel-and-dimed away like this?" But the part of me that hates seeing the quality of our live code get kicked can't quite be shouted down, because there's nothing I hate more than frantic phone calls from business owners telling us about the large account we're about to lose, or have just lost, because of a bunk release. So I'm keeping mum for now.

Thursday 8 January 2009

Keeping track of tasks

It's official - we're currently using the Post-it note method of work tracking, which leaves us at the mercy of broad handbags and the cleaners.

I know it's the preferred method for many, but I HATE not having a record of a work item's description and status that can't accidentally go away. To me it's an invitation to disaster. But, you know, QA - I always expect the worst, and am so rarely let down! That's why I keep my printouts of project plans and test cases for years after a project has gone live; even backup servers can be wiped out, and on one occasion I lost at least a month of work that way. ("But I thought that was just test data!" "Yeah ... all of our test plans and test cases ...")

And the whiteboards - the doom of the open office! We just don't have enough wall space here. Or privacy, or quiet.

Seems like it's time for my 3 PM cup of tea.

How to deal with the code stability issue

I was worrying Monday about the problems with testing code under the "continuous integration" regime: if we're testing one item and we think we're done, but then it gets changed again (say, affected by another area of code), how does QA know it's "done done"? Under the version of the build process I saw on Monday, this was a real possibility: QA was basically expected to test in what I saw as a dev "sandbox," which is extremely unstable and only a step up from testing against a developer's own box - no control over changes and data. However, the version of the deployment workflow I saw yesterday had a new "optional deploy to second stable INT server for QA" bubble on it next to the "automated daily build on team branch" bubble, which makes me think that someone is actually taking our concerns into account. This is cheering.

I've also realized this week that it's time to get QA access to the databases again. I think this was taken away as part of an over-enthusiastic rollout of Sarbanes-Oxley - we can't possibly see any confidential data if we don't have access to the DB, right? However, every professional QA shop I've worked at has expected the test team to be doing verifications in the DB, and the fact that we aren't doing so here has, to me, long pointed to a hole in our processes. Apparently we had it at some point before I showed up; it's time to get it back. I'll be bringing this up at the Agile Implementation review meeting on Friday and see if I can get traction. The developers want to be able to control what's on the boxes so they can do their work better - it's time for us to be able to do this, too, at least for the data. Just read access would be sufficient! (I wonder if there's an SQL licensing problem associated with this? If there wasn't before, there might be now ...)

Wednesday 7 January 2009

Building morale one spoon of sugar at a time

Some developers just came over to my desk to follow up on some defects I'd sent them. For one, it turns out I'd read an ambiguous requirement differently than they had (though when we discussed it, I realized their interpretation was correct), and we were able to sort things out and pass the bug; for the second, the developer was basically trying to say, "It wasn't me!" and I was able to say, right away, that I was sure it wasn't and that there were just some deployment issues to get fixed (my suspicion being either that some code wasn't merged correctly or that something had been overwritten at some point).

And then I gave them both chocolates from the huge pile I have next to my desk (Ghirardelli, marzipan, and Hotel Chocolat), they teased me about my tea collection, and we had a nice little chat for a minute or two.

It was nice to have them come to me - it does seem a much more Agile way to handle things - though with five more items awaiting my tender mercies, I'd rather not be running back and forth for everything, but rather just cranking through this big pile as fast as possible.

WHEW. Time for chocolate for me, I think. I'll pull from the secret stash of Vosges Haut Chocolat, because I deserve it, and stale chocolate is a crime.

Filling in the holes in the Agile process

Today one of the testers has been asked to spend hours and hours updating their test cases for a project that's going out at the end of the month so that they can be used as the documentation for training the team that's going to use this product.

This bothers me for a few reasons. First, I don't think test cases should be used for training. We write them quickly and with the assumption of considerable area expertise; using them as training material requires considerable rework (as in what's happening now) and is likely to lead to holes in training coverage, given the differing needs of QA people and end users. Second, we actually need this person to do other, higher-priority work - but as the team that's doing this project "owns" his time, I can't take him off of it. I mean, maybe I could, but it doesn't seem like the right fight for today.

Third, this points out to me a potential hole in Agile. If we're producing pretty much no documentation at all, how is this kind of need supposed to be handled? For this particular project, what I see is that there wasn't proper business analyst support put onto this task; but in the future, I don't see this being done at all. QA won't really have much of anything written down, and certainly not enough to form the basis for a training program. Whose responsibility is it to get this done going forward?

Finally, in a meeting yesterday to deal with the scheduling problems caused by all of the last waterfall projects trying to make it out at the same time, I was dressed down for being negative "here at the start of the year, and really bringing down the attitude as we move into our new Agile workplace." It's really horrible for a QA person to be told to not be negative lest morale be affected; it's our job to point out when things aren't going right, and I'd say calling attention to a project being understaffed during the time of its release is a pretty valid thing to do and exactly what my job requires I do so that our product development work can continue without snags. To me, hearing "don't be negative" is pretty much like hearing someone tell me to just keep my mouth shut in general, which means I lose my effectiveness as a QA person and as a team leader. We haven't made the transition to Agile yet and the QA team is being seriously squeezed during this time; isn't it right to try to get this work done, and is it unreasonable that the team at the end of this process is the one that's under pressure? I guess in development land it all looks like the Land of Oz, and for us, we're still at the Wicked Witch's castle trying to get out - but pointing out the gaping chasm we're about to step into "is bad for morale."

Sigh. I hate feeling unappreciated.

Tuesday 6 January 2009

Breaking code with a feather touch, and: too many meetings

So under the new team structure I'm supposed to be doing loads more manual testing (as opposed to team management, which is what I really enjoy and why I'm a "lead" and not a "senior technical tester" or something like that) ...

and I have a huge load of work to get through this week, but haven't been able to touch it until 1) the environment was upgraded with the new code and a fix for the financial servers, and 2) I took care of all of that management stuff, which hasn't really gone away ...

and the upgrade changed a financial transaction thing from "using the live servers instead of the test servers" (which is really seriously wrong) to "this transaction isn't allowed" (which means I can't even start testing) ...

and then the first defect I tried to test that wasn't about transactions also failed. So even though I wasn't able to get to "real" work until 3 PM (3 PM!) in my work day, what I did work on pretty much crumbled when I touched it.

On the other hand, you know, if everyone coded stuff correctly, I'd be out of a job. I might as well complain that it's cold outside, which it is, but instead I'll complain about having another meeting (to deal with finding enough warm bodies to get through the work backlog by our arbitrary "start people working Agile and stop continuing to support the old work flow" deadline) that will keep me from making more code go 'splodey, because at least when I fail defects early enough, there's a chance they'll get fixed in time for the release.

First scrum - though not with new team

Today I had my first official Scrum meeting - or so it was billed. All things considered, it was a status meeting for the little set of bugs we're supposed to get out the door before I can get onto my actual Agile team. The transition is continuing to be difficult; one developer, when asked if he was working on an item that was supposed to be in the release, said, well, he wasn't, as no one had said he should be. It's frustrating that he doesn't know if he's supposed to be doing work, and it's also frustrating that I have some work that I don't know whether or not I'm supposed to be working on. The communication issues between the various groups are holding us back right now. At our level, we're not sure which area we're supposed to be working on - our new projects or the old ones. At the same time, there are clearly business owners elsewhere in the business who expect that work is being done for them ... but the information is not being shared.

Meanwhile the rest of the projects that are trying to get out the door before the Great Agile Wall Falls are scrambling for resources - but people are converting to the Agile teams so fast I can't really commit them to working on the other projects. I'm sure this won't all be a big car crash, but I'm frustrated that we can't coordinate things as smoothly as we normally would. Can things please be simpler once we're all on the new methodology?

Monday 5 January 2009

Community of Practice meeting or bitch session?

Today we had our first post-transition "community of practice" meeting. Because my functional area team (QA) has been divvied up among several product teams, we're in dire need of meetings that enable us to share our knowledge about our specific specialization. Even more importantly, we need to share information about how we're implementing Agile across the different teams. While one guy said that this was so we could standardize our practice, in fact I'm not too concerned about having us all do the same thing right now; this is the time for "let[ting] a hundred flowers bloom/And a hundred schools of thought contend." Our practices can grow organically at present and be customized to the needs of each team, but it is really vital that we let each other know about anything we've discovered that really works - and anything that just doesn't (or something we forgot about).

Currently the question is: what is the role of automation in our new QA frontier? Our current automation framework only allows automation to be created after the code is complete and the GUI done; this means it's not really suitable for the kind of QA practice described by Lisa Crispin, in which QA spends most of its time creating automated tests in anticipation of Dev dropping the code to them. We're also inhibited by the fact that we're mostly manual testers.

The guy who's the QA rep on the team that's first out of the chute, though, is trained as a developer, and he can do a lot more. His team is looking at doing their unit test automation in Selenium, with the QA role set up as "running automation and doing some manual tests." However, it's my belief the developers will be happy with the tiny bit of testing done by their unit tests and won't attempt the kind of deep testing we would normally do, with lots of negative-case scenarios and DB checks. And with only one day set aside for "testing merged code" before live, it's my fear we won't have nearly enough time to do deep testing before we're told to stop, and that the code will be far too unstable for us to ever get a solid picture of its condition. Our code base is both brittle and unwieldy, and I don't think it's really equipped to handle lots of rapid changes. Refactoring, however, does not appear to be in the cards.

A final problem with our automation is that the platform we're currently using isn't set up to run against seven modestly different code bases all running in the same environment. Conceptually, we're supposed to be running this stuff over and over again to make sure each deployment isn't breaking things, but we've only got it set up to run against one instance per environment. Looking at this issue, I see that we've got two people assigned at 50% each to doing nothing but making our automation work; I expect the next month is going to be chock-full of challenges for them.
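
For what it's worth, the fix is conceptually simple - parameterize the one thing that varies (the target server) and loop over the team instances. Here's a sketch, with invented hostnames and a runner script that our framework doesn't actually have yet:

    # Hypothetical: run the same suite against each team's instance in turn.
    import subprocess

    TEAM_HOSTS = [f"team{n}.test.example.com" for n in range(1, 8)]

    for host in TEAM_HOSTS:
        # Assumes a runner that accepts the target host as an argument --
        # building that is exactly the gap our two automation people will be
        # closing over the next month.
        subprocess.run(["python", "run_suite.py", "--base-url", f"http://{host}"])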

Damming/damning the waterfall

The conversion is hitting its first snag - the backlogged projects are clogging up resources and making it impossible for us to "get finished and move on." My group (QA) is the one most on the hook for getting work finished so it can make it out the door - and with everyone split up into their new teams, I'm suffering from not being able to pick up spare resource (usually freed up when a project has been held up for one reason or another) and drop it into another project to help get it finished.

It's looking like the projects that we were "committed" to finishing before we transitioned to Agile are going to be holding up our transition, in part because over the month of December none of them managed to get deployed slash handed off for testing. (Rumor: QA is the bottleneck. Truth: just because we haven't tested it doesn't mean we're being slow or holding things up!)
The other option for managing this is to somehow shoehorn them into Agile, but what's bad about that is that it will guarantee they're delivered far later than promised (always our problem - there should never be these promises!) and that the people who have been working on them aren't available to do so (as they're on different teams), which will lead to a big flustercluck of trying to do handoffs and dealing with Ye Olde Learning Curve. Bleah. Time to hit the box of Vosges Haut Chocolat.

Starting the Agile Testing Adventure

Today's the day - we've moved the desks, and the first of the seven teams is supposed to start doing things "Agile" today.

However, we don't really seem to be ready. The plan was that this was going to be the zero week for that team, a ramp up week, but what with the end of the Christmas holidays and a huge desk move (and unpacking, and piles of email), I'd be really shocked if they'd had a stand up or if their scrum master really had a plan of attack.

My team is a late starter - we're rolling things out team by team, week by week, and we're not supposed to start going until almost the end of January. Typically, the week we're starting is a week I'm scheduled to be on holiday - at least for the first two days.

Until then, I and most of my former teammates are still operating under the old development methodology, which means there's a pile of work to do and I need to get on it.

Welcome to the New Year!