Wednesday 4 February 2009

QA "practice" evolution - as we approach the end of the first sprint

The first team is approaching the end of its first sprint - their release is scheduled for next week - and the QA team is polishing up what it's doing as we continue our field trials and report back to each other on what's worked and what hasn't. Of the three teams "in the field" right now, two are keeping lists of their verification tests on a shared drive, while the third has been writing its verification tests on the backs of its index cards. Guess who lost all of their data for one of their work items, including the business requirements? Yep, the team that only had a hard copy. From now on, every group will keep its verification tests on the shared drive. This also ensures that if someone has to step in and take over the work, there's a record of what has been done. With this week's snow keeping most of the company at home for a day or two, that kind of record would have been useful; and when I was asked to take over a project that hadn't even bothered to write things up on cards (yet called itself "Agile," presumably in the Dilbert sense, since no documentation was produced at all), it would have been invaluable. A record must be kept somewhere.

This also raises the question of how to handle losing people mid-sprint. In theory anyone on the team could take over the work, but our teams aren't cross-functional enough for that yet. Another option I've heard of is cancelling the entire sprint, which I don't think we'd want to do. Right now we don't have any solution for this problem at all; I'll report back as we try to fix it.

As we get closer to the release, we need to deal with the new process for getting builds out the door. Before, we'd have a test plan for the whole project and a set of test cases flagged for execution in the pre-staging/integration environment; now we aren't writing test plans at all, and we're not using the old list of test cases with its flag for testing in "Int." However, the Operations team has made a test plan a requirement for getting into Int! (This is in addition to being given a list of the changes to the automation - a good requirement, I think - and confirmation that "the code has been passed by the team in the Dev environment," which strikes me as a bit vague.) They didn't specify what they wanted in the plan, though, and said it was up to us to figure it out.

After some discussion, this is what we decided a test plan should contain. First, a list of the acceptance tests, built from the list created for testing in the dev environment. Second, we agreed that all bugs found in the Integration environment would be written up formally - something we've moved away from with Agile testing, but which we all agreed matters in an environment where multiple projects share the same code and need to know what's going wrong in case of overlaps (and so the Ops team is aware of any quality issues). Third, we should identify who would be doing the testing and how long we thought it would take. Fourth, we should communicate the level of risk the project poses to the overall functionality of our site. Fifth, we needed to define our quality bar for a "pass" - even though we only have X number of days to do our testing (now strictly limited under the new release process), it's vital that we can say if we think the release is not, in fact, ready to go out. This was a standard part of the old release process, and it seems wise not to let it fall by the wayside.

All of this brought up some interesting issues with our release process. Although the development team now controls releases into its own environment, it doesn't appear that they've actually been refreshing the various environments with the code that's been released to live. This means that when the first team goes into the Integration environment on Monday, it will be testing for the first time against the code from three different releases that have happened since their servers were originally built. As I see it, "integration" should mean integrating with other NEW projects created at the same time, NOT with pre-existing code. Not doing that until just before a release has been a problem for as long as I've been here, and it's the cause of most of our major pre-release headaches. I want to see ALL of the Agile development environments updated the day after every release so we can end this schedule-blowing fault once and for all, but with only three days to go we decided we probably couldn't get much done - especially as the most critical of the three releases isn't happening until Saturday. So this is something I'm going to push at the next all-teams meeting as a change to our overall process: part of the acceptance criteria should be that the project has been merged and tested against the LATEST release BEFORE going into the integration environment.

Finally, we looked at the question of how to update our automation. We've still got some developers creating automation in Selenium, which won't run in our Integration environment and isn't integrated into our main set of automation (all done with WebInject). While that is its own problem, the real issues are that 1) we need to provide a list of automated tests and 2) there may be conflicts between tests created in different branches. We decided to handle this by having the QA manager and his automation assistant review all new automation before it is added to the set of tests run against a release. The goal is to control the possibility of overlapping or conflicting tests, but a nice side effect is that it gives them a chance to winnow what gets written down to an appropriately sized set. (With luck the review might also help them spot tests that have been invalidated by code changes in a given project, but that will most likely come out in the course of running the automation against that project in the development environments.)
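For the developers who haven't seen WebInject before, a test case file is just XML: each case is an HTTP request plus strings that must (or must not) appear in the response. Here's a minimal sketch of the format - the URLs and match strings below are made up for illustration, not taken from our actual suite:

    <testcases repeat="1">

    <!-- Cases run in order; WebInject marks each one pass/fail
         based on the verify strings matching the HTTP response. -->
    <case
        id="1"
        description1="Home page loads"
        url="http://www.example.com/"
        verifypositive="Welcome"
        verifynegative="Internal Server Error"
    />

    <case
        id="2"
        description1="Search returns results"
        url="http://www.example.com/search?q=widgets"
        verifypositive="Search results"
        verifynegative="Internal Server Error"
    />

    </testcases>

A handy side effect is that a file like this, with its ids and descriptions, is more or less the "list of automated tests" that Ops is asking us to provide.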

Now what we need to do is get the developers trained up on WebInject AND actually get these tests written BEFORE we go to live (much less to the integration environment), and then our automation will really be rocking along!
