Monday 5 January 2009

Community of Practice meeting or bitch session?

Today we had our first post-transition "community of practice" meeting. Because my functional area team (QA) has been divvied up among several product teams, we badly need meetings where we can share knowledge within our specialization. Even more importantly, we need to share information about how we're implementing Agile across the different teams. One guy said this was so we could standardize our practice, but in fact I'm not too concerned about having us all do the same thing right now; this is the time for "let[ting] a hundred flowers bloom/And a hundred schools of thought contend." Our practices can grow organically for now, customized to the needs of each team, but it is vital that we let each other know about anything we've discovered that really works - and anything that just doesn't (or anything we've forgotten about).

Currently the question is: what is the role of automation in our new QA frontier? Our current automation framework only allows tests to be created after the code is complete and the GUI is done; that makes it unsuitable for the kind of QA practice Lisa Crispin describes, in which QA spends most of its time writing automated tests in anticipation of Dev dropping the code to them. We're also hampered by the fact that we're mostly manual testers.
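
To make that concrete, here's a minimal sketch (in JUnit) of that test-first style; the DiscountCalculator class and its 10%-over-$100 rule are invented for illustration. The idea is that QA writes the test against an interface agreed with Dev before the code exists, and the test becomes the executable spec run against every drop:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical sketch of a Crispin-style acceptance test. In practice QA
    // would write the tests first, against an interface agreed with Dev; the
    // DiscountCalculator below stands in for the code Dev later delivers.
    public class DiscountPolicyTest {

        // Stand-in for the delivered code; the name and the
        // 10%-over-$100 rule are invented for this example.
        static class DiscountCalculator {
            double priceAfterDiscount(double amount) {
                return amount > 100.00 ? amount * 0.90 : amount;
            }
        }

        @Test
        public void ordersOverOneHundredGetTenPercentOff() {
            assertEquals(135.00,
                    new DiscountCalculator().priceAfterDiscount(150.00), 0.001);
        }

        @Test
        public void ordersAtOrBelowThresholdPayFullPrice() {
            assertEquals(99.99,
                    new DiscountCalculator().priceAfterDiscount(99.99), 0.001);
        }
    }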

The QA rep on the team that's first out of the chute, though, is trained as a developer, and he can do a lot more. His team is looking at doing their unit test automation in Selenium, with the QA role set up as "running automation and doing some manual tests." I believe, however, that the developers will be satisfied with the thin coverage their unit tests provide and won't attempt the kind of deep testing we would normally do, with lots of negative-case scenarios and DB checks. And with only one day set aside for "testing merged code" before go-live, I fear we won't have nearly enough time for deep testing before we're told to stop, and that the code will be far too unstable for us to ever get a solid picture of its condition. Our code base is both brittle and unwieldy, and I don't think it can really handle lots of rapid changes. Refactoring, however, does not appear to be in the cards.
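
For what it's worth, here's roughly what one of those Selenium checks might look like, using the Selenium RC Java client (see the comment below); the app URL, locators, and credentials are all made up. Note how little it actually proves - it's a happy-path check, not the negative-case and DB testing I'm worried about losing:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    // Hypothetical GUI-level smoke check. Assumes a Selenium RC server is
    // running on localhost:4444; the URL and element IDs are invented.
    public class LoginSmokeTest {

        public static void main(String[] args) {
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://app.example.com/");
            selenium.start();
            try {
                selenium.open("/login");
                selenium.type("id=username", "testuser");
                selenium.type("id=password", "secret");
                selenium.click("id=submit");
                selenium.waitForPageToLoad("30000");
                // Happy-path assertion only: this proves login works,
                // not that bad input is rejected or the DB is correct.
                if (!selenium.isTextPresent("Welcome, testuser")) {
                    throw new AssertionError("Login did not reach welcome page");
                }
            } finally {
                selenium.stop();
            }
        }
    }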

A final problem with our automation is that the platform we're currently using isn't set up to run against seven modestly different code bases all running in the same environment. Conceptually, we're supposed to be running this stuff over and over to make sure each deployment isn't breaking things, but we've only got it set up to run against one instance per environment. Looking at this issue, I see we've got two people assigned 50% each to doing nothing but making our automation work; I expect the next month is going to be chock full of challenges for them.
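
One way out, assuming our suite can be parameterized at all, might be to stop hard-coding a single instance and feed the target in from outside. A minimal sketch; the property name and URL are hypothetical:

    // Hypothetical sketch: read the target instance's base URL from a
    // system property so the same suite can point at any of the seven
    // code bases, e.g. -Dtarget.baseUrl=http://env1.example.com/productA/
    public class TestTarget {

        public static String baseUrl() {
            String url = System.getProperty("target.baseUrl");
            if (url == null) {
                throw new IllegalStateException(
                        "Set -Dtarget.baseUrl to the product instance under test");
            }
            return url;
        }
    }

Run the suite once per product instance - seven invocations per deployment - and the same tests cover all seven code bases without seven copies of the framework.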

1 comment:

  1. Hopefully you'll be using Selenium RC, not just plain Selenium. When we shifted from the old way to RC it made things *worlds* easier.
