Thursday 29 January 2009

Second week learnings - what is Test Driven Development?

This is about a week and a half late, but still useful - last week was too busy to really say much.

At our review of the evolution of our Agile process, it came up that we're suffering from not having the developers do unit testing. Now, this was a problem before - they were always "too busy" to add it in - and based on my experience with an Agile roll-out at my last company, unit testing is a vital part of the process. This, however, didn't come up in our training. In fact, little was said at all about test driven development and what it means.

Just like last time, it looks like people are misreading TDD. The developers on one team have decided that by creating automation in Selenium - basically, automation that drives the UI - they will have satisfied the requirement of having development that is tested with automation. It's a little hard to explain the logic here, but my understanding is that TDD requires setting up unit tests - which almost always sit below the layer of the UI - and creating those tests before you start coding (I'm not sure whether you have to define them or actually script them beforehand, as I haven't done them myself). UI tests usually can't be written until well after the supporting code is created, and Selenium, like most tools of this sort, is so fragile that tiny changes to the UI will break its scripts, so those scripts can't really be written until after the coding is done. So what we get is not coding driven by well-written unit tests; we're getting coding, and then a pass at creating super-fragile automation that tests the GUI after the real work has been done.
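To make the distinction concrete, here's the shape of a test-first unit test. I've sketched it in Python with a made-up discount function (our own code is .NET, where NUnit would play the same role), so treat it as an illustration rather than anything from our code base:

    import unittest

    # Hypothetical function under test. Under TDD, the tests below are
    # written first and fail ("red") until this code makes them pass ("green").
    def apply_discount(price, percent):
        """Return price reduced by percent, rejecting a bad percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_basic_discount(self):
            self.assertEqual(apply_discount(100.00, 25), 75.00)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.00, 150)

    if __name__ == "__main__":
        unittest.main()

Note what's absent: no browser, no page, no GUI. The test pins down behavior before any screen exists, which is exactly the claim a Selenium script can never make.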

A second problem that came up is that we have a huge body of test code, written with WebInject, that covers our current code base; this is what runs during our automated regression passes. The developers are now supposed to be extending this automation to cover the new areas they develop, but somehow some of them have decided they can use a different tool than the one we've standardized on for the past few years. While we might migrate to a new platform eventually, we'd have to convert all of our existing tests for that to happen, and the thought of running two different automation platforms at the same time just seems rather silly. It was determined that the best way to handle this is to train the developers in how to use WebInject. We may change platforms in the future, but we need to make sure we're actually meeting our current automation needs until a decision is made to change.
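For anyone who hasn't seen it, WebInject drives tests at the HTTP level rather than through a browser. A rough Python equivalent of the kind of check it runs might look like the sketch below (I'm using the requests library; the URL and expected strings are invented):

    import requests

    # One WebInject-style check: hit a URL over plain HTTP and verify the
    # response. No browser and no GUI locators are involved.
    BASE_URL = "http://int-server.example.com"  # hypothetical test server

    def check_login_page():
        resp = requests.get(BASE_URL + "/login", timeout=10)
        assert resp.status_code == 200, "login page did not return 200"
        assert "Sign in" in resp.text, "expected page text missing"
        assert "Server Error" not in resp.text, "error text found in response"

    if __name__ == "__main__":
        check_login_page()
        print("login page check passed")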

Finally, there were some issues with where the QA piece fit into the user stories. One of the QA team's Scrum groups had decided to make QA a single story card that lasted the whole sprint. We generally agreed that was a bad plan, as it meant you could never tell when the QA was done, or whether it was done for an individual item. Better choices were thought to be having a separate QA story for each development story; using colored dots/magnets on a story to indicate when "QA script written/coding done/QA complete" had happened; or moving a coded story into a whiteboard column to show that it was ready for QA.

Next up: cross-functional team communication and developing communities of practice.

Tuesday 20 January 2009

The Agile workstyle

Today I am finding the Agile style of working, in which people constantly interrupt me to talk, very irritating. I am operating under deadline, but now that we've changed regimes people want to talk about stuff all of the time. I just want to get this work finished, and I have a hard deadline!

Friday 16 January 2009

Creating a new deployment process

The new deployment process has been made public at last. It's difficult to describe what we're doing now: it involves a lot of manual work, and since our source control isn't managed very well, we often wind up with files forgotten or overwritten with old versions of code. While for a QA person this is a guarantee of long-term employment, finding the same bugs over and over again is quite dispiriting; it takes away any feeling of accomplishment with a release when it's almost guaranteed that some previously fixed issue - usually several - will crop up again. We have automated testing, but our code base is so complicated that we don't cover every little thing with it, and growth of our automated suite is oh-so-frequently driven by the discovery of the latest new thing we've managed to break, followed by the addition of an automated check for said bit of code.

The new plan goes roughly like this: first, we get our servers set up; then we have daily automated code drops, with some testing and a possible second test server if the code is really unstable; then, in the week of the release sprint, we merge with any other code waiting to go live, and everyone on the team tests like mad (or otherwise works on the release) for the remaining days until we go out.

My problem is this: first, I don't see any point at which time is scheduled for end-to-end testing of the code before we merge with the rest of the code. Second, there isn't any contingency built in: it's assumed we won't need more time for testing before the merge, and that throwing a pile of bodies at the work after the merge will always be enough to guarantee we go live on schedule. Third, we're sticking to our silly, rigid deployment timeline. The teams are being told they pretty much have to release every cycle; the release team is anticipating that the code will be ready like clockwork. There's no wiggle room. And on top of all of this, the attitude is that if you point out any possible holes in this process, you're being negative about the conversion to Agile.

Really, this is quite a time we're having here. I do hope the automated build deployment gets up to speed quickly, and that it quickly reduces the number of problems we have and the time we spend working on builds with old or missing code in them. I still think we'll need to learn how to fit testing into the process better, and come up with more flexibility in our release schedules, but we can only fix one thing at a time, and this one is a biggie. Over time, it's my belief that the whole team won't have to drop everything during the release: release tasks can just be part of the next sprint, with one or two developers on them as necessary before moving back to regular development work. But in the beginning, it makes sense to have the whole team work on the release while we're ironing out the process.

Wednesday 14 January 2009

Learnings from first Product Review meeting

The first team starting Agile actually began their Sprint 1 this week with a day-long prioritization meeting. Apparently this went a bit ... interestingly. One attendee was tweeting requests for help during the meeting, and all indications were that around 4 PM things had become quite fractious and unpleasant.

According to a QA member at this meeting, there were three learnings. First, the meetings need to be a lot more structured, with time set aside to do X and time set aside to do Y. The user stories should have been prioritized, the estimates given faster without too much faffing, and the work items broken into tasks (to enable estimation) before the meeting started.

Second, he said people needed to not get bogged down in trying to solve technical issues during the meeting. Sure, if a business owner has proposed X, then coming up with options A and B - where A will take 4 days and B will take 1 - presenting them to the owner, and getting her instant approval for one or the other is vital. But these kinds of conversations need to be tightly monitored and focused; the minute they go beyond the bare minimum you need for business feedback, you've probably gone too far.

(My experience, FYI, was that the first meeting took three to four times as long as it should have, so this kind of painful experience seems not atypical of a first time through.)

Finally, the Scrum team would now like to not have the product owners there for the entire meeting. This is because one particular product owner kept pushing back on the estimates, saying they should all be smaller, and told the team that the various things they needed to do as part of the process were unnecessary. The team members considered this very disruptive and felt it put so much pressure on them that they couldn't give honest answers. I also heard that this led to the creation of an "us versus them" mentality - or perhaps reflected a pre-existing one.

The people who taught the Scrum course we all took would probably see this as a "pigs versus chickens" thing, where the chickens (business owners) want to run things in the farmyard but it's the pigs who have "skin in the game." I found this a very negative simplification that ignores the fact that the chickens DO have skin in the game: they're the ones who get canned if their brilliant ideas don't bear fruit. What's unfortunate is that I see the Agile estimation process as a really vital way for the development team to communicate to the business owners the real costs, in effort, of the choices they make; it gives people the opportunity to say, "Hey, why implement that as a text box when we can do a drop-down in a third of the time and take care of a bunch of error scenarios while we're at it!"

Unfortunately, this team now feels alienated from the business owners and wants to exclude them from estimation and task-list creation, and I think that's a damned shame. Anyone out there have any ideas about how to handle this situation now that the damage has been done?

Monday 12 January 2009

Meeting "52 Pickup"

We're having a bit of trouble, in only our second week, sticking to our community of practice meetings - two of the Agile teams are in meetings today, and it seemed that if we're going to share information on how Agile is going, we'd do a lot better with the people already ramping up on Agile in the room to share their experiences with the rest of us. So I've moved it to tomorrow. Tuesday is probably going to have to be a permanent thing: with 7 teams, at least one person will be at an all-day sprint planning meeting every Monday (even though these are only supposed to happen once a month).

I'm sure there are comments to make about how the first sprint planning meeting went today, but I wasn't at the meeting. The feedback I got was that far more research should have been done to flesh out the ideas before everybody got into a room, and that nobody really knew what they were doing. It really does make me want to switch to Scrum-mastering - I'm sure I could help us get to a better state in our process if I were in the middle of things rather than on the sidelines doing mop-up work and just trying to keep the site going during this big transition.

Also, we're running short on QA resource, and we're really getting to "squeezing blood from a stone" territory as people attempt to find non-existent "free" time in the schedules of the QA team members who've already started on Agile. I'm not sure how this is coming as a surprise to the PMs in question, but it is!

Friday 9 January 2009

Sticking to Scrum if it kills us

I just went to the weekly meeting where we discuss the Agile implementation and how things are going with it (the one in which I brought up the need for QA to get DB access again).

I am unhappy that the dogmatic tone the trainers took about how to do Scrum has filtered down to some fairly senior and influential people. Basically, doing anything in a manner contrary to what these trainers recommended is being interpreted as "Scrum, but."

The effect of this is that instead of liberating people to find their own ways within their product teams, we're shackling them with executive fiats simply on the basis of "this isn't what the trainers said." And what's frustrating about that is that the trainers were giving just one flavor of Agile, in which every work item and every sprint MUST result in visible, executable code ... and every sprint must be one month long ... among other pronouncements (my favorite being that everyone on the Scrum team does ALL kinds of work, even though our company is not 100% developers but includes project managers, quality assurance analysts, graphic designers, and business analysts, among other specialists - as well as the .NET and HTML developers).

Now the team that's been doing weekly deployments for some ten months or so is getting pushback that they MUST be on a monthly release cycle, lest we become "Scrum, BUT." And the team that does invisible infrastructure work is being told to rejigger all of their work so that it aims to produce visible "slices" of product for every release. In essence, we're trying to fix things that aren't broken, and making teams blindly adhere to process when there's no reason to do so.

This, to me, seems as profoundly anti-Agile as we could be. It's not about valuing people over process; it's about setting up process as a deity with commandments we must follow. And it's creating even more control from above rather than empowerment for those who are doing the work. It grates on every bone in my body.

In other work areas, the PMs are fighting like mad dogs to keep any of the QA people from rolling over to 100% Agile until the leftover projects get out the door. I want people to be doing their Agile work without interruption and finding their own rhythm, but I'm really unhappy about the state some of this code is in. While it's bad not to get Agile going on schedule, it seems worse to start things off by releasing code to live that's going to lose us customers and cause a never-ending series of emergencies for two weeks. So part of me wants to run around and say, "No! How dare you take away the QA person's time from the Agile team! How are we going to make things work if we keep having our time nickel-and-dimed away like this?" But the part of me that hates seeing the quality of our live code take a kicking wins out, because there's nothing I hate more than frantic phone calls from business owners telling us about the large account we're about to lose - or have just lost - because of a bunk release. So I'm keeping mum for now.

Thursday 8 January 2009

Keeping track of tasks

It's official - we're currently using the Post-it note method of work tracking, which leaves us at the mercy of broad handbags and the cleaners.

I know it's the preferred method for many, but I HATE not having a record of a work item's description and status that can't accidentally go away. To me it's an invitation to disaster. But, you know, QA: I always expect the worst - and am so rarely let down! That's why I keep my printouts of project plans and test cases for years after a project has gone live - even backup servers can be wiped, and on one occasion I lost at least a month of work that way. ("But I thought that was just test data!" "Yeah ... all of our test plans and test cases ...")

And the whiteboards - the doom of the open office! We just don't have enough wall space here. Or privacy, or quiet.

Seems like it's time for my 3 PM cup of tea.

How to deal with the code stability issue

I was worrying Monday about the problems with testing code under the "continuous integration" regime: if we're testing one item, and we think we're done, and then it gets changed again (say, affected by another area of code), how does QA know it's "done done"? Under the version of the build process I saw on Monday, this was a real possibility: QA was basically expected to test in what I saw as a dev "sandbox," which is extremely unstable and only a step up from testing against a developer's own box - no control over changes or data. However, the version of the deployment workflow I saw yesterday had a new "optional deploy to second stable INT server for QA" bubble next to the "automated daily build on team branch" bubble, which makes me think someone is actually taking our concerns into account. This is cheering.

I've also realized this week that it's time to get QA access to the databases again. I think this was taken away as part of an over-enthusiastic rollout of Sarbanes-Oxley - we can't possibly see any confidential data if we don't have access to the DB, right? But every professional QA shop I've worked at has expected the test team to be doing verifications in the DB, and the fact that we don't do that here has long seemed to me a hole in our processes. Apparently we had it at some point before I showed up; it's time to get it back. I'll bring this up at the Agile implementation review meeting on Friday and see if I can get traction. The developers want to be able to control what's on their boxes so they can do their work better - it's time for us to be able to do the same, at least for the data. Just read access would be sufficient! (I wonder if there's an SQL licensing problem associated with this? If there wasn't before, there might be now ...)
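By "verifications in the DB" I mean checks like the sketch below - written here in Python with pyodbc purely for illustration, with an invented connection string, table, and column names. The point is that a read-only login is all it would take:

    import pyodbc  # assumes a SQL Server ODBC driver is installed

    # Hypothetical read-only connection; a login in the db_datareader
    # role would cover everything QA needs for checks like this.
    CONN_STR = (
        "DRIVER={SQL Server};SERVER=int-db;DATABASE=AppDB;"
        "UID=qa_reader;PWD=********"
    )

    def order_was_persisted(order_id):
        """Verify that the order the UI claims to have saved really hit the DB."""
        with pyodbc.connect(CONN_STR) as conn:
            row = conn.cursor().execute(
                "SELECT status FROM Orders WHERE OrderId = ?", order_id
            ).fetchone()
        return row is not None and row.status == "Confirmed"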

Wednesday 7 January 2009

Building morale one spoon of sugar at a time

Some developers just came over to my desk to follow up on some defects I'd sent them. For the first, it turned out I'd read an ambiguous requirement differently than they had (though when we discussed it, I realized their interpretation was correct), and we were able to sort things out and pass the bug. For the second, the developer was basically trying to say, "It wasn't me!" - and I was able to say, right away, that I was sure it wasn't, and that there were just some deployment issues to get fixed (my suspicion being that either some code wasn't merged correctly or something had been overwritten at some point).

And then I gave them both chocolates from the huge pile I have next to my desk (Ghirardelli, marzipan, and Hotel Chocolat), they teased me about my tea collection, and we had a nice little chat for a minute or two.

It was nice to have them come to me - it does seem a much more Agile way to handle things - though with five more items awaiting my tender mercies, I'd rather not be running back and forth for everything, but rather just cranking through this big pile as fast as possible.

WHEW. Time for chocolate for me, I think. I'll pull from the secret stash of Vosges Haut Chocolat, because I deserve it, and stale chocolate is a crime.

Filling in the holes in the Agile process

Today one of the testers has been asked to spend hours and hours updating their test cases for a project that's going out at the end of the month so that they can be used as the documentation for training the team that's going to use this product.

This bothers me for a few reasons. First, I don't think test cases should be used for training. We write them quickly and with the assumption of considerable area expertise; using them as training material requires considerable rework (as in what's happening now) and is likely to lead to holes in training coverage, given the differing needs of QA people and end users. Second, we actually need this person for other, higher-priority work - but as the team doing this project "owns" his time, I can't take him off of it. I mean, maybe I could, but it doesn't seem like the right fight for today.

Third, this points out to me a potential hole in Agile. If we're producing pretty much no documentation at all, how is this kind of need supposed to be handled? For this particular project, what I see is that there wasn't proper business analyst support put on the task; but going forward, I don't see this being done at all. QA won't really have much of anything written down, and certainly not enough to form the basis for a training program. Whose responsibility is it to get this done going forward?

Finally, in a meeting yesterday to deal with the scheduling problems caused by all of the last waterfall projects trying to make it out the door at the same time, I was dressed down for being negative "here at the start of the year, and really bringing down the attitude as we move into our new Agile workplace." It's really horrible for a QA person to be told not to be negative lest morale be affected; it's our job to point out when things aren't going right, and I'd say calling attention to a project being understaffed at release time is a pretty valid thing to do - exactly what my job requires of me so that our product development can continue without snags. To me, "don't be negative" is pretty much "keep your mouth shut," which means I lose my effectiveness as a QA person and as a team leader. We haven't made the transition to Agile yet and the QA team is being seriously squeezed in the meantime; isn't it right to try to get this work done, and is it any surprise that the team at the end of the process is the one under pressure? I guess from development land it all looks like the Land of Oz, while we're still at the Wicked Witch's castle trying to get out - but pointing out the gaping chasm we're about to step into "is bad for morale."

Sigh. I hate feeling unappreciated.

Tuesday 6 January 2009

Breaking code with a feather touch, and: too many meetings

So under the new team structure I'm supposed to be doing loads more manual testing (as opposed to team management, which is what I really enjoy and why I'm a "lead" and not a "senior technical tester" or something like that) ...

and I have a huge load of work to get through this week, but haven't been able to touch it until 1) the environment was upgraded with the new code and a fix for the financial servers, and 2) I took care of all of that management stuff, which hasn't really gone away ...

and the upgrade changed a financial transaction thing from "using the live servers instead of the test servers" (which is really seriously wrong) to "this transaction isn't allowed" (which means I can't even start testing) ...

and then the first defect I tried to test that wasn't about transactions also failed. So not only was I unable to get to "real" work until 3 PM (3 PM!), but what I did work on pretty much crumbled when I touched it.

On the other hand, you know, if everyone coded stuff correctly, I'd be out of a job. I might as well complain that it's cold outside (which it is). Instead I'll complain about having another meeting - to find enough warm bodies to get through the work backlog by our arbitrary "start people working Agile and stop supporting the old work flow" deadline - that will keep me from making more code go 'splodey, because at least when I fail defects early enough, there's a chance they'll get fixed in time for the release.

First scrum - though not with new team

Today I had my first official Scrum meeting - or so it was billed. All things considered, it was a status meeting for the little set of bugs we're supposed to get out the door before I can join my actual Agile team. The transition continues to be difficult: one developer, when asked if he was working on an item that was supposed to be in the release, said, well, he wasn't, as no one had told him he should be. It's frustrating that he doesn't know whether he's supposed to be doing work, and it's also frustrating that I have some work I don't know whether I'm supposed to be working on. The communication issues between the various groups are holding us back right now. At our level, we're not sure which area we're supposed to be working on - our new projects or the old ones. At the same time, there are clearly business owners elsewhere in the business who expect that work is being done for them ... but the information is not being shared.

Meanwhile, the rest of the projects trying to get out the door before the Great Agile Wall falls are scrambling for resources - but people are converting to the Agile teams so fast I can't really commit them to the other projects. I'm sure this won't all be a big car crash, but I'm frustrated that we can't coordinate things as smoothly as we normally would. Can things please be simpler once we're all on the new methodology?

Monday 5 January 2009

Community of Practice meeting or bitch session?

Today we had our first post-transition "community of practice" meeting. Because my functional area team (QA) has been divvied up among several product teams, we're in dire need of meetings that let us share knowledge about our specialization. Even more importantly, we need to share information about how we're implementing Agile across the different teams. One guy said this was so we could standardize our practice; in fact, I'm not too concerned about having us all do the same thing right now - this is the time for "let[ting] a hundred flowers bloom/And a hundred schools of thought contend." Our practices can grow organically for the present and be customized to the needs of each team, but it is really vital that we let each other know about anything we've discovered that really works - and anything that just doesn't (or something we forgot about).

Currently the question is: what is the role of automation in our new QA frontier? Our current automation framework only allows automation to be created after the code is complete and the GUI done; this means it's not really suitable for the kind of QA practice described by Lisa Crispin, in which QA spends most of its time creating automated tests in anticipation of Dev dropping the code to them. We're also inhibited by the fact that we're mostly manual testers.

The guy who's the QA rep on the first team out of the chute, though, is trained as a developer, and he can do a lot more. His team is looking at doing their "unit test" automation in Selenium, with the QA role set up as "running automation and doing some manual tests." However, it's my belief the developers will be happy with the tiny bit of testing their unit tests provide and won't attempt the kind of deep testing we would normally do, with lots of negative-case scenarios and DB checks. And with only one day set aside for "testing merged code" before live, my fear is we won't have nearly enough time for deep testing before we're told to stop, and that the code will be far too unstable for us to ever get a solid picture of its condition. Our code base is both brittle and unwieldy, and I don't think it can really handle lots of rapid changes. Refactoring, however, does not appear to be in the cards.
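For contrast, this is roughly what a Selenium check looks like - sketched with the Python bindings against an invented login page. Every line depends on the rendered UI (element ids, page text), which is why I can't see it standing in for unit tests:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # A minimal UI-level check against a hypothetical login page. Rename an
    # element id or move a field and this breaks, even if the logic is fine.
    driver = webdriver.Firefox()
    try:
        driver.get("http://int-server.example.com/login")
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source, "login did not reach welcome page"
    finally:
        driver.quit()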

A final problem with our automation is that the platform we're currently using isn't set up to run against seven modestly different code bases all living in the same environment. Conceptually, we're supposed to be running this stuff over and over to make sure each deployment isn't breaking things, but we've only got it set up to run against one instance per environment. Looking at this issue, I see we've got two people assigned at 50% each to doing nothing but making our automation work; I expect the next month is going to be chock-full of challenges for them.
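Conceptually, the fix is to make the instance under test a parameter of the run rather than an assumption baked into the framework - something like this sketch (the team names, URLs, and /health endpoint are all invented):

    import requests

    # One deployed instance per team branch, all in the same environment.
    TEAM_INSTANCES = {
        "team1": "http://int-server.example.com/team1",
        "team2": "http://int-server.example.com/team2",
        # ... one entry per team branch deployed into the environment
    }

    def smoke_check(base_url):
        """Run a representative check against a single deployed instance."""
        resp = requests.get(base_url + "/health", timeout=10)
        return resp.status_code == 200

    if __name__ == "__main__":
        for team, url in TEAM_INSTANCES.items():
            print(team, "OK" if smoke_check(url) else "FAIL")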

Damming/damning the waterfall

The conversion is hitting its first snag - the backlogged projects are clogging up resources and making it impossible for us to "get finished and move on." My group (QA) is the one most committed to getting work finished up so it can make it out the door - and with everyone split into their new teams, I'm suffering from not being able to pick up spare resource (usually freed when a project gets held up for one reason or another) and drop it into another project to help finish it.

It's looking like the projects we were "committed" to finishing before the transition to Agile are going to hold up that transition, in part because over the month of December none of them managed to get deployed slash handed off for testing. (Rumor: QA is the bottleneck. Truth: just because we haven't tested it doesn't mean we're being slow or holding things up!)

The other option is to somehow shoehorn them into Agile, but that would guarantee they're delivered far later than promised (always our problem - there should never be these promises!) and that the people who've been working on them aren't available to keep doing so (as they're on different teams), which will lead to a big flustercluck of handoffs and Ye Olde Learning Curve. Bleah. Time to hit the box of Vosges Haut Chocolat.

Starting the Agile Testing Adventure

Today's the day - we've moved the desks, and the first of the seven teams is supposed to start doing things "Agile" today.

However, we don't really seem to be ready. The plan was that this was going to be the zero week for that team, a ramp-up week, but what with the end of the Christmas holidays and a huge desk move (and unpacking, and piles of email), I'd be really shocked if they'd had a stand-up or if their scrum master really had a plan of attack.

My team is a late starter - we're rolling things out team by team, week by week, and we're not supposed to start going until almost the end of January. Typically, the week we're starting is a week I'm scheduled to be on holiday - at least for the first two days.

Until then, I and most of my former teammates are still operating under the old development methodology, which means there's a pile of work to do and I need to get on it.

Welcome to the New Year!