Wednesday 25 February 2009

It's all gone very Lord of the Flies

We're at the end of our second month of the Agile conversion at work and things are ... rough. Many of the senior non-development people here are privately talking about leaving. Outside of coding, the work to be done seems excessively trivial. Too many people feel the skills and specialisms they've honed are no longer being used, and that they're instead frittering away their time on projects that are small in mental scope as well as physical scope (if that makes sense). The consensus is that if the economy were better we'd have already seen people leave; as it is they're hanging on, though the resumes are already going out. Even the person who sits next to me - consistently chipper, always "let's look on the bright side of things" - says her day-to-day work life is rotten.

Meanwhile, the atmosphere of "everyone can do any task" has led to an environment I liken to Lord of the Flies - the savages have painted their cheeks and declared themselves to be Product Managers (or UI developers, or Business Analysts) because they went to training where they were Told They Could and no one is stepping in to stop them. So in Sprint Planning meetings I have to listen to people spend two hours arguing over what "done" is because they are all now Agile experts, and in another team all seven people are having to work out what the requirements are for a project because They Are Empowered Now and no one bothered to do the research beforehand (not even enough to do an estimate), because planning what you're doing before a sprint starts is Crypto-Waterfall Apologism, don't you know. Too many people are running out of work to do long before their sprints are over (again with the lack of planning), but we're stuck with overly long sprints created by executive fiat and fixed resources "because in training they said sharing people is a risk."

I am really hoping that a lot of these things will get ironed out in the next month, but being stuck in the middle of it makes it very difficult to have a good attitude. I do feel that for a lot of senior specialists (like myself), Agile is not really offering the kind of career growth and challenges that I want, and I fear that between the abysmal morale and the change in what's required of us, we will see substantial staff turnover.

Lesson here: two days of training is not enough (everyone got the same training, from the developers to the scrum masters to the UI developers and QA), and a lot more planning should have been done before we started this process, so that we could have hit the ground running and got a lot more out of our first sprints rather than burning so many people out at the very beginning. I would especially have wanted something really intensive for the scrum masters, so they knew how to get the most out of their teams from the start, and so they could have percolated expectations (and work) back down to their groups - then everyone would have known what to do from day one instead of flailing around like we have been.

Monday 23 February 2009

How hard do people work in an Agile environment?

I have been thinking about some of the basic premises of Agile lately. One of them seems to be that people will work hard if they are empowered to do so, and that under an Agile aegis you don't need anyone to monitor people's work or boss them around - people will pick up work on their own.

However, this assumption runs completely against another common belief about work: that people are lazy and will do as little as possible to get by.

Now that I'm smack in the middle of it, the question is: which is true?

Well, oddly, what I'm seeing is a mixture. For the developers, the scuttlebutt is that they feel they're now working harder than they ever have before! There's great pressure to show at the daily standup that they've accomplished something and can "move a card" from the In Progress to the Ready to Review column, or from Review to Done. I'm not sure how this is different from before. Perhaps it's the small scale of the tasks they need to do that makes this possible - or maybe it's because they're actually being monitored on a daily basis! (Which would support the lazy theory, I think.)

But for the people whose roles have been poorly defined - or who are on teams where the work itself has been poorly defined or incorrectly scoped (and finished halfway through our month-long sprints) - I've instead been seeing a kind of lethargy and unwillingness to find work. I think to some extent this reflects the morale problems at this point in the game, but I also think we're seeing a complete lack of clarity about how things are supposed to be done right now. We're not all jumping into each other's roles, because we can't - we're not cross-trained appropriately; we don't know how we're supposed to do the planning work we probably should be doing; and because that work wasn't done beforehand, the people who normally execute the planned work are adrift. Many of us are way out of our comfort zones, and we're not all turning into radical go-getters just because there's a vacuum of power and control and we're "supposed" to do everything now. I see the scrum masters not providing guidance, and I see the other team members idling their hours away waiting for something to happen. Is the business owner's highest-priority work happening? I think not.

I wonder sometimes if Agile was created out of a certain kind of American optimism about the workplace. Me, I have been seeing a lot of struggling people - but I'm amused to see that the QA team, which was consistently worked into the ground before the transition, is actually getting more time to breathe these days, whereas the developers, around whom this process seems to revolve, are the ones struggling under the painful searchlight of constant attention.

Wednesday 4 February 2009

QA "practice" evolution - as we approach the end of the first sprint

The first team is approaching the end of its first sprint - their release is scheduled for next week - and the QA team is polishing up what it's doing as we continue our field trials and report back to each other on what's worked and what hasn't. Of the three teams "in the field" right now, two are keeping lists of their verification tests on a shared drive, while the third has been writing their verification tests on the backs of their index cards. Guess who lost all of their data for one of their work items, including the business requirements? Yep, the team that only had a hard copy. We'll now be keeping the verification tests on the shared drive for all groups. This also ensures that if someone has to step in and do the work, there is a record of what has been done. With this week's snow keeping most of the company at home for a day or two, that kind of record would have been useful; with my being asked to take over a project that hadn't even bothered writing things up on cards (and yet called itself "Agile," presumably in the Dilbert sense, as there was zero documentation produced at all), it would have been invaluable. A record must be kept somewhere.
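
Since the whole point of the shared drive is that anyone can step in, we're keeping the record dead simple: one text file per work item. Roughly this skeleton (the field names here are just my own suggestion, not any official template):

    Work item:    [card ID and short title]
    Requirement:  [the business requirement as written on the card]
    Verification tests:
      1) [steps to perform] -> [expected result] - pass/fail, date, tester's initials
      2) ...
    Bugs found:   [IDs or short notes]
    Status:       [in progress / ready for review / done]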

This also brings up the question of how to handle losing people mid-sprint. Conceptually, anyone could take over the work, but our teams aren't cross-functional enough for that to happen. Another option I've heard of is cancelling the entire sprint; I don't think we'd want to do that. Right now we don't have any solution for this problem at all. I'll report back as we try to fix it.

As we get close to the release, we need to deal with the new process for getting builds out the door. Before, we'd have a test plan for the whole project and a set of test cases flagged to be executed in the pre-staging/integration environment; now we aren't writing test plans at all, and we're not using the old list of test cases with its "test in Int" flag. However, the Operations team requires a test plan as a condition of getting into Int! (That's in addition to being given a list of the changes to automation - sensible, I think - and confirmation that "the code has been passed by the team in the Dev environment," which strikes me as a bit vague.) They didn't specify what they wanted in the plan, though, and said it was up to us to figure it out.

After some chatting, this is what we decided a test plan should contain (there's a sketch of the resulting one-pager after the list):

1) a list of the acceptance tests, built from the list created for testing in the dev environment;

2) formal write-ups of all bugs found in the Integration environment - something we've gotten away from with Agile testing, but which we all agreed matters in an environment where multiple projects share the same code and need to know what's going wrong in case of overlaps (and so the Ops team is aware of any quality issues);

3) who is doing the testing and how long we think it will take;

4) the risk level the project poses to the overall functionality of our site;

5) a definition of what counts as a "pass" - even though we only have X number of days to do our testing (now strictly limited under the new release process), it's vital that we be able to say if we think it's not, in fact, ready to be released. This was a standard part of the old release process and it seems wise not to let it slip by the wayside.
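
Taken together, that collapses into a one-page document, roughly like this (my own sketch of what we agreed, not a formal Ops template):

    Test plan - [project / sprint]
    1) Acceptance tests:  [list, carried over from the dev-environment test list]
    2) Bug reporting:     all Int bugs written up formally in the bug tracker
    3) Who and how long:  [testers], estimated [N] days within the fixed Int window
    4) Risk to the site:  [low / medium / high, with a sentence of justification]
    5) Pass statement:    [explicit "ready" or "not ready" call against the agreed criteria]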

All of this brought up some interesting issues with our release process. Although the development team now controls releases into their environment, it doesn't appear they've actually been refreshing the various environments with the code that's been released to live. That means when the first team goes into the Integration environment on Monday, it will be testing for the first time against the code from three releases that have happened since their servers were originally built. As I see it, "integration" should mean integration with other NEW projects created simultaneously, NOT with pre-existing code. It's been a problem since I've been here that we don't do this at all until just before a release, and it's the cause of most of our major pre-release headaches. I want to see ALL of the Agile development environments updated the day after every release so that we can end this schedule-blowing fault once and for all - but three days out from a release, we decided we probably couldn't get much done, especially as the most critical of the three releases is only happening on Saturday. So this is something I'm going to push at the next all-teams meeting, as a change to our overall process: part of the acceptance criteria should be that the project has been merged and tested against the LATEST release BEFORE going into the integration environment.

Finally, we looked at the question of how to update our automation. We've still got some developers creating automation in Selenium, which won't run in our Integration environment and isn't integrated into our main set of automation (all done with WebInject). While this is its own problem, the real issues are that 1) we need to provide a list of automated tests and 2) there may be conflicts between tests created in different branches. We decided to handle this by having the QA manager and his automation assistant review all new automation before it is added to the set of tests run against a release. The goal is to control the possibility of overlapping or conflicting tests, but a nice side effect is that it gives them a chance to winnow what's written down to an appropriately sized set. (With luck the review might also catch tests that have been invalidated by code changes in a given project, though that will most likely be revealed in the course of running the automation against that project in the development environments.)

Now what we need to do is get the developers trained up on WebInject AND actually get these tests written BEFORE we go live (much less into the integration environment), and then our automation will really be rocking along!
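
For the developers who haven't touched WebInject yet, the learning curve should at least be shallow: a test case is just an entry in an XML file. A minimal sketch (the URL and verification strings here are placeholders, obviously):

    <testcases repeat="1">
        <!-- each case is one HTTP request plus the checks run against its response -->
        <case
            id="1"
            description1="Homepage returns OK and shows the welcome banner"
            method="get"
            url="http://our-test-server/index.html"
            verifyresponsecode="200"
            verifypositive="Welcome"
        />
    </testcases>

The id and description fields double nicely as the "list of automated tests" Ops wants - one more reason to get everything moved over.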

An Agile/Scrum coach

I'm pleased to say that upon returning from my vacation last week, I found we'd made the decision to bring a Scrum coach into the company to help us with our transition to Agile. Sadly, this should have happened before we made the transition, but I get the feeling management thought our two days of training was going to be adequate. It's become painfully clear that we're not really "there" with what we need to be doing to make this a success - too many people weren't trained in what they needed to do before we started doing it, and even though we've been (desperately) sharing with each other to improve our practice as quickly as possible, things are going wrong and we don't really have the skills to fix them on our own. It's not quite Lord of the Flies, but there is far too much anarchy and uncertainty, and to me it seems obvious some train wrecks are on the way. (Hopefully these won't involve too many pointy sticks and tribal rituals, but there's no telling right now.)

Of the two interviews I took part in, I was most impressed by the second candidate, a youngish woman I'll call L. While both candidates understood the need for flexibility across a variety of areas, L seemed to have a clear vision of exactly what she'd do to get us up to speed as fast as possible, including running classes across a variety of work functions and doing personal coaching (and attending meetings such as sprint planning sessions and retrospectives). I also really liked her answer to my question about what business owners should do in preparation for a sprint - a real sticking point for me because, thanks to the way we were trained, people are convinced you're supposed to do NO work for a sprint before the sprint starts, except in the case of a sprint zero. There IS work to do before a sprint starts, but the business owners were never coached in what they needed to provide prior to their sprint planning session, and we're struggling because of it. Anyway, in addition to the ongoing sprint backlog maintenance, she said we need to be

1) getting the user stories to the right size (no point in having them all be half-a-year-sized - by "right" I think she means they can be broken down into work items by the team and are close to one sprint in size, though perhaps only the first is true)

2) doing work with the product owners (not sure exactly what kind, probably in terms of defining the user stories and rechecking the prioritization of the backlog)

3) making sure the acceptance criteria for the user stories are testable, and getting the product owner to agree to the defined acceptance criteria.

I'm sure there's actually a lot more work that ought to be done in sprint preparation, but I've had very little experience in this area - anyone want to add to the list?