Bob Balaban, May 13 2008 07:13:39 AM

Greetings, Geeks!
I'm hoping that many of you out there are fans (or recovering fans) of The Simpsons, in which case you'll have no trouble recognizing the names in this week's post as the characters in the always-fighting cat-and-mouse TV show beloved by all the children of fictional Springfield. If you're closer to my age (Boomer) than to Gen-X, and don't have any kids you watch TV with, the equivalent characters of our time were probably Tom and Jerry (the cat-and-mouse cartoon, NOT Ben and Jerry the ice cream makers!)
So, the thing I want to get at today is: why can't we all just get along? Nah, that's not it. Let me try again: why is it so darn hard to release quality software? Nope, that's not really it either (but it's a good question). OK, I think I have it now: if you were working at a relatively small software-making company (let's say, between 20 and 200 employees, where by "software maker" I mean a company that creates and sells software as its primary business), and you had to figure out how to organize the people doing the actual software creation and (we hope) testing, how would you set that up?
Of course we need to constrain the question a little more, so that it's answerable in our lifetimes. Here are the (arbitrarily ranked) goals I would want to maximize in such a situation:
Minimize the number of defects (bugs, glitches, doc errors, user errors, whatever) reported by customers (defined as the people who pay you for your software). Notice that I lump "user errors" in with the more common kinds of problems. I do that because, IMHO, a "user error" (the user did something "wrong", resulting in a bad outcome, but the software is "working as designed") is rather more likely to result from a problem with the User Interface (UI), or perhaps with the documentation, leading the user to expect a result other than the one s/he got. We could argue about whether these are "real" bugs, or "equal" in some way to "real" software problems, but I don't care. By my definition, this kind of error is a quality problem in the product.
Maximize the amount of automated testing a product can undergo before release. I put this item here because experience (and arithmetic) has shown me that release cycles are dramatically shortened when you apply automated testing to a product. I won't belabor the point, and yes, it's true that you generally can't achieve 100% automation (at reasonable cost), but test automation is a good thing.
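To make that concrete, here is a minimal sketch of the kind of check I mean: a test that runs unattended on every build, with no human in the loop. (Python, purely illustrative; the `parse_price` function and its behavior are hypothetical, not from any real product.)

```python
import unittest

def parse_price(text):
    """Hypothetical product function: turn a user-entered price
    string like "$1,234.50" into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

class ParsePriceTest(unittest.TestCase):
    # Each test runs automatically on every build -- no human required.
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_dollar_sign_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    def test_garbage_input_raises(self):
        # A "user error" case: bad input should fail loudly, not silently.
        with self.assertRaises(ValueError):
            parse_price("not a price")
```

Run it with `python -m unittest` as part of the build, and the arithmetic takes care of itself: these checks cost seconds per build instead of hours per manual test pass.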
Keep as many employees happy in their jobs as possible. Yeah, yeah, I know: there are a LOT of factors that affect employee satisfaction, but here's one thing I have noticed myself, at more than one company where I have worked: implementing caste systems leads to unhappiness among employees. By "caste system" I mean creating sub-divisions within the broader development organization, where one group of people (let's call them the "Lords") is responsible for the fun, creative, and more highly paid work of writing (inventing, architecting, designing, coding...) software, and another group (the "Serfs") is responsible for taking the work product of the Lords and testing it (looking for defects, broadly defined, which might also include things like unacceptable performance, poor UI, etc.). Most companies where I have been an employee implement (whether blindly or on purpose) this kind of two-tiered system. The Lords (developers) get more money and more status than the Serfs (QA/QE*, pick your terminology). Some people feel this system is "natural": after all, the developer's job is harder and more creative, while the "QE" role is to receive and test, a rather more "mundane", yet unfortunately necessary, step in the release cycle. Anyhow, the point here is: your job (as the hypothetical keeper/maker of the org chart in this example) is to try not to have a caste system.
Minimize the cost of producing quality software. Since we're talking about the software business, there has to be a business constraint on all of this. Headcount is expensive. Back in the days when Lotus was mostly a spreadsheet company, the normal ratio of QA to developer headcount on any major project was around 3:1: for every developer creating code, there were three people testing it. Today that would be inconceivable.
So? Now what? We can probably all agree that there's a difficult problem here. What's the answer?
Speaking for myself, I don't have an "answer". There probably is no one single "answer". What I have, though, is a hypothesis. Which is:
If you view the problem of "software product quality" from a broad perspective (as I described it above), then your approach to building the software cannot treat "QA" (or "QE", or whatever you call it) as a distinct operation from "development". **
One of the things this implies for software companies is that "Development" should probably not be a separate organization or department from "Testing" (QA, etc.). I would grant that within a "Software Development" organization (say, a team dedicated to creating and evolving a single product or suite of products), there is indeed reason to declare and nurture distinct specializations of skill (i.e., not all architects/designers/coders need to be expert in testing, and not all testers need to be expert in software design/implementation), but really, isn't there an awful lot of overlap? Don't testers benefit from understanding the product's construction? Don't developers benefit from an understanding of the testing process, so that they can (as one possible example) instrument their code appropriately? For sure, once you get to thinking about test automation, someone has to build the automation harnesses, script the test runs, etc. Right? Is that not "development"?
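As a toy illustration of that overlap (the names and structure here are mine, not any particular product's), here is the skeleton of a tiny automation harness: the "scripted test runs" are just data, and the runner that executes them is ordinary development work, whichever side of the org chart its author sits on.

```python
# Toy test-automation harness: test cases are data ("scripts"),
# and the runner that executes them is ordinary development work.

def run_suite(cases):
    """Run each (name, func, args, expected) case; return (passed, failed)."""
    passed = failed = 0
    for name, func, args, expected in cases:
        try:
            ok = (func(*args) == expected)
        except Exception:
            ok = False  # an unexpected exception counts as a failure
        if ok:
            passed += 1
        else:
            failed += 1
            print(f"FAIL: {name}")
    return passed, failed

# Hypothetical function under test.
def word_count(s):
    return len(s.split())

# The "scripted test run": pure data, easy for anyone to extend.
suite = [
    ("empty string", word_count, ("",),      0),
    ("one word",     word_count, ("hello",), 1),
    ("three words",  word_count, ("a b c",), 3),
]

passed, failed = run_suite(suite)
print(f"{passed} passed, {failed} failed")
```

Writing `run_suite` is indisputably development; writing the rows of `suite` is indisputably testing; in practice the same people end up doing both.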
So, is there a benefit to organizing around a "QD" (Quality Development?) job function, and building a primarily "development" team with perhaps some internal specialization (but not necessarily a hard distinction) between creating software and making sure it works properly?
What do YOU think? How would YOU organize for quality, if you could?
* In fact, when I first started working at Lotus Development in the late '80s, the "testing" department was called "QA" (Quality Assurance). Around the time that we shipped Notes V4 (mid-'90s, before IBM bought Lotus), the Lotus QA organization working on Notes was merged with the Iris organization (developers of Notes), which had been composed almost entirely of developers. Somewhere in there, the term "QA" was explicitly switched to "QE", or "Quality Engineering". When I asked why, I was told that people felt it was a better name, because it more accurately reflected the engineering basis of the task, and that it might improve the level of respect given to the job title and to the people. To this day, the terminology (at the Lotus division of IBM, anyway) remains "QE", though I, personally, haven't detected any significant reduction in the caste differential (though of course these things are always in the eye of the beholder, YMMV).
**Again, the underlying assumption here is that you're trying to optimize for quality. Naturally, many, many companies (businesses) do not inherently optimize for quality; they optimize for profits, or maybe for market share, or maybe for personal career growth, or maybe for something else. I'm not primarily a business person (maybe if I were, I would still be running my own company!), so I can't say whether quality is the thing for which a business should globally optimize, and certainly, from an employee perspective, working on a "quality" product doesn't make me happy if I can't get paid for it. But it does seem to me that quality should be an important objective of product development, a "differentiator", if you like. :-) But that's another discussion.
(Need expert application development architecture/coding help? Contact me at: bbalaban, gmail.com)
This article ©Copyright 2009 by Looseleaf Software LLC, all rights reserved. You may link to this page, but may not copy without prior approval.