Tuesday, February 23, 2010

Startups need a senior "voice of testing"

The lead visual designer of Google quit recently. He made an interesting point:

"When I joined Google as its first visual designer, the company was already seven years old. Seven years is a long time to run a company without a classically trained designer. Google had plenty of designers on staff then, but most of them had backgrounds in CS or HCI. And none of them were in high-up, respected leadership positions. Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions." "When I joined, I thought there was potential to help the company change course in its design direction. But I learned that Google had set its course long before I arrived."

This logic is sound, and it carries over readily to my current field of work -- software testing.

The mistake I see startups (in India) make is to begin work without *any* voice of testing at all, i.e., no one is entrusted with the task of understanding, and being the custodian of, the "principles and elements" of testing.

Somewhere along the way the startup realizes there are "quality issues" and hires a junior tester, usually a "fresher", who helps discover the most obvious, glaring-in-your-face errors. The founders are pleased, and as time goes by, "augment" the test team with more resources of the same profile.

Unfortunately, although the most obvious bugs are found (fixed, and then *usually* manually re-tested), this model begins to break down after a while for a number of reasons (in random order):

  1. It is not scalable (manual testing never is, in a startup not dedicated to providing testing "services"). Bugs are found, fixed, manually verified, and then creep back in every so often.
  2. No one mentors the testers to look beyond finding the most obvious bugs and broaden their horizons to the varied kinds of testing that a robust application needs.
  3. Junior testers are usually too inexperienced to help clarify product workflows and find long-chain faults/failures. This leads to seemingly random errors being reported by customers which are very hard to replicate in-house.
  4. Often junior testers are unable to voice concerns about the extent of testing required and get pressured into doing a hurried, one-hour-before-release smoke test. Needless to say this leads to problems in the field.
  5. Specialized testing such as performance testing and security testing (to name just two) gets completely left out; only "functional" testing gets done with this model.
  6. The "negative" aspects of testing get ignored. Testing gets reduced to a validation activity -- verify that the system works as stated, rather than look for ways to break it.
  7. It is hard for junior testers to voice their opinion on usability -- they are either too raw to spot usability issues or get shot down quickly owing to their position in the pecking order. When the voice of usability is missing, software starts to calcify in ways that disallow usability from being injected later.
  8. Building testable systems requires time, skill, effort and, most importantly, a champion. Since there is no custodian, testability -- building the system so that it can be tested with ease -- gets completely ignored.
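
Here is what the "negative" half of item 6 looks like in practice -- a minimal sketch using Python's standard unittest module. The `parse_age` function is made up purely for illustration; the point is the contrast between the single validation test and the tests that actively try to break the code:

```python
import unittest

def parse_age(text):
    """Hypothetical function under test: parse a user-supplied age field."""
    value = int(text)          # raises ValueError on non-numeric input
    if not 0 <= value <= 150:  # reject implausible ages
        raise ValueError("age out of range: %d" % value)
    return value

class AgeFieldTests(unittest.TestCase):
    # Validation-style test: confirm the stated behaviour works.
    def test_valid_age_is_parsed(self):
        self.assertEqual(parse_age("42"), 42)

    # "Negative" tests: deliberately feed the function hostile input.
    def test_rejects_non_numeric_input(self):
        self.assertRaises(ValueError, parse_age, "forty-two")

    def test_rejects_out_of_range_values(self):
        self.assertRaises(ValueError, parse_age, "-1")
        self.assertRaises(ValueError, parse_age, "9999")

if __name__ == "__main__":
    unittest.main()
```

A validation-only team writes the first test and stops; a tester who has been mentored to break things writes the other two.
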
After some time, the startup realizes that "something" is missing and hires a "senior" "quality" person. This is usually the second mistake -- getting a QA Manager who is a process/quality/ISO/CMMI Manager.

The QA Manager comes in, looks around, and declares that the problems exist due to "lack of process". He (I will NOT use "she" here :-P) starts to point out the benefits of "say-what-you-do" and "do-what-you-say-you-do", and begins to put in place a process/quality/metrics regime, the bedrock of which is usually the infamous timesheet. Testers are made to report to this person and are "scientifically" appraised using metrics such as "number of valid bugs raised". You can see where *that* will lead, can't you?

The testers *still* don't get mentored (except perhaps in the fine art of corporate politicking), and performance testing, security testing, etc. *still* don't happen. We didn't say we would do those and we didn't -- 100% process compliance ;-).

Bleak. So, can something be done? Here's what I have seen work.

Get a software professional who has experience "developing", "defining" and "managing" software projects, and get her to don the Testing Hat. Right from the beginning.

Wearing her Testing Hat, she will quickly figure out that manual testing is "a crime against humanity" (Martin Fowler). The minute a testing team begins to regard test automation as non-negotiable, doors open for performance, security and other kinds of testing that *cannot* be done manually. This person will also figure out that paid software testing tools do not offer significant value-for-money, and introduce open-source tools within the testing organization (thus saving you a lot in license costs ;-)).
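
To make "cannot be done manually" concrete, here is a deliberately crude latency probe, sketched with nothing but the Python standard library. The BASE_URL, the fifty samples and the one-second budget are all placeholders for whatever your application and service levels actually are:

```python
import time
from urllib.request import urlopen

BASE_URL = "http://localhost:8000"  # placeholder: your application under test

def measure_latency(path, samples=50):
    """Fire repeated requests and collect timings -- impractical by hand."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urlopen(BASE_URL + path, timeout=10).read()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return {
        "median": timings[len(timings) // 2],
        "p95": timings[int(len(timings) * 0.95)],
    }

if __name__ == "__main__":
    stats = measure_latency("/")
    print("median %.3fs, p95 %.3fs" % (stats["median"], stats["p95"]))
    # Crude regression gate: fail loudly if the 95th percentile blows the budget.
    assert stats["p95"] < 1.0, "p95 latency regression"
```

No human with a stopwatch produces fifty timed samples and a percentile; the moment the team automates, checks like this become routine.
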

The experience that this person brings, having built and used software in the past, will be handy when making design decisions that impact usability.

THEN hire the fresh graduates and have her mentor them. The way such a person mentors is by initially working WITH them. So she *has* to be able to do EVERYTHING from writing test scenarios to manually testing to creating test automation. Oh, and manage the team -- everything from work allocation to the perception issues that cast testers as lesser mortals.

Over time, the team will get savvy enough to put in place automated builds, deploys, regression runs and all the other good stuff that help prop up error-free software (a small illustration follows).
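
As one tiny illustration of that "good stuff", here is a minimal deploy gate sketched in Python. It assumes the team keeps its regression suite in a tests/ directory discoverable by unittest; everything else is a placeholder:

```python
import subprocess
import sys

def run_gate():
    """Refuse to deploy unless the automated regression suite passes."""
    # Run the whole suite via unittest's built-in test discovery.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
    )
    if result.returncode != 0:
        sys.exit("regression suite failed -- aborting deploy")
    print("all tests green -- safe to deploy")
    # ...invoke the actual deploy step here...

if __name__ == "__main__":
    run_gate()
```

Wire something like this into the build, and "it worked when we manually checked it last week" stops being the release criterion.
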

Such people *do* exist. Trust me :-) :-).

1 comment:

  1. A reader commented (on chat):
    >>
    "Get a software professional who has worked in the field of software "developing", "defining" and "managing" software projects and get her to don the Testing Hat."

    and why would u think a person who has developed, managed, defined projects would want to do testing? thats not a forward step in career path progression
    <<

    IMO this is *precisely* the mindset -- of viewing testing as a lowly uninteresting task to be grudgingly performed by lesser mortals -- that leads to the kind of chaos I have highlighted above.
