Software-Quality Discussion List
Digest # 021



      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.     
July 5, 1998                         Digest # 021




    ==== CONTINUING ====

    Subject: Zero-defect bonuses
    From: "Niall Hammond" 

    Subject: Zero-defect bonuses
    From: Jerry Weinberg 

    Subject: Re: Test of Quality (Software-Quality Digest # 020)
    From: "Richard Hendershot" 

    Subject: Re: Test of Quality (Software-Quality Digest # 020)
    From: Jerry Weinberg 

    Subject: re: Where is QA (in SQ # 020)
    From: "Richard Hendershot" 

    Subject: re: Where is QA (in SQ # 020)
    From: Jerry Weinberg 

    Subject: Re: Where's the Quality? (Part II)
    From: "Danny R. Faught" 

    ===== NEW POST(S) =====

    Subject: Just thinking
    From: "Phillip Senn" 

    Subject: FW: Windows '98 source code.
    From: John Cameron 

    Subject: Performance-Test
    From: Roland Petrasch 



  I am pleased to get Software Quality #021 out only five days after
  #020.  This is due mostly to the good and numerous posts, and to
  a working e-mail system.

  I have split up several posts, particularly Richard Hendershot's,
  so as to keep topics together.  My apologies, Richard, if you are
  offended by this.

  To make my job a bit easier, I would appreciate it if you could:

    1) Make multiple posts if you are responding to multiple items.
       (My thanks to Jerry Weinberg, who has done this faithfully since
       the beginning.)

    2) Give each post a meaningful title, other than the automatically
       generated "RE: Software-Quality # xxx".  I have put in made-up
       titles for Jerry's and Richard's posts to make the topics
       clearer.  (Actually, about 95% of the posts come with the
       "RE: Software-Quality # xxx" title, and I have been making up
       titles to make the directory clearer.)

  Question for the list:  Is making up titles and bunching the
      discussions on a topic helpful?  It does take time...

==== CONTINUING ====

++++ New Post, New Topic ++++

From: "Niall Hammond" 
Subject: Zero-defect bonuses

>> I see a big problem
>> with your proposal:  what do you do with code written and tested by
>> engineers who no longer work at your organization?

Sorry, I should have made it clear that only the developer and tester
responsible for the bug would miss out on the bonus while the bug was
being fixed and tested.  If either the developer or the tester had left
the company, their replacements would not be penalized: fixing or
testing the code would qualify them for the bonus, as it would in any
case where another developer did the work.

As a really radical, but I suspect very useful, extension of the idea
the same scheme would also apply to the management, as to my mind a
team/project leader in part gets paid extra to take responsibility for
the code produced.
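In code, the rule might look something like this.  This is only a
sketch of the scheme as described above -- the function name, the data
structure, and the use of a set are all invented for illustration:

```python
# Sketch of the proposed weekly bonus rule (all names invented).
# An engineer earns the bonus only in weeks when none of the code
# they signed off is being fixed or retested.  Replacements for
# departed staff are never "charged" with the original bug, so they
# still qualify while fixing someone else's code.

WEEKLY_BONUS = 250  # dollars, from the earlier example

def weekly_bonus(engineer, charged_this_week):
    """charged_this_week: the set of engineers whose signed-off code
    is being fixed or retested this week."""
    return 0 if engineer in charged_this_week else WEEKLY_BONUS

# Alice's code is in a fix cycle this week; Bob (her replacement,
# doing the fixing) still earns the bonus.
print(weekly_bonus("alice", {"alice"}))  # 0
print(weekly_bonus("bob", {"alice"}))    # 250
```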

++++ Moderator Comment ++++

  Although I am sympathetic to what you are trying to do, I still see
  several problems:

    1) I suspect people will optimize their payout, rather than work
       toward the goals you are trying to promote.  As an example, I
       don't think working on fixing old bugs (their own or not) would
       be attractive to engineers, since they couldn't be earning a
       bonus.

    2) Because you are changing compensation, you will have to get
       management buy-in at a high level.  (Not necessarily bad, but
       probably harder.)

    3) You and your team will be spending time interpreting rules,
       making up new ones, and adjudicating disputes.  At least, when
       sales people are paid with schemes like this, that's what
       happens.
  I think that this is a really interesting idea!

  The main difficulty is that you can't directly measure what you want
  -- maintainable, understandable, defect-free features implemented
  promptly -- so you have to measure some approximation.  Any
  approximation has a risk of maximizing what you measure, not what
  you want/need.  (Think of paying programmers by the line of code.)

++++ New Post, Same Topic ++++

From: Jerry Weinberg 
Subject: Zero-defect bonuses

From: "Niall Hammond" 
>So if a developer is paid $3,000 per month, they may receive an
>additional $250 per week for each complete week spent writing fresh
>code. The same for the test team (though with an extra zero
>perhaps). For weeks when the developers code is having a bug fixed
>and also weeks when that bug fix is being tested, no bonus would be
>paid to either the developer or the tester that signed the code off.
>This would reflect the fact that new functionality gave the corporation
>added value and should soon discourage the developer and tester letting
>questionable things slip by.

I think your heart is in the right place, Niall, but my experience says
this will backfire.

Developers and testers will, in the short run, let bug fixing slide
whenever possible, often playing the game of "hot potato" with each
other's bugs.

In the long run, they will simply leave when they're mired in their old
bugs and not getting bonuses.  (People quickly adapt to bonuses and
believe they are part of their due.)

Yes, this could be a good way of getting rid of the poor testers and
developers, but who then takes care of the crap they left behind - and
loses money because of it?


++++ Moderator Comment ++++

  Jerry's comments mirror exactly the behavior I've seen when trying
  to make salespeople's compensation more closely match a company's goals.
  It's very hard to get the effect you want because the measurements
  aren't usually directly of what you want, and are hard to make non-
  distortable.  If you spend *lots* of time getting the measurements
  precise enough, you run the risk of being perceived as bureaucratic
  by your engineers (and your customers.)  They may be right.

  This might be easier to try in a brand-new organization than in one
  with lots of existing non-defect-free code.

++++ New Post, New Topic ++++

From: "Richard Hendershot" 
Subject: Re: Test of Quality (Software-Quality Digest # 020)

>  Therefore I propose the following test for a quality system:
>  Either the system must be drop-dead reliable (failures measured in
>  the 1 per 100,000 hours of usage or better), OR, there must be useful
>  diagnostics for identifying common failure modes and reporting
>  progress.
>  (Note: I'm not suggesting the above as the *only* test for a quality
>  system, just one way to check for quality.)
>  Comments?  Do your systems have self-diagnostics?  Debugging aids
>  that are available to the end-user?  Why or why not?

re: Either/Or

    I think the problem with this concept is embodied in the question
of "What is a failure?"  Each problem has a varying amount of
significance.  If my document won't print, it's a failure.  But if I
can save the document, restart the program, reload the document and
print it, the workaround can get me by.  Some workarounds are so
quick and easy that I, personally, barely notice doing them!  The
user who would not think of such a workaround, though, is severely
impacted.  So, we can identify a user perspective and a
production perspective on the impact of a defect.  Additionally, it
would be nearly impossible to make a management decision to
include (or not include) the necessary diagnostic code in the
production build based on numerics which have not as yet been
identified; the defect count is still being made!

    For the user-perspective, without a very well-defined market of
sufficient technical expertise, diagnostics can be useless.  Messages
of a pedantic nature (and especially which must be continuously
dismissed manually!) can - to the technically competent - become a
serious irritant. In short, it's tough to cut a compromise which might
suit all concerned.

    My most relevant experience is an Aviation Electronics (Avionics)
device where the engineer in charge did, in planning, consider the need
for hardware diagnostics in the firmware.  QA/Tech was never consulted
as to which/what things would be useful so the project continued
without much of this through *all* the pains of InitialRelease.  On the
bench, however, the extent of those implemented proved to be woefully
inadequate.  A slow period some 6 months later allowed some work in
this area.  Metrics were never gathered to show the benefit of the new
code... but it was a *very* small company and it was, really, obvious
there was a significant impact in the Tech areas having the new
information.  The problem here is that diag routines/capabilities which
were suggested, and discarded for one reason or another, could never be
shown to have a measurable impact or cost or benefit since no metrics
were in effect in the first place.

    I would suggest that QA must be the "owner" of the diagnostics; Let
them gather such stubs, test routines and utilities which might be
produced in Development - add what they might create themselves - and
organize them into a hand-off state.  Those customers which might have a use
for such utilities (and these *are* easily identified) could be
provided with them.  In our case, we never gave away the access
key-sequence to the highest levels except where it seemed appropriate.
ISV's have a special problem; Ship the extra or not?  In your case, as
I gather it, internet distribution as a "patch" or some such would have
been little help ;)

    Like you, I bemoan the move to discontinue diagnostic distribution
with products.  When helpful, they are REALLY helpful. For most,


++++ Moderator Comment ++++

  First, I say that if the user has to do a work-around, it's a defect
  or failure.  It may not be catastrophic, but it's still a failure.
  It still takes user time and effort to get around the defect.

  Secondly, if diagnostics are designed into the product, I don't see
  why they should have any different ease-of-use issues than the rest
  of the product.

  Thirdly, I was suggesting that there be diagnostic code for situations
  which are likely to be encountered by the user:

   - Needed library not installed/wrong version.

   - Needed driver not installed.

   - Needed hardware not installed.

   - Needed hardware not operating/turned off.

   - Communications line down.

   - Communications line through-put too low.

  These are not internal failures, but rather external ones, and ones
  where the user is most in need of help.
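  As a sketch of what such external-failure diagnostics might look
  like -- the check functions, messages, and example data here are all
  invented for illustration, not taken from any real product:

```python
# Minimal sketch of start-up diagnostics for external failures:
# a missing/wrong-version library, and a communications line that
# is down.  All names and messages are invented.

import socket

def check_library(name, want_version, installed):
    """Diagnose a needed library that is missing or the wrong version."""
    have = installed.get(name)
    if have is None:
        return "Needed library %s is not installed." % name
    if have != want_version:
        return "Library %s is version %s; %s is required." % (name, have, want_version)
    return None  # no problem found

def check_connection(host, port, timeout=3.0):
    """Diagnose a communications line that is down."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return None
    except OSError:
        return "Cannot reach %s:%s -- the communications line may be down." % (host, port)

# Example run against a hypothetical installation:
installed = {"libfoo": "1.2"}
for problem in (check_library("libfoo", "1.3", installed),
                check_library("libbar", "2.0", installed)):
    if problem:
        print(problem)
```

  The point is that each check reports the external condition in the
  user's terms, instead of failing with an internal error later on.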

  I also think that internal diagnostics, which seems to be what you
  are talking about, should be included in a system.

  Fourth, I think Tech Support should be the "owner" of these
  diagnostics, while QA should own the internal ones.

++++ New Post, Same Topic ++++

From: Jerry Weinberg 
Subject: Re: Test of Quality (Software-Quality Digest # 020)

>  Therefore I propose the following test for a quality system:
>  Either the system must be drop-dead reliable (failures measured in
>  the 1 per 100,000 hours of usage or better), OR, there must be useful
>  diagnostics for identifying common failure modes and reporting
>  progress.
>  (Note: I'm not suggesting the above as the *only* test for a quality
>  system, just one way to check for quality.)
>  Comments?  Do your systems have self-diagnostics?  Debugging aids
>  that are available to the end-user?  Why or why not?

I think maybe this is getting down one level too low, in that it
suggests how to achieve what a user wants to achieve.  For instance:

For some users/failures, it's not the number of failures, but the total
time not solving the user's problem.  (e.g., you lose revenue every
minute the system is not up.  At a million dollars a minute, you can
invest a lot of money in millisecond recovery processes for frequent
failures.)

For others, each failure counts, no matter how long it takes to get
back. (e.g., you lose data with each failure.  At a million dollars a
failure, you can invest a lot of money in failure prevention, even,
say, at the cost of slowing the system.)

So, I think you have to start with the cost/value of each failure type
for each user type, and design your ways of
handling/preventing/recovering failures based on that.


++++ Moderator Comment ++++

  I agree that any diagnostic or failure recovery should be economically
  sound, and that you probably don't want to spend a million dollars
  adding recovery to a system that will only be used once.  (Unless
  it's the Mars lander...)
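  Jerry's cost/value point can be illustrated with a toy calculation.
  Every number below is invented purely for the example:

```python
# Toy illustration of choosing failure handling by expected cost,
# per user type.  All figures are invented.

def yearly_cost(failures, cost_per_failure, downtime_min, cost_per_min):
    """Expected yearly cost: per-failure losses plus downtime losses."""
    return failures * (cost_per_failure + downtime_min * cost_per_min)

# User A loses revenue by the minute: recovery speed dominates.
slow = yearly_cost(12, 0, 30, 1000000)      # 30-minute recoveries
fast = yearly_cost(12, 0, 0.001, 1000000)   # millisecond recoveries
print(slow, fast)  # fast recovery is thousands of times cheaper here

# User B loses data on each failure: prevention dominates, even if
# prevention slows the system down.
many = yearly_cost(12, 1000000, 0, 0)
few = yearly_cost(1, 1000000, 0, 0)
print(many, few)   # fewer failures wins regardless of recovery time
```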

++++ New Post, New Topic ++++

From: "Richard Hendershot" 
Subject: re: Where is QA (in SQ # 020)

>  What are other people's experiences?  If Jerry Weinberg is still
>  listening, you must have seen lots of companies -- do you see any
>  difference in effectiveness based upon the structure of the QA/Test
>  engineers?

    I've yet to experience a separate QA dept situation where there was
not some Us vs. Them contention.  I remain a firm believer that the
developmental process does not end until the end-user holds it in her
hands! Every employee should, IMO, attribute a portion of his workweek
to QA.

    Actually, it never does end, as user feedback can be the most
significant information provided to Support and NewProject decision
makers.

>  So far, we hadn't considered using compensation as a tool for defect-
>  free, other than rewarding the "good ones".  I see a big problem

    Is that really a problem?  If the signees are not assigned, for one
reason or another, to the project maintenance, then it's clearly (from
the assignee's perspective) New Functionality and should invoke such
a bonus.

    Would maintenance cycles be "put-off" though?

>  with your proposal:  what do you do with code written and tested by
>  engineers who no longer work at your organization?  The scheme you
>  propose will make it hard to find someone willing to fix problems in
>  someone else's code, since they would take a financial hit as well as
>  the normal unpleasantness.


++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: re: Where is QA (in SQ # 020)

>>>    Should there be a separate testing/quality department at all?
>>>   (Or should this function be part of the development team?)

Perhaps the question should be posed differently.  Then I would answer
this way:

There should be a separate testing/quality department.

The responsibility for quality should always be part of the development
team's charter.

In other words, the separate testing/quality department is a tool
available to the development team, just like many other specialist
teams.
It's up to the development team to use it well.  (It's not, as often
seems to be the case, the job of the testing department to coax the
developers into producing a quality product, and the job of the
developers to blame the testers for not testing quality into their
product.)


++++ Moderator Comment ++++

  What's the purpose/benefit of the separate testing/quality
  department?

  It seems like having a C department, or a SQL department to me...

++++ New Post, New Topic ++++

From: "Danny R. Faught" 
Subject: Re: Where's the Quality? (Part II)

Your story about trouble accessing your email remotely was not at all
surprising.  On the rare unfortunate occasion that I must use Windows
95, I often find that it doesn't work for any but the simplest tasks.
For example, on recent trips I was surprised to find that dial-up
networking seemed to be able to support most of my complex dialing
procedure - including dialing 8 for an outside line, and dialing using
a calling card.  But in reality I rarely got it to work properly.  Most
computer users are conditioned to justify it, saying "Well, I was
really pushing the computer beyond its limits".  As a tester, I say, if
you ship a feature, it should work.  No excuses.

++++ Moderator Comment ++++

  I agree 100%!!!!!  No excuses!

++++ End Moderator Comment ++++

I recently bought a modem, and was dismayed when I found that the Users
Guide was all fluff (I refused to buy the much cheaper "WinModem",
hoping to avoid this sort of problem).  Not a word about the modem's
command set.  I had to download the real reference manual over the web,
in a format that didn't print out well.  It is likely that you would
need this sort of information in order to connect at a lower speed,
which is a common remedy for many different problems with flaky
connections.
++++ Moderator Comment ++++

  So, let's see -- to get your modem working, so you can get on the
  net, you first need to get on the net, so you can download the manual
  -- No wonder I was frustrated!

  I classify this scheme as very low quality, no matter what the bug
  rate is.

++++ End Moderator Comment ++++

For more information on what your expectations for your software
suppliers should be, watch for Cem Kaner's new book _Bad Software_, due
out soon.


++++ Moderator Comment ++++

  Sounds good -- I'll review it when it comes out.

===== NEW POST(S) =====

++++ New Post, New Topic ++++

From: "Phillip Senn" 
Subject: Just thinking

I remember when I was programming in dBASE III, and the VP came into my
office.  "Can you replace all these records' status fields with 0?" he
asked.  "Let's see," I said.....


1299 records replaced.

He looked at me.

I looked at him.

"You mean you just did it?" he asked.
"Yeah" I said, using my no-big-deal look.

He was just so used to these COBOL programmers who required a service
request form to be filled out so that it could be added to the backlog
and gotten around to in the next 6 months.

I was too naive to know the power of what I had just done.

++++ New Post, New Topic ++++

From: John Cameron 
Subject: FW: Windows '98 source code.

    I thought a peek at the Windows 98 source code might help solve
your on-the-road problems.

Windows '98 source code.
     TOP SECRET Microsoft(c)  Code
     Project: Chicago(tm)
     Projected release-date: Summer 1998

    #include "win31.h"
    #include "win95.h"
    #include "evenmore.h"
    #include "oldstuff.h"
    #include "billrulz.h"
    #define INSTALL = HARD

    char make_prog_look_big[1600000];

    void main()
    {
        if (first_time_installation)
          if (still_not_crashed)
            if (detect_cache())
              if (fast_cpu())
              {
                  set_mouse(speed, very_slow);
                  set_mouse(action, jumpy);
                  set_mouse(reaction, sometimes);
              }

        /* printf("Welcome to Windows 3.11"); */
        /* printf("Welcome to Windows 95"); */
        printf("Welcome to Windows 98");

        if (system_ok())
            system_memory = open("a:\swp0001.swp", O_CREATE);
    }

++++ Moderator Comment ++++

  For John:  I assume this is meant as humor. ;-)

  For Microsoft's lawyers:  The Justice Department is on the other
             line.  ;->

  I put this in because I thought it was funny, although it's pretty
  common these days to take shots at Microsoft.  I have put it in as
  an experiment.

  Is this type of humor desirable for our newsletter?

++++ New Post, New Topic ++++

From: Roland Petrasch 
Subject: Performance-Test

I'm looking for definitions of, and books about, performance testing.
Who can help?

**** Moderator question: *****

  Do you do economic justifications for your quality systems/efforts?

  If so, can you share your numbers with us?

  If not, how do you decide "how much quality you can afford"?

The Software-Quality Digest is edited by:
Terry Colligan, Moderator.

And published by:
Tenberry Software, Inc.     

Information about this list is at our web site,
maintained by Tenberry Software:

To post a message ==>

To subscribe ==>

To unsubscribe ==>

Suggestions and comments ==>

=============  End of Software-Quality Digest ===============

Last modified 1998.7.6. Your questions, comments, and feedback are welcome.