Software-Quality Discussion List
Digest # 017



============================================================
           Software-Quality Discussion List

      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
============================================================
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.
moderator@tenberry.com               http://www.tenberry.com
============================================================
May 15, 1998                        Digest # 017
============================================================

====IN THIS DIGEST=====

    ==== MODERATOR'S MESSAGE ====

    Back in Business

    Fewer Comments


    ==== CONTINUING ====

    No job listings here, plz
      markw@ncube.com (Mark Wiley)

    Re: Software-Quality Digest # 016 #1
      Jerry Weinberg 

    Re: Software-Quality Digest # 016 #2
      Jerry Weinberg 

    RE: Software-Quality Digest # 016
      "Petersen, Erik" 

    RE: Software-Quality Digest # 016
      David Bennett 


    ===== NEW POST(S) =====

    Eiffel
      David Bennett 

    Intro with a special bonus:  a correction for the moderator
      Julie Clare Zachman 

    Software Testing Tools
      Rodolfo.Moeller@temic.de

    Book Reviews
      "Phillip Senn" 

    Book Review
      "Danny R. Faught" 



==== MODERATOR'S MESSAGE ====

  Back in Business

  Several of you may have noted the long delay between #016 and #017.
  The delay was caused by a series of medical problems, which have
  turned out not to be serious (or so we think).

  In any case, I apologize for the delay.  I hope that this long delay
  doesn't destroy the several interesting threads we have going.

  If it appears that another similar episode will happen, I will find
  an alternate moderator, or switch to an unmoderated operation.

  Anyway, I'm glad to be back!  (Especially considering the
  alternative! :)

  I have a few more posts waiting, so the next issue should come out
  with a much smaller delta time.


  Fewer Comments

  At least one person suggested that I'm getting carried away with my
  comments, serving more as an advocate than a moderator. I think the
  complaint has validity, so I will try to "tone it down" a bit...



==== CONTINUING ====

++++ New Post, New Topic ++++

From: markw@ncube.com (Mark Wiley)
Subject: No job listings here, plz

> ++++ Moderator Comment ++++
>
>   I wrote back to Sean saying that I didn't think it was appropriate.
>   Upon reflection, I thought maybe others might have different
>   views than me.  So I'm opening this for discussion -- do we want
>   job postings?
>
>   My reasons why not:
>
>     1) We are geographically very diverse, from many different
>        countries, not to mention states, territories, etc.  Any one
>        job would be likely to interest only a very few readers.
>
>     2) We are positionally diverse: so far, I've identified testers,
>        programmers, Software Legends, QA engineers, educators,
>        students, and moderators!  Plus managers of all the preceding
>        (except for students and Software Legends! ;-).  Again, any
>        one job would likely only interest a very few readers.
>
>     3) There are other places where jobs can be posted.
>
>     4) Posting jobs doesn't scale very well, and I'd prefer to keep
>        focused on the ways of improving software quality.
>
>   What do you think?

I think that there are more than enough ways for people to find out
about what positions are available without posting them here.

And I agree with your "why not" reasons, too :-).

Markw



++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Re: Software-Quality Digest # 016

David Bennett says:
>I'm not advocating a hard and fast position.  I was wavering there
>for a while, but I'm becoming more and more convinced that there
>will always be code (lots of code!) I want to put in the debug build
>but less code (maybe only a little bit less) which I want to ship.
>
>++++ Moderator Comment ++++
>
>  Neither am I.  I'm just saying that, in my experience, the less
>  difference there is between a debug version and the final product,
>  the better things work.  I'm also saying that as we have moved to
>  this position, we haven't noticed any increase in development
>  time or effort.

I'd like to stay out of this fine controversy, but add something about
what "debug code" should look like IF you're going to leave it in the
final product.  Please, please, please do not leave code that produces
cryptic messages that the ordinary user cannot possibly understand.
The messages don't have to be totally self-explanatory, but they should
guide the user to know, for example, that this is not something s/he
can solve alone.  Manuals with error-codes might help, but most users
of consumer products wouldn't know to look there. Maybe just an error
handler that says "Call customer service and tell them you got a 'CODE
XYZ' message" would be sufficient.  Perhaps others have other
suggestions as to how to keep debug code from making users feel stupid
(a very poor quality practice).

Jerry
website = http://www.geraldmweinberg.com
email = hardpretzel@earthlink.net

++++ Moderator Comment ++++

  Our solution was to reserve a range of error numbers for debug or
  assert failures, and to change the error-message printer to say that
  the user isn't expected to understand this error and isn't at fault
  for it, but would he be so kind as to forward it to us.  This seems
  to work fairly well.
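  For concreteness, here is a minimal C sketch of a scheme like this.
  The range boundaries and the function name report_error are invented
  for illustration, not Tenberry's actual code:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical convention: codes 9000-9999 are reserved for
   internal (debug/assert) failures the user is never at fault for. */
#define INTERNAL_ERROR_FIRST 9000
#define INTERNAL_ERROR_LAST  9999

/* Print a user-appropriate message; returns 1 if the code was in
   the reserved internal range, 0 otherwise. */
int report_error(int code)
{
    if (code >= INTERNAL_ERROR_FIRST && code <= INTERNAL_ERROR_LAST) {
        printf("CODE %d: internal error -- this is not your fault,\n"
               "and you are not expected to understand it.  Please\n"
               "tell customer service you got a 'CODE %d' message.\n",
               code, code);
        return 1;
    }
    printf("Error %d: see the manual for details.\n", code);
    return 0;
}
```

  A call like report_error(9042) produces the customer-service wording,
  while report_error(12) falls through to the ordinary message.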



++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Re: Software-Quality Digest # 016

I said:
> (I work with clients ranging from internal IT organizations to
> shrink-wrap vendors to 24-hour-a-day service providers to embedded
> system builders to life-critical system builders.  All of them
> understand this principle.)
>
> ++++ Moderator Comment ++++
>
>   My point was that for most of the programs I have been involved
>   with, the person paying for the program either never saw the
>   source code, or couldn't/didn't read it, or both!  (The both
>   case is by far the most common.)

I think I see where we've been having trouble understanding each
other. I never said anything about the person paying for the code
being able to see and/or understand the source code.  That sounds
like some old COBOL nonsense, and is absolutely not what I'm talking
about.

I'm talking about clients understanding the following *principle*:

1.  If the original writer of some code is the only one who has seen
it, you'd better assume that at most one person is able to understand
it.

2.  If you buy software that at most one person is able to
understand, you are buying software of unknown quality.

3.  If you buy software whose quality cannot be known, then you're
buying software whose quality is known to be poor.  (This is, of
course, a statistical principle - but it's always proved true in my
experience.)

One special case of this is the "genius programmer" rule: if you know
your software has been developed by a genius because nobody besides
this "genius" can understand it, you know your software is crap.

More generally, clients who don't read code themselves (or don't want
to) can easily apply this rule if they can find out about the
*process* that went into building the software.  If they discover
that nobody besides the developer laid eyes on the code, then they
know they cannot rely upon the quality.  (They might be lucky, but
I've never observed a whole lot of luck in this business when it
comes to quality.)

For instance, they may discover that the testers could not see or
were unable to understand the code when constructing test suites.
Or, they may discover that there was no code reviewing in the
process, or that it was skipped.

Where does this process information come from?  That depends on what
type of software we're talking about.  On a custom-built system, the
buyer just insists on auditing the process, or having a consultant do
it for them.  I get a lot of this kind of work, and I never have to
read anybody's code - I just find out if somebody else did.

Working with many shrink-wrap software vendors, I've seen that those
who do not follow this principle will find their early releases
besieged by severe failures - from which some products never recover.
Of course, when buying shrink-wrap software, you may not have access
to this kind of process information.  In that case, I always
recommend that you wait until somebody else has taken the cost of
field-testing the stuff before you buy it.

Jerry
website = http://www.geraldmweinberg.com
email = hardpretzel@earthlink.net

++++ Moderator Comment ++++

  Now that I understand what you mean, I pretty much agree completely.
  Well said!

  I once had a product besieged by severe failures, and it indeed
  never recovered. :(



++++ New Post, New Topic ++++

From: "Petersen, Erik" 
Subject: RE: Software-Quality Digest # 016

Terry,
    Can you please end your moderator comments with a line of pluses,
like you start them, cos it's really hard to work out where they end.

++++ Moderator Comment ++++

  Good idea!  Will do!  (Also, I always indent my comments 2 spaces.)

+++++++++++++++++++++++++++

Jerry wrote

> No amount of testing can guarantee the correctness of a black box.
> You show me the sequence of tests you have done on a black box; and
> I will show you a program that can be inside that box, give all the
> correct answers to your tests, and never give another correct answer
> again. Without looking inside, you could never know which program is
> inside.
> Jerry
> website = http://www.geraldmweinberg.com
> email = hardpretzel@earthlink.net
>
This has actually happened, apparently.  An early draft of Brian
Marick's "Classic Testing Mistakes" included a footnote that didn't
make it into the final paper, "One person who worked in a
pathologically broken organization told me they were given the
acceptance test in advance. They coded the program to recognize the
test cases and return the correct answer, bypassing completely the
logic that was supposed to calculate the answer."!

Can we use the terms behavioral and structural instead of black box
and white box?  In a recent post to swtest, Boris Beizer said these
terms predate black box and white box, so why do we hold on to jargon
that few people outside of the testing arena can understand?

P.S.  John Cameron, how old is dirt?  If it's in tabloids, it's fresh
that morning!

P.P.S.  With all this talk of puddings, can someone explain why
Christmas puddings have coins in them?

cheers,
    Erik
=========================================================
Erik Petersen, Software Testing Consultant, ANZ Bank, Melbourne,
Australia

Mistakes are to life what shadows are to light. Ernst Junger

Disclaimer: All opinions are my own and I can neither confirm nor
deny whether they coincide with those of my employer, but if they do
I'm not telling!



++++ New Post, New Topic ++++

From: David Bennett 
Subject: RE: Software-Quality Digest # 016

Hi Terry

Nothing new.  Just responses.

*   I am making a slight change in the format of this issue.  When
*   someone makes a very long post, I am trying to add "Moderator
*   Comments" nearer to the text being commented on, even though this
*   breaks up the long post.  I think it will be clear, but let me
*   know if you don't like it.

For me, as the first recipient of this new technique, it's fine except
for 2 things.

1. You should mark the *end* of your comments as well as the beginning.

++++ Moderator Comment ++++

  Good idea!  Will do!  (Also, I always indent my comments 2 spaces.)

+++++++++++++++++++++++++++

2. You are falling into the temptation of writing more than is good for
moderation.  You are becoming a protagonist, not just a referee.  You
should indeed draw out inconsistencies, inaccuracies, hyperbole, edit
for brevity, etc, but if you have something serious to say perhaps you
should post it like the rest of us, and maybe wait until some other
responses come in.

++++ Moderator Comment ++++

  Noted, and agreed.  To help get the discussions off the ground, I
  deliberately participated aggressively.  I should be backing off now.
  Besides, there will be so many posts coming in that I will be busy
  moderating! :)

+++++++++++++++++++++++++++

*   While I agree that you should test your code if at all possible,
*   I'm not sure I agree with how strongly you state things here.
*   In particular, I think a "can't happen" case is important in
*   complicated decision nets, and I don't see any way to test
*   these cases automatically.  I do think the programmer should
*   manually force these kinds of error cases during the development
*   process.

Every line of code in a shipping production version of software which
you can't or won't or don't test in QA is a time bomb waiting to go
off.  This I believe.

If you can't QA it, then use the pre-processor to remove it before the
production build.  If you insist on leaving it in, then you *must* find
a way to ensure that the "can't happen" condition is exactly as wide as
you think it is.  Over time, with maintenance, the relevance of the
programmer's original testing becomes zero.  If you don't actually
check that the "can't happen" code actually takes the right action, how
can you in good conscience ship the code?
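To make the two options concrete -- compile the check out with the
pre-processor, or ship it with a safe fallback -- here is a minimal C
sketch.  The macro name QA_BUILD and the example function are invented
for illustration:

```c
#include <stdio.h>
#include <string.h>

enum command { CMD_START, CMD_STOP, CMD_RESET };

/* Dispatch a command; the default arm is a "can't happen" case. */
const char *dispatch(enum command c)
{
    switch (c) {
    case CMD_START: return "started";
    case CMD_STOP:  return "stopped";
    case CMD_RESET: return "reset";
    default:
#ifdef QA_BUILD
        /* Present only in the QA/debug build; the pre-processor
           strips it from the production build, precisely because
           automated QA cannot exercise it. */
        fprintf(stderr, "dispatch: can't-happen command %d\n", (int)c);
#endif
        return "error";    /* safe fallback, shipped either way */
    }
}
```

During development the programmer can force the "can't happen" arm by
passing an out-of-range value; the production build keeps only the
fallback, which QA can verify.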

So why do I care?  I said before that I have only occasionally run
across situations where the debug and production versions differ, and
those are generally not that hard to find and fix.  I stand by that.

I have often run across situations where the error handling is
erroneous.  I'll go further: *most* software does not handle exceptions
correctly.  It's *very* hard to devise methods of testing things which
"can't happen" but they do and cause software to fail disastrously.
That varies from false positives and false negatives to cascading
failures in the fault handling code itself.

Incidentally, we had one in your software, years ago.  There was a
problem where a program running under the DOS4GW (and DOS4G) extender
recursively ran programs under the same extender.  I forget the
details, but the results were disastrous and most misleading.  I don't
suggest that this is related to the current discussion, just that it
illustrates the difficulty in handling unanticipated conditions fairly
close to home.

++++ Moderator Comment ++++

  Yes, I consider our DOS extenders to be our 'before' situation.  We
  made many, if not most, of the mistakes possible in software
  development.  Although we eventually got to a fairly reliable state,
  it took 3-10 times longer than it would have if we had followed the
  many good ideas David and Jerry have listed.

  It's part of why I get so passionate about some of these issues.

+++++++++++++++++++++++++++

*   I think that Jerry Weinberg has it right here -- being
*   understandable is what matters.  While there is a strong positive
*   correlation between shortness and understandability, squeezing the
*   last few lines out of a segment of code can reduce (sometimes
*   dramatically) understandability.
*
*   Shorter is good, but understandable is much better!

I won't labour the point, but I wasn't talking about squeezing.  I
would not try to justify that making the code 1% shorter but 10x harder
to read is a good trade-off.

However, if you can find a way to re-write a piece of software from
10,000 lines of code to 1,000 or even 100 lines, but the readability
goes from 1st year Computer Science student to PhD, then I say do it.
This type of improvement can happen, for example by rewriting a simple
COBOL or BASIC program using (for example) compiler technology, data
driven software, p-code and virtual machine, etc.

Not everyone agrees with me.

*   Another way of stating your point is "If the code contains large
*   amounts of easily readable, textual logging and debugging
*   material, it's easier to understand."  Exactly why I think it's
*   important to write code this way!

I think you (intentionally?) misunderstood me.  As I said, I want to
*write* the code this way, I just don't want to *ship* it this way.  I
want the *source* code to be very understandable but I don't want the
*object* code to provide undue assistance to amateur sleuths.

++++ Moderator Comment ++++

  It's not intentional...  As you can tell, we don't agree on debug
  versus shipping versions...  At least not yet.  :)

+++++++++++++++++++++++++++

* We agree entirely here, if you read my original statement with
* respect to time savings.  Each iteration, there can only be one such
* saving place. I do worry about a programmer (and the program that
* programmer writes) who writes code that allows you to take more than
* one or two fruitful iterations like this.  But, except for our
* definition of "design," I think David and I are in complete
* agreement on this subject.

Just for interest, the project involved SQL middleware.  SQL is unique
in my experience in that tiny changes in the way of doing things can
multiply into orders of magnitude changes in how long it takes.  We
found that certain applications triggered unexpected behaviour in the
SQL engine, and it took us 3-4 iterations to get them all licked.
Mostly, I agree, we see 1 or 2.

In this case our original "design" concentrated on getting the correct
results.  We had to revisit the "implementation" to achieve the desired
performance without thereby getting incorrect results.  Mostly that is
called "optimisation", but these are semantic points of no great
importance.

Regards
David Bennett
[ POWERflex Corporation     Developers of PFXplus ]
[ Tel:  +61-3-9888-5833     Fax:  +61-3-9888-5451 ]
[ E-mail: sales@pfxcorp.com   support@pfxcorp.com ]
[ Web home: www.pfxcorp.com   me: dmb@pfxcorp.com ]



===== NEW POST(S) =====

++++ New Post, New Topic ++++

From: David Bennett 
Subject: Eiffel

Hi Terry

Another small contribution...

I recently attended a presentation of Eiffel by its creator, Bertrand
Meyer.  He hangs out in Melbourne a lot.

Eiffel has a structure where you develop and debug in a workbench. This
provides highly productive programmer tools including an incremental
compiler for near-instantaneous edit-compile-run cycles. Neat stuff,
based on an intermediate byte code for the bits you changed, but binary
for the bits you didn't.

Also, Eiffel has support for pre-conditions, post-conditions and
invariants, plus strong type checking, and deferred type checking if
needed, which eliminate a lot of standard stupid errors.  You get array
bounds checking free (pre-condition on the array class), no dangling
pointers, garbage collection, and so on.  Saves a lot of stupid bugs,
and just leaves you with the interesting ones.

At some point you push a button and it squirts out ANSI C which you
feed into a C compiler and build an optimised production executable. At
this point the pre-conditions etc can all be disabled.
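Eiffel builds these checks into the language, but the flavour can be
approximated in C with macros that the optimised build disables.  The
REQUIRE and ENSURE names below are invented, not Eiffel syntax or any
standard header:

```c
#include <assert.h>
#include <string.h>

/* Debug builds check contracts; compiling with NDEBUG defined (as
   for the optimised executable) compiles the checks away, much as
   Eiffel lets you disable pre/post-conditions in the final build. */
#define REQUIRE(cond) assert(cond)   /* pre-condition  */
#define ENSURE(cond)  assert(cond)   /* post-condition */

/* Copy at most n-1 chars of src into dst, always terminating. */
char *bounded_copy(char *dst, const char *src, int n)
{
    REQUIRE(dst != NULL && src != NULL && n > 0);
    strncpy(dst, src, (size_t)(n - 1));
    dst[n - 1] = '\0';
    ENSURE(strlen(dst) < (size_t)n);
    return dst;
}
```

Building with -DNDEBUG turns both checks into no-ops, paralleling the
disabled pre-conditions in Eiffel's production executable.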

It's a powerful approach, but it looks impossible to reconcile with
your proposal.  You cannot develop and debug on the shipping
executable, nor would you want to.  Possibly you have to QA both the
debug executable and the shipped version.  Maybe you should plan to
leave in all the pre/post conditions.  I would be interested to get
Meyer's views on this too.  He has pretty strong views on a lot of
other related topics!

[for those who are interested, check out Eiffel on www.eiffel.com]

Regards
David Bennett
[ POWERflex Corporation     Developers of PFXplus ]
[ Tel:  +61-3-9888-5833     Fax:  +61-3-9888-5451 ]
[ E-mail: sales@pfxcorp.com   support@pfxcorp.com ]
[ Web home: www.pfxcorp.com   me: dmb@pfxcorp.com ]

++++ Moderator Comments ++++

  1. We built InstantC, an incremental compiler for C which had a
     nearly instantaneous edit-compile-run cycle, so you don't need
     Eiffel for that.  It is a great way to program, though.  I still
     have old users calling to wish they had InstantC for whatever
     platform or language they are using now.  They felt it gave
     50-100% boosts in productivity.  I certainly miss it.

  2. Does anyone have experience using Eiffel?  It seems like all the
     extra declarations of pre-conditions, etc. would be a big win --
     does it work in practice?

  3. If I were using Eiffel, the only thing QA'd would be the optimized
     versions squirted out.  I see no conflict here.  How you get to
     the executable to be tested is kind of interesting, but not very
     relevant to the points I was making.



++++ New Post, New Topic ++++

From: Julie Clare Zachman 
Subject: Intro with a special bonus:  a correction for the moderator

Terry:

 1) I think the entire quote is "The proof of the pudding is in the
eating"

 2)  The meaning is that the plum pudding you make at Christmas time
for *next* Christmas may or may not "fail", and there's no way to
know the outcome until you put it in your mouth.

I have been reading this Digest since Issue #5.  It is one of the
best I have ever read.  You just can't get this kind of wisdom from
books, in my opinion.  I read  every word and have even started
emailing myself notes with cut and pasted  highlights from each of
the issues.  Then I file the note with the main issue.  Keep up the
good work!

INTRODUCTION
I work in a university research group as a programmer.  My boss (the
principal investigator) is starting a company to build and market a
new radiation treatment machine that is being developed by this
group.  Another company that he started with former employees will
provide the software. It has fallen to me to be a liaison with the
software company and write a customer requirements specification.  I
will  be in a position to affect (and possibly effect) user
interface, functionality, the service module, the physics module
(including self-check and self-calibration), and just about
everything software-related, including quality, of course 8-)

The module that will handle our machine will be based on previously
existing modules.  The existing software has been on the market for
several years.  The  software company has a QA guy, but he is a PhD
medical physicist, and I would not be surprised if he limited his
scope to, say, algorithm testing, statistical analysis, benchmarking
of dose calculation, etc.

I hope to incorporate into the CRS features that will improve the
software quality, without
    a) appearing to be making any assumptions about the existing
software, which I'm not that familiar with (actually, my
understanding is that it is the best around) and
    b) heavy-handedly requiring features that would be an undue
burden to implement and/or would not give the intended software
quality benefit.

I should add that I am not truly a customer.  We are not buying the
software.  There is a mutual goal for us to be able to say to
potential system buyers "the software is available" and for the
software company to have the opportunity to sell more treatment
planning software.

Anybody have any suggestions or words of wisdom for me?  I hope I am
not going beyond the scope of this list, and my apologies if I am.

On the subject of job postings:  Maybe you could have a web page with
postings no more than two lines in length -- the company, title, and
URL. If you don't want to bother with that, I wouldn't be too upset
if I didn't see any job postings in the list. The beauty of this list
is the high information density.

Julie Zachman
Julie Clare Zachman
University of Wisconsin
Department of Medical Physics
zachman@madrad.radiology.wisc.edu
(608) 262-3425

++++ Moderator Comment ++++

  Thanks for the nice feedback!  Can I quote you when I start to
  promote the list?

  I have one suggestion for your situation:  Make it a requirement that
  the software system be fully automatically testable, so that QA time
  is spent on creating new tests, rather than trying to rerun old
  tests.  This should be fairly easy to specify and will have a huge
  impact on overall quality.
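  As a sketch of what "fully automatically testable" can mean in C
  (the functions here are invented stand-ins, not the actual treatment
  planning software):

```c
#include <string.h>

/* A couple of program functions under test (invented examples). */
int dose_scale(int units) { return units * 2; }

/* Built-in self-test: returns the number of failing checks.  The
   shipped program could expose this behind a command-line flag such
   as "--self-test", so QA re-runs the entire old suite automatically
   on every build instead of re-driving it by hand. */
int self_test(void)
{
    int failures = 0;
    if (dose_scale(0)  != 0)  failures++;
    if (dose_scale(21) != 42) failures++;
    if (strlen("calibrate") != 9) failures++;  /* stands in for a real check */
    return failures;
}
```

  With a hook like this in the requirements, old tests rerun unattended
  and QA time goes into writing new ones.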

  Others?  Agree?  Disagree?



++++ New Post, New Topic ++++

From: Rodolfo.Moeller@temic.de
Subject: Software Testing Tools

Hello,

I've just joined this list and I'd like to ask you for your help...

I'm an electrical engineer who develops software for embedded systems
and I'm looking for DOS/WIN95/WINNT testing tools for the C (or C++)
programing language. Most tools I know aren't suitable for embedded
systems development and I'd be grateful if you could give me any
advice about tools or information sources you know. Please feel free
to write also about tools for PC software testing (perhaps they can
be "adapted" to my needs...).

Thanks in advance,

Rodolfo Moeller
Temic GmbH
Nuremberg, Germany.

++++ Moderator Opinion ++++

  I think the testing tools should be built into the application.
  I think this goes double for embedded systems.

  (Shameless plug:)
  Our web site, www.tenberry.com, has several pages about how to add
  full testing automation to your application.  We would be happy to
  help you do so.



++++ New Post, New Topic ++++

From: "Phillip Senn" 
Subject: Book Reviews

I subscribe to the MCSD VB Study Group, and one of the comments
recently made was:

"One book to stay away from is 'Visual Basic 5 Certification Exam
Guide', Metzer and Hillier, publ. McGraw-Hill".

"If you want to see my scathing review of this book, go to Amazon.com
and lookup the book by this isbn number: 0079136710"

I think Amazon.com's book reviews could be a valuable resource to dig
through.



++++ New Post, New Topic ++++

From: "Danny R. Faught" 
Subject: Book Review

Terry Colligan wrote:
> >I wrote up a review of The Craft of Software Testing by Brian Marick
> >that I could contribute.
>
>   Well, I would certainly publish it, assuming you would allow it.

Here's my review for the Software-Quality list - note that the intended
audience is software developers.

Book Review - The Craft of Software Testing, Brian Marick
review by Danny Faught

Now tied for third place in the unscientific book poll in the
comp.software.testing FAQ, _The Craft of Software Testing:  Subsystem
Testing_ is probably the only book that focuses on testing done by
software developers rather than testing specialists.  High points
include the later chapters on reusable software and object-oriented
software, though Shel Siegel's Object Oriented Software Testing is
slightly more recent and probably goes into more detail.  Marick also
recommends _Object-Oriented Software Construction_ by Bertrand Meyer.
The most valuable parts of the book are the test requirements catalogs
and checklists in the appendices.  These checklists are a valuable
resource for a functional tester.

Before reading through the book, read Marick's own review at
http://www.rational.com/connection/books/reviews/marick/index.html.  He
says he regrets assuming that the reader will be working from
well-written formal specifications and acknowledges that no one will
design tests using the rigid format that he proposes.  There are some
valuable things to learn throughout the book, though, so it would be
wise to go ahead and give it a try, but skip over most of the long
examples.

On the Rational web site, Marick lists other references that are useful
for developer-driven testing, including "Testing Made Palatable", by
Marc Rettig, in Communications of the ACM, May 1991, Volume 34, Number
5.  This five-page article (plus Marick's commentary at
http://www.rational.com/connection/books/reviews/rettig/index.html)
may itself be a palatable introduction to testing for the overworked
software engineer.

The Craft of Software Testing, by Brian Marick, Englewood Cliffs, NJ:
Prentice Hall, 1995.  ISBN 0-13-177411-5.  http://www.prenhall.com.
The author may be reached at marick@testing.com.

++++ Moderator Comment ++++

  Thanks, Danny!

=============================================================
The Software-Quality Digest is edited by:
Terry Colligan, Moderator.      mailto:moderator@tenberry.com

And published by:
Tenberry Software, Inc.               http://www.tenberry.com

Information about this list is at our web site,
maintained by Tenberry Software:
    http://www.tenberry.com/softqual

To post a message ==> mailto:software-quality@tenberry.com

To subscribe ==> mailto:software-quality-join@tenberry.com

To unsubscribe ==> mailto:software-quality-leave@tenberry.com

Suggestions and comments ==> mailto:moderator@tenberry.com

=============  End of Software-Quality Digest ===============

Last modified 1998.5.18.