============================================================
           Software-Quality Discussion List

      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
============================================================
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.
moderator@tenberry.com               http://www.tenberry.com
============================================================
February 19, 1998                     Digest # 006
============================================================

====IN THIS DIGEST=====

    ==== MODERATOR'S MESSAGE ====


    ==== CONTINUING ====

      Size of pieces of code
        Jerry Weinberg 

      RE: Software-Quality Digest # 005
        David Bennett 

      Customer reactions
        Jerry Weinberg 


    ===== NEW POST(S) =====

      A Question
        BIKER ON THE ROAD 

      Questions for David Bennett
        Barbara Truesdale  

      Software Quality Newsletter.
        Jim Cook 



==== MODERATOR'S MESSAGE ====

  Immediately after lamenting in the last issue that not
  many postings were coming in, and worrying about how I 
  would find the time to write filler material, we received 
  five posts!

  Keep them coming!  I'd much rather read your postings
  anyway!

  I also note that almost everyone is "grizzled" or working
  on it! ;-)  Do you have to be "grizzled" to care about
  quality?  (What does "grizzled" mean, anyway?)



==== CONTINUING ====

++++ New Post, New Topic ++++

Subject: Size of pieces of code
From: Jerry Weinberg 

>From: "Miller, Marion" 

>Cut your project into tiny, tiny pieces.

>Each piece will be easy to understand and program.

>A unit of code longer than one page/screen is too long.

>Progress is rapid since each unit is error free.

>Problems increase geometrically with the complexity of the
>coding.

>++++ Moderator's Revised comment: ++++

>  These sound like good ideas!  We use them in our own
>  development.

>  Do you have any specific rules of thumb to help
>  get the right pieces?

Here are a few I've found successful over the years:

1.  The "one screen >= one unit of code" limit is a rule of
thumb.  This is a pretty constant rule because human
capabilities aren't changing much - though screen sizes are,
so perhaps this rule should be modified for those who have
very large screens.  Perhaps the tried and true paper size
is a good rule - if it won't fit (uncrowded) on one side of
one sheet of paper, then it's too big.

2.  What size piece can you build so that the probability 
of <= 1 fault is, say, .95?  That's a size limit, because 
of the rule of thumb about testing: Software (or hardware,
for that matter) with no faults is much easier to test than
software with faults; software with more than one fault is
non-linearly harder to test than software with zero or one
fault.
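Rule 2 can be made concrete with a back-of-the-envelope
calculation.  Assuming, purely for illustration, that each
line independently carries a fault with probability p, the
faults in an n-line piece are binomial, and the size limit is
the largest n that keeps the probability of at most one fault
at .95:

```python
# Back-of-the-envelope sketch of rule 2 (my own illustration,
# not from the post): assume each line independently carries
# a fault with probability p, so the fault count in an n-line
# piece is Binomial(n, p).

def prob_at_most_one_fault(n, p):
    """P(0 or 1 faults) in an n-line piece with per-line fault rate p."""
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

def max_piece_size(p, target=0.95):
    """Largest n whose probability of <= 1 fault still meets the target."""
    n = 1
    while prob_at_most_one_fault(n + 1, p) >= target:
        n += 1
    return n

# With a (hypothetical) fault rate of 2 faults per 100 lines:
print(max_piece_size(0.02))
```

With that hypothetical fault rate the limit comes out in the
neighborhood of one short screen of code, which is at least
consistent with rule 1.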

3.  When the connections between pieces begin to be as
complicated as the pieces, the pieces are too small.  
You are just pushing the complexity into the interfaces.

4.  Notice that the size limits are not fixed, but depend 
on the complexity of what you're working on.  Don't be 
afraid to have a piece of code that's two lines long, if 
that's what it takes to make it understandable.  The 
underlying rule is based on the idea that code will be read 
many more times than it is written.  Therefore, if it's 
harder to understand than to write from scratch, it's too 
big (or just plain written wrong).

My bio: 40+ years in software, 40+ books about it, and 
still at it.  That should tell you something.  To find
out more, take a look at my website.

Jerry
website = http://www.geraldmweinberg.com
email = hardpretzel@earthlink.net

++++ Moderator Comment ++++

  Jerry is the person responsible for starting me thinking
  about software quality and process a long, long time ago
  (in a galaxy far, far away -- NOT! ;-)  He wrote a book
  called the "Psychology of Computer Programming", which
  is a bit dated (I still read it for the stories) but
  which made me think!

  I like the suggestion that the correct size is what
  you can reliably write with no defects, but I think
  what other people can easily understand is a more
  important size limit. 

  Any suggestions for an understandability metric?



++++ New Post, Same Topic ++++

Subject: RE: Software-Quality Digest # 005
From: David Bennett 

>In SQ#003, Marion Miller wrote a bit of very good advice,
>with which I agree.  In an attempt to generate some
>discussion and maybe raise a controversy or two, my
>response didn't come out the way I intended.  Several
>people criticized me for "dumping" on Ms. Miller.

I thought they were valid comments, which might possibly
have been expressed slightly more diplomatically.

>* Cut your project into tiny, tiny pieces.
>* Each piece will be easy to understand and program.
>* A unit of code longer than one page/screen is too long.
>* Progress is rapid since each unit is error free.
>* Problems increase geometrically with the complexity of the 
>  coding.

At first I agreed - but after further thought I disagree.  
Strongly.  With provisos.

Reason 1: The problem is that as you cut into smaller pieces, 
the pieces become simpler but the connections become more 
complex.

If you need to write 100,000 lines of code and you cut into 
pieces of 20 lines, there are 5,000 pieces.  Every piece is 
easily understood, but the list of pieces is a document of 
at least 5,000 lines, which is totally incomprehensible. 
This is not a solution!

Reason 2: This strategy ONLY works if you have a reasonable 
idea what the pieces will be, which is only possible if you 
have written a similar system before.  If you get your 
partitioning into pieces wrong, you have an unmitigated 
disaster.  

In other words, this strategy on its own just moves the 
problem from one place (the code) to another place (the 
analyst).  It's a part of the solution, but nowhere near 
enough.

My own experience (which closely parallels much written by 
others much cleverer than me) is that you break big systems 
into smaller systems, systems into sub-systems, sub-systems 
into modules, modules into sub-modules or classes/objects 
(if you have them) and then into functions.  

At each level you aim for a fan-out of about 5-8, but in 
practice greater fan-outs are permissible as long as the 
complexity at that level is low.  

At the bottom level you aim for functions/methods of 5-25 
lines of code.  Again in practice, as long as the median is 
in this range, some extreme values are tolerable.
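These fan-out numbers can be sanity-checked with a little
arithmetic (my own illustration, not part of David's post):
100,000 lines at roughly 20 lines per function is about
5,000 leaf functions, and a uniform fan-out of f needs about
log base f of 5,000 levels to cover them:

```python
import math

# Sanity check of the fan-out numbers (illustration only):
# ~5000 leaf functions (100,000 lines at ~20 lines each),
# with a fan-out of f at every level of the hierarchy.

def levels_needed(leaves, fan_out):
    """Depth of a uniform tree with the given fan-out covering `leaves`."""
    return math.ceil(math.log(leaves) / math.log(fan_out))

for f in (5, 8):
    print(f, levels_needed(5000, f))
```

A fan-out of 5 gives six levels and a fan-out of 8 gives
five, which matches the system / sub-system / module /
sub-module / function hierarchy described above.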

The major design effort goes into the interfaces between 
sub-systems and modules.  Interfaces, interfaces and more 
interfaces.  External data structures are just persistent 
interfaces.

If you get your interfaces and data structures right, the 
coding is easy.  Anyone can do it.  If a programmer gets a 
chunk wrong, you throw it out and rewrite it.

If you get those wrong, the programmers can't fix it.  It 
won't fly.  When a butterfly flaps its wings in a tiny 
function somewhere, you get earthquakes and blizzards 
somewhere else.

Personal view: that overall structure (the architecture) 
can usually be done by only one person or a tiny team and if 
it's done right the software can live forever.

Regards,
David Bennett
[ POWERflex Corporation   Developers of PFXplus ]
[ Tel:  +61-3-9888-5833   Fax:  +61-3-9888-5451 ]
[ E-mail: sales@pfxcorp.com support@pfxcorp.com ]
[ Visit our Web Site:           www.pfxcorp.com ]

++++ Moderator Comment ++++

  The concern of getting the structure right is why
  I wrote my original, flip answer to Ms. Miller.

  Any suggestions on *how* to get the structure right?



++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Customer reactions

David Bennett:
> In our experience the customers are relatively tolerant
>of newly-released features which don't work quite right in
>the first release.  They quite like the opportunity to
>provide feedback and influence the next release.

>We often strengthen validation testing based on feedback.
>Customers may come to depend on "features" which were not
>part of the original specification.

>Those same customers are totally intolerant of existing
>features which stop working or change in the most minor
>respect.  They will refuse to use (or even pay for) an
>upgrade on the basis of a single minor incompatibility,
>even where (in our opinion) we were fixing a bug from
>the previous version.

For a further elaboration of this kind of customer reaction,
depending on customer type, see the article by Johanna 
Schwab in the most recent American Programmer.  Or, if you 
can't afford that pricey mag, read it on the web; look at
the Software Engineering Management Essays page.

Jerry
website = http://www.geraldmweinberg.com
email = hardpretzel@earthlink.net

++++ Moderator Comment ++++

  This is an interesting article and worth reading,
  but it is by Johanna Rothman, not Schwab.

  Johanna suggests that there are different kinds of
  customers who want different kinds of quality.  Her
  paper certainly matches my experiences!

 

===== NEW POST(S) =====

++++ New Post, New Topic ++++

Subject: A Question
From: BIKER ON THE ROAD 

Hello SQ,

A while ago I posted a message listing
the problems we face in working on a
legacy system with the following characteristics:
1. Big system with 3000 modules.
2. Assembler programming language.
3. Multiprogrammer environment.
4. Old: development started in 1966.
5. Rapidly evolving, with 12 new functional
   changes every year!
6. Many enhancements are purchased from other
   vendors with radically different standards.

I promised that I would share my solution with
the list in a later post; this is it. :)

The problems are caused by:
1. The base system is very poorly documented.
   Any attempt to document such a system
   in one shot will fail due to the time and
   effort costs.
   A team should be formed to write high-level
   review documents about the packages, with
   detail added over time and as changes are
   implemented in the system.
2. New standards should be made to govern
   new development in assembler coding
   and the documenting of enhancements.

I would appreciate any updates to the above,
as I am very interested to hear from the list
members about their experience.
Biker

++++ Moderator Comment ++++

  We have a similar problem of updating old software that
  was written with few comments or standards (or taste!)
  Unfortunately, I managed (indirectly) the writing of all
  of it! (I'm paying for past sins/inattention.)

  A technique that we have found to be very powerful for
  maintaining code that you have taken over, but which is
  not commented or documented is:

  Whenever you are reading code, and finally understand
  what it does, write it down -- immediately, and as
  comments in the source code.  It's surprising how
  quickly this improves the understandability of old
  code.  You spend a little more time writing the
  comments, but they are pretty easy to do -- after all,
  you just spent 40 minutes figuring out what that
  piece of code does and/or why it is coded that strange
  way!  Take an extra 10 minutes to write your understanding
  down, and you'll likely save 39 minutes next week
  when you next pass by the code.

  Interestingly enough, this technique helps even if
  the comments are wrong!  You still save the time, and
  the comments seem to be easier to detect logical
  inconsistencies in than the code itself.  Be sure to
  update the comments as your understanding improves.

  The reason for writing the new-found understanding
  into the source code is that the next maintaining
  programmer will be sure to find your documentation.
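  The technique might look like this in practice (a
  hypothetical sketch - the code and names below are
  invented for illustration, not taken from anyone's
  system):

```python
# A hypothetical before/after for the comment-as-you-understand
# technique.  The maintainer inherits a cryptic test like
#
#     if n != 0 and (n & (n - 1)) == 0: ...
#
# spends 40 minutes decoding it, and then writes the understanding
# down in the source immediately:

def maybe_grow(table, n):
    # (n & (n - 1)) clears the lowest set bit, so it is zero exactly
    # when n is a power of two.  The table doubles only at 1, 2, 4,
    # 8, ... entries; growing at any other size would waste memory.
    if n != 0 and (n & (n - 1)) == 0:
        return table + [None] * n   # double the capacity
    return table

print(len(maybe_grow([None] * 4, 4)))
```

  The next reader gets the 40 minutes of analysis for free,
  right where it's needed.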



++++ New Post, New Topic ++++

From: "Truesdale, Barbara" 
Subject: Questions for David Bennett

Hi.  I'm new to the group.  I work for a state agency in 
Austin, TX.  I'm also fairly new to the world of quality
(as a result, I've been taking lots of classes and doing lots
of reading).  I've been assigned to develop a quality
assurance program for our agency's development efforts.  

I have a couple of questions for David Bennett who posted 
recently.  I'm not trying to start an argument; I'm only 
trying to understand the intent of his message.  He said his 
company takes quality very seriously.  But then he mentioned 
something that seems to conflict with what I've been taught 
in recent courses.  He mentioned that they "rarely start 
with good specifications.  We create them as we understand 
what is possible, as compared to what is useful, what can 
be documented, and what can be sold."  Can I have some 
clarification, please?

I've been taught (and, of course, am trying to teach) that
specifications/requirements are an essential part of the 
quality process - that if you don't have good specifications, 
the quality of the work will be less than desired.  I'm 
getting some resistance from both sides of the fence - users 
and developers - because (I think) putting together really 
good requirements is rather hard to do, especially when we
haven't been taught how to gather requirements until very 
recently.

Is the way David's company does business something that 
they've learned along the way?  Is it a compromise with the 
users?   

Thanks!
bat

++++ Moderator Comment ++++

  Last issue, I told David that I didn't think there was
  much of a difference between shrink-wrap developers and
  in-house ones, but Barbara's post points one out for me:

  It's a lot easier to develop software for users who are
  requesting the software than it is when you are imagining
  a want and trying to build something that will sell
  commercially, particularly if you are building something
  that never existed before.



++++ New Post, New Topic ++++

From: Jim Cook 
Subject: Software Quality Newsletter.

Terry,

I found your newsletter subscription because of a search on 
a related topic. I've received two so far and find the 
information useful. I liked the fact that you reprinted 
Marion Miller's comments and revised your comments on later 
reflection. I also appreciate the reference to books and 
the one by Watts Humphrey was new to me. 

I saw your note about contributions and want to encourage 
you to persevere even if there's not much interaction yet. 
I've passed the newsletter on to several associates and 
added some of my perspectives on several topics. Use as you 
see fit.

I also am old, grizzled and opinionated and occasionally 
cranky. I have been developing / managing software since 
1964. Some of the books and authors that I have found 
immensely valuable over a number of years are:

Gerald Weinberg - virtually anything he's written, but
several things in particular:

Quality Software Management - a 4-volume set that covers
every part of the development process and a lot of things
that most people don't consider part of the process.

Handbook of Walkthroughs, Inspections, and Technical
Reviews - the best, and one of the few that I am aware of,
on this quality practice.  A bit dated now, but the concepts
and issues are completely relevant.

Peopleware - Tom DeMarco and Timothy Lister

The Mythical Man-Month - Fred Brooks

I have also found all of the books by McConnell to be well 
worth reading and rereading.

A few comments on Marion Miller's comments and a few of my 
own.

Snipped --------------------------
From: "Miller, Marion" 

Cut your project into tiny, tiny pieces.

Each piece will be easy to understand and program.

A unit of code longer than one page/screen is too long.

Progress is rapid since each unit is error free.

Problems increase geometrically with the complexity of the 
coding.
EndSnip ----------------------------

I generally agree with cutting into tiny, tiny pieces since 
the normal tendency is to leave the pieces too large and 
thereby miss some significant items until too late. However,
cutting too small causes other problems. I find that the 
"right" size is something that is "conceptually whole" but 
no smaller. I like a quote from Einstein:

"Make everything as simple as possible but no simpler."

That can be paraphrased into:

"Make everything as small as possible but no smaller."

The unit of code "longer than one page/screen is too long" 
may be a nice goal, but it really isn't practical in many 
cases (particularly the screen view).  Again, I think the 
right unit is a conceptual whole, and it may or may not 
fit on one page/screen.  If it takes more than 10 screens, 
it probably isn't a single conceptual whole and should 
be looked at.

The overall thrust of divide and conquer and control 
complexity is clearly valuable.

A couple of things I have learned over the years.

Understand the whole before architecting, designing and 
implementing the first release. 

Basically, this is the upfront requirements engineering / 
development that must be done for any product, but 
especially for large systems.  You never need to implement 
everything in the first release (or even in many subsequent
releases).  You do need to understand all of the 
pressures/wants/needs that can't be satisfied yet, so that 
the architecture can handle them in later releases.  This is 
very hard to do in many company cultures, because "no one
is coding and we have a deadline".

Developers and Marketing (or anyone else in the company) are 
not the customer. You're only a customer if you buy it 
and/or use it.

Involve the customer as part of the design team right at the 
start and keep them on the design team forever. Many 
organizations shield development from the customer and vice 
versa. I have seen many heated arguments about what the 
customer wants or doesn't want between marketing and 
development and QA and docs, etc. Many years ago, I learned 
to resolve these arguments by bringing a real customer into 
the room - even if it's just a phone call or email question. 
It continues to amaze me how receptive everyone is when a 
customer says: "I don't like/understand/need this and I 
won't buy it."  Everybody starts to focus on what the 
customer actually wants and would buy.  It's rarely 
anything that the argument was about.

Design for Change - Modularize, encapsulate and isolate.

Self explanatory. Things change. When you have to replace 
something it's much easier if you don't have to hit 
everything. You will have to replace something - guaranteed.

Identify all assumptions; then question all assumptions.

All software has assumptions built in. Most of these 
assumptions are invisible to everyone. Often assumptions 
are based on the way things are today and the assumption 
that things will be the same in the future. A good example 
is the Y2K "crisis". The assumption was that the software 
would be replaced before it was a problem (assuming anyone 
even thought that far ahead). 

One of the best ways to surface "invisible" assumptions 
is to ask the development team to imagine something that 
would make their code unworkable.  Here are a couple of 
examples that illustrate the concept.  Does your code deal 
with:

share prices in fractions? - what happens when the NYSE 
goes decimal?

currency - what happens when new currencies appear? Will 
you need more places to enter/display/print/convert 
multiple currencies? 

phone numbers - what happens when we get extra digits 
because we run out? (The world standard length right now 
is 22 digits).

regulatory or business rules - what happens if the rules 
change?

any element of data or logic that is man made and therefore 
changeable - what happens if the rules change for that 
data/logic item. What happens to the system overall?

A good question to ask is: suppose I change the format of 
every data element in the system.  What happens to my code 
when I do that?
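The "identify, then isolate" advice can be sketched in code
(a hypothetical illustration, not from the post above; the
phone-number limit here is just a named stand-in for
whatever today's rule happens to be):

```python
# Sketch of "identify all assumptions, then isolate them"
# (hypothetical names and values, for illustration only).
# Instead of scattering today's phone-number rules through the
# code, name the assumption and put it behind one boundary, so
# a rule change touches one place.

MAX_PHONE_DIGITS = 15   # today's assumption, named and visible

def normalize_phone(raw: str) -> str:
    """Strip punctuation and enforce the current length assumption."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if not 1 <= len(digits) <= MAX_PHONE_DIGITS:
        raise ValueError(f"unexpected phone length: {len(digits)}")
    return digits

# When extra digits arrive, only MAX_PHONE_DIGITS (and this one
# function) change; callers are untouched.
print(normalize_phone("+61-3-9888-5833"))
```

The point is not the particular limit - it's that the
assumption has a name, lives in one place, and fails loudly
when the world changes out from under it.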

Build the "Hello World" version of the product as fast as 
possible and never break it after that. 

Basically this means implement the base stuff as fast as 
possible and do daily builds once the base stuff is done. 
If the build ever breaks, stop everything until the build 
works.
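One way to read "never break it after that" is to make the
Hello World path the first gate of the daily build.  A
minimal sketch, with assumed names (not from the post
above):

```python
# Minimal sketch of a daily-build smoke gate (assumed names,
# for illustration): run the product's most basic path first;
# if it fails, the build stops right there.

import subprocess
import sys

def smoke_test(command):
    """Run the base scenario; any non-zero exit fails the build."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        print("SMOKE TEST FAILED - stop and fix the build first")
        return False
    return True

if __name__ == "__main__":
    # Stand-in for the real product's "Hello World" scenario:
    ok = smoke_test([sys.executable, "-c", "print('hello world')"])
    sys.exit(0 if ok else 1)
```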

Bad news early!

Let more users see/try/beat on it earlier rather than fewer 
users later. Early problems give you the time to react. 
Late problems kill releases and companies.

Jim Cook

++++ Moderator Comment ++++

  Lots of good ideas to agree with!

  Thanks for the positive feedback...

  I also find daily builds to be extremely helpful.
  I find the immediate feedback tremendously helpful,
  but we never "stop everything until the build works."
  We make it one person's top priority, but everyone
  else goes on.  Usually we have a working build the
  next day, but sometimes it goes 2 or 3 days.

  Any comments on why stopping everything helps your
  quality and productivity?


=============================================================
The Software-Quality Digest is edited by:
Terry Colligan, Moderator.      mailto:moderator@tenberry.com

And published by:
Tenberry Software, Inc.               http://www.tenberry.com

Information about this list is at our web site,
maintained by Tenberry Software:
    http://www.tenberry.com/softqual

To post a message ==> mailto:software-quality@tenberry.com

To subscribe ==> mailto:software-quality-join@tenberry.com

To unsubscribe ==> mailto:software-quality-leave@tenberry.com

Suggestions and comments ==> mailto:moderator@tenberry.com

=============  End of Software-Quality Digest ===============

