Subject: Manual evaluation using metrics: Unclassified
From: "S.North" <north -at- HGL -dot- SIGNAAL -dot- NL>
Date: Tue, 20 Jul 1993 10:46:14 +0200

I know you mean well, and I can appreciate the problems, so I shall try not to
be too angry or too rude; but please bear with me if I run off at the fingers a
little bit (it's hard to be restrained when your emotions are so deeply
involved).

Measure *what* in God's name!!!!!!!!!!!!!!!!

22 years in software, 10 years as a technical author, and now working as a
software quality assurance consultant have only served to make me extremely
angry at the increasing trend of people who so desperately want to measure
things. Sorry, .... calm down.

OK, back to first principles. What do you define as "quality"? If you cannot
define that then you are wasting your time. I suggest you look at Boehm's
criteria for software, or McCabe, or have a look at ISO/IEC 9126. [software
can teach technical documentation a lot, and vice versa - I intend to write a
paper on this.] However, the end result will only help you some of the way.
Ultimately, you will not be able to escape either the vague ISO 8402
definition, or the more general term from software engineering of "meeting the
requirements". In terms of a manual, you could express this as "fitness for
purpose" - which again begs the question, what is the purpose of the manual?

So, step two... what are the requirements for your manuals? Ease of use?
readability? weight? size? colour? smell? Define what you want your manuals to
do! Specify requirements.... Keep it sensible.... (why should anyone be
expected to remember what's written in a manual when they can't even remember
what day it is?). You should keep in mind the limitations of the media.

While you're defining the requirements, it will probably dawn on you that
without some kind of description of the intended audience you will not be able
to make much progress. What may apply for one audience will be totally
inapplicable for another (in my full-time job I write systems programmers'
documentation for real-time virtual machine operating environments, while as a
freelancer I write, for example, instruction manuals for coffee makers).

Now you should have the basics for a design specification: you have a set of
requirements for the documentation and a set of characteristics for the
target group.

Now comes the hard part. In order to do anything meaningful with metrics you
will have to isolate a set of quality characteristics *THAT HAVE AN INFLUENCE
ON THE MANUAL'S ABILITY TO MEET THE REQUIREMENTS*. This will cause
you to reject 90% of the so-called metrics immediately. Line and word counts
are a TOTAL waste of time ... they will tell you NOTHING (except the line and
word counts). FOG tests and all their ilk will tell you NEARLY NOTHING (a
*slight* improvement). [BTW, I have a little utility that scans a text file
backwards, from the last character to the first. The backwards and forwards
versions give me *exactly the same* "readability" metrics - *I* know which
one is the more readable!] Basically, without wanting to labour the point, you
have to be certain that what you intend to measure actually affects one of the
quality criteria/characteristics that you have identified otherwise you are
wasting your time (e.g. measuring the readability of a manual will not help if
no-one ever actually reads it because it doesn't contain any useful
information or, to take one case from practice, where there is only one manual
for every thirty people and you are lucky if you even get to see a copy much
less read it).
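The backwards-scan trick is easy to reproduce. Below is a minimal sketch (my own illustration, not the author's actual utility) of a Gunning-FOG-style index that approximates syllables as runs of vowels. Because reversing a string preserves the word count, the sentence-ending punctuation, and the vowel groupings within each word, the forward and reversed texts score *identically*:

```python
import re

def fog_index(text):
    """Gunning-FOG-style estimate: 0.4 * (words per sentence + % complex words).
    Syllables are approximated as runs of vowels; a 'complex' word has 3 or more."""
    words = text.split()
    sentences = max(1, sum(text.count(c) for c in ".!?"))

    def syllables(word):
        return len(re.findall(r"[aeiouy]+", word.lower()))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / sentences + 100 * len(complex_words) / len(words))

SAMPLE = ("Readability formulas only count surface features. "
          "They cannot tell whether anybody understands the manual.")

# Reversal preserves every quantity the formula uses, so the two scores match.
print(fog_index(SAMPLE), fog_index(SAMPLE[::-1]))
```

The formula never looks at meaning, only at counts, and every count survives reversal unchanged - which is exactly the point: a metric that cannot distinguish a text from its mirror image is measuring something other than readability.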

Choosing your metrics is going to take a very long time. You will have to
select them to fulfil the following criteria (this list is by no means
exhaustive):

objective and repeatable: whoever does the measurement, and however many
times, must come up with the same figure.

must pertain to the characteristic. Do not confuse feature with component.
Sentence length is a *contributory factor* to readability, but readability
cannot be judged on sentence length alone.

must be controllable. If there's nothing you can do to influence a
characteristic (e.g. the house style prescribes a page layout that is very
unreadable) then you should be careful to only measure the things you *can*
influence. This is often forgotten.... measuring the amount of rainfall every
day is not going to help me reduce the amount of rainfall!

must point to solutions not to problems (see 'controllable'). If you were to
give me a nice set of metrics that told me one of my manuals was useless then
I would have to accept the criticism (much like a comment on a reader's
comment form). If you cannot tell me *why* then it is useless to me. It would
then require so much *interpretation* that I would have to consider it as
being an *opinion* (like a reader's comment form) - which is neither objective
nor reproducible.....
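To make the selection process concrete, here is a toy sketch of the four criteria applied as a filter over candidate metrics. This is entirely my own construction; the metric names and the pass/fail judgements are illustrative placeholders, not verdicts from the discussion above:

```python
# Each candidate is scored against the four criteria in the list above:
# objective/repeatable, pertinent, controllable, points-to-solutions.
# The True/False judgements here are illustrative, not settled conclusions.
CANDIDATES = {
    # name:                (objective, pertinent, controllable, actionable)
    "word count":           (True,  False, True,  False),
    "FOG index":            (True,  False, True,  False),
    "reader opinion":       (False, True,  False, False),
    "avg sentence length":  (True,  True,  True,  True),
}

def survives(scores):
    """A candidate metric is kept only if it meets every criterion."""
    return all(scores)

survivors = [name for name, scores in CANDIDATES.items() if survives(scores)]
print(survivors)
```

The filter is deliberately all-or-nothing: a metric that fails even one criterion (say, it is objective but points to no solution) is rejected, which is why so few candidates survive.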

Sorry to sound off like this but such questions do get my blood boiling. I
have spent the last six years or so pondering the very same question of
metrics and (apart from upsetting a lot of academics) have more or less
concluded that your "peer review" approach is probably the best compromise.

You might, however, consider a *process* approach (ISO 9000, SEI CMM, etc.)...

Whatever, I wish you strength - you will need it.

====== Never use a short word when polysyllabic terminology will suffice. ======
Simon JJ North BA EngTech FISTC Quality Group, Software R&D
north -at- hgl -dot- signaal -dot- nl Hollandse Signaalapparaten BV
================================= Unclassified =================================

