TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Subject: Re: Readability tools? Just say no!
From: Steven Jong <SteveFJong -at- AOL -dot- COM>
Date: Thu, 25 Mar 1999 08:19:06 EST
To respond to the original question: no, I don't know of any FrameMaker-native
tools for measuring readability. I would recommend copying and pasting samples
from Frame into Word, which does have readability tools built in. (You only want
to measure randomly selected small blocks of text anyway, so this isn't
unreasonable.)
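For what it's worth, "randomly selected small blocks" can be automated rather than eyeballed. Here is a minimal sketch in Python; the 100-word block size and three-sample count are arbitrary assumptions on my part, not a standard from the readability literature:

```python
import random

def sample_blocks(text, block_words=100, n_blocks=3, seed=None):
    """Pick a few random consecutive blocks of roughly block_words words,
    suitable for pasting into a readability tool."""
    words = text.split()
    # carve the document into consecutive blocks of block_words words
    blocks = [" ".join(words[i:i + block_words])
              for i in range(0, len(words), block_words)]
    rng = random.Random(seed)
    return rng.sample(blocks, min(n_blocks, len(blocks)))
```

Pass a seed if you want the same samples on every run (handy when comparing drafts of the same document).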
>> [T]ry this trick to test whether [readability-index software is] any
>> use: take a simple sentence and arrange the words in random
>> order (better still, arrange them maliciously so the sentence
>> makes absolutely no sense, or even says the opposite of what
>> you intended to say). If the software provides a comparable
>> readability index for both versions of the sentence, demand
>> your money back.
Hmmm... So if I write the sentence "The dog is white" and change it to "The
dog is black," and the tool says both sentences are equally readable (as they
will), I should demand a refund? Gee, what do you want? Readability indexes
purport to measure readability, not meaning.
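The reason is purely mechanical: the standard indexes are computed from word, sentence, and syllable counts alone, so any rearrangement of the same words must score identically. A small sketch of the Flesch Reading Ease formula makes this concrete (the vowel-group syllable counter below is a crude heuristic of my own, not how Word or any calibrated tool counts syllables):

```python
import re

def count_syllables(word):
    # crude heuristic: each run of vowels counts as one syllable
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

original  = "The dog is white."
scrambled = "White the is dog."
# same words, sentences, and syllables -> identical score, by construction
```

So the scrambling "trick" tells you only what the formula's definition already tells you: it never looks at meaning.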
>> There's almost no correlation between the main
>> readability indexes and actual readability, and there won't be
>> for a good long time to come until someone develops a tool
>> that can parse the content of text in the specific context of a
>> well-defined audience.
This is a sweeping indictment, but it is not true. Readability formulas were
originally developed to predict the ability of schoolchildren to comprehend
written text. The parameters of the original formulas (starting with the
Flesch index) were adjusted heuristically until their predictions matched
actual reading-test scores. Actually, there is a good correlation between
readability-index scores and actual readability in certain domains. Textbook
publishers and magazine editors still use readability formulas to assess the
readability of their products.
Now, I will say that the "domains" in question are limited. Flesch studied
Iowa elementary-school students in the 1920s or 1930s, I believe. I am not
aware of any work to calibrate formulas to contemporary adult technical
readers (although neither am I saying such work has not been done). Not many
people know how to measure a document correctly. (You can't just select all
the text and run it through the hopper.) And even if correctly measured,
readability doesn't address the effectiveness or design of graphics, which in
today's technical documents can be equally important.
I would not want to base any action or decision solely on a readability index,
any more than I would want to judge a ballplayer strictly on batting average.
However, readability does have some validity, and could reasonably be
considered as part of a larger set of measurements.
-- Steve
=========|=========|=========|=========|=========|=========|=====
Steven Jong, Documentation Team Manager ("Typo? What tpyo?")
Lightbridge, Inc., 67 South Bedford St., Burlington, MA 01803 USA
mailto:jong -at- lightbridge -dot- com -dot- nospam
781.359.4902 [voice]
Home Sweet Homepage: http://members.aol.com/SteveFJong