Carrie Baker wondered: <<I would like to prepare a feedback form for
our user guides.>>
Be aware that response rates are very low (5% is considered
surprisingly good), and that the people who reply are rarely a truly
representative sample of the overall audience. So you have to use
this kind of feedback with considerable care. Never act solely on the
basis of a feedback form unless it reports an objectively
demonstrable problem (e.g., the page references are wrong or an
entire topic is missing) rather than a subjective complaint such as
"this sucks" or "this would be better if you eliminated all forms of
the verb 'to be' from the documentation."
<<Are there such forms online that I could look at? What sort of
things could I ask?>>
<<(our marketing-oriented boss likes to make forms with questions
like "To what extent is the documentation helpful to you?" and you
have to circle a number from one to five to signify the extent of
helpfulness.)>>
This kind of metric is actually entirely useless, as you can see if
you think about it for a moment:
"Hey... we got a 3 out of 5 on our 'helpful' score!"
"So... what does that mean?"
"It means we need to make the docs more helpful."
"Sounds reasonable. How are we going to do that? I mean,
_specifically_ how?"
"Umm.... we could guess?"
If you rely solely on this kind of metric, you end up guessing at the
problem, and if you guess wrong, you end up making the problem worse.
(Example: Microsoft and the disastrous modification to revision
tracking in Word XP.) If your boss is hung up on this kind of
question, make sure the question is useful. Compare, for example, the
following questions:
"Is the index useful? (1 = useful, 5 = useless)
"Is the index sufficiently long? (1 = long enough, 5 = too short)
"Were the index keywords we provided sufficiently clear? (1 = crystal
clear, and enough synonyms; 5 = you used words I did not understand
or was not familiar with, and did not use the words I was looking for)
The first question gets a 5 on its own scale: it's useless because it
tells you nothing about what to do. The second isn't bad, but it
doesn't earn a perfect score because "too short" doesn't tell you why
the index falls short or define what is missing (keywords? topics?
cross-references? synonyms?). The final question comes closest to top
marks because the answer defines how you must respond, but it loses
points because it combines two things (clarity and synonyms); a better
choice would split it into two separate questions, one for clarity and
one for synonyms.
See the thought process involved in coming up with a good question?
For a metric to be useful, it must provide objective information you
can act upon. To find out whether a question works, ask a colleague
to answer it, then ask yourself: "OK, now what do I do based on that
answer?" If you _know_ what to do (add keywords?) and don't
have to guess (increase the line spacing?), the question is
effective. If you don't, revise the question and ask someone else
until you get a useful answer.
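To show what acting on that test might look like, here's a small,
purely illustrative Python sketch (the question wording, scores, and
actions are all invented) that ties each of the two split index
questions to the specific fix a bad score would trigger:

# Hypothetical sketch: each question is paired with the concrete action
# a bad score would trigger, so no answer leaves you guessing.
questions = [
    ("Were the index keywords clearly worded?",
     "Reword the unclear keywords."),
    ("Did the index include the synonyms you looked for?",
     "Add the missing synonyms as index entries."),
]

# Invented responses on the same 1 (good) to 5 (bad) scale.
responses = {
    "Were the index keywords clearly worded?": [1, 2, 1, 2],
    "Did the index include the synonyms you looked for?": [5, 4, 5, 4],
}

for question, action in questions:
    scores = responses[question]
    if sum(scores) / len(scores) >= 3:
        print("Action needed: " + action)
# Prints only "Action needed: Add the missing synonyms as index
# entries." -- an answer you can act on without guessing.

The point is the mapping, not the code: every answer leads directly to
a known action, so nobody has to guess.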