Actually, Microsoft virtually returned to the drafting board to redesign the
Windows interface. It cost them millions, but they did it anyway. Much of
what you see in Windows today is the result of large-scale user testing.
Windows is bloatware, but the bloat is there to satisfy marketing; the
interface itself is usability-driven. Check out the Outlook 2000 interface.
I think it's an excellent interface, considering the crazy quilt of
functions that they named "Outlook". We did a book on Office 2000, and
Outlook was absolutely the worst part of the writing job, because it doesn't
have specific windows or dialog boxes for specific functions. Rather, a
constellation of functions exerts gravitational pull on one another, a kind
of software equivalent of the three-body problem. Still, the interface
works. I have a client who mimicked it, not to cut corners but because their
software also worked well in that paradigm.
My point about usability is not that a particular manual is or is not usable,
but that until you test it YOU DON'T KNOW FOR SURE. Only testing will reveal
usability. Not discussion, not focus groups, not discretionary feedback, not
heuristics, not guesswork, not confidence, not call records. Nothing else,
because everything else is discretionary on the part of the end user. That's
a factor you can control in testing, but not otherwise. If the user doesn't
want to talk to you, or hasn't read the manual, or is trying to be nice, or
just doesn't have the time to call, or assumes that if you wrote a crappy
manual your tech support can't be much better, then you have no loop
closure. And in my experience, most people are astonished at how their
informed guesswork comes apart like a snowball in the oven when users
actually put it to the test. Even users are often astonished when you show
them the results. They'll swear that they just love that neat clickable
image, but when you test them you discover that they never use it, because
they prefer the bland but obvious menu on the left side.
Of course, usability testing must be done correctly. The same can be said
for every aspect of life. Products that crash networks should have been done
correctly, too. Documents that don't enlighten should have been done
correctly, too. Saying "Well, it has to be done right, and that's a reason
not to do it" doesn't seem to me to be a strong argument against testing.
Nor is it a strong argument to contend that if the users *don't* find
something, then they've failed and the whole thing was a waste. You can't
prove a negative. Generally, experience shows that you don't have to worry
about that, because testing ALWAYS reveals wrinkles. The point is to
minimize the number of them, not to eliminate them entirely. After a
certain point, you run into the chaos of individual actions, a kind of
Brownian motion that you'll never be able to eliminate. If the majority of
your small testing sample finds the same problem, it's a wrinkle that needs
ironing. A problem that only one tester finds should still be considered,
just not weighted as strongly.
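To put a rough number on why even a small sample works, here's a
back-of-the-envelope sketch in Python. It assumes the simplest possible
model, that every tester hits a given problem independently with the same
probability; that's an idealization for illustration, not a claim about any
real user population:

    # Simplest-possible model, for illustration only: each tester hits a
    # given problem independently with probability p.
    def chance_found(p, n):
        # Probability that at least one of n testers finds the problem.
        return 1 - (1 - p) ** n

    # A wrinkle that trips up 30% of users is very likely to surface
    # with only five testers, and nearly certain with ten:
    print(chance_found(0.30, 5))   # ~0.83
    print(chance_found(0.30, 10))  # ~0.97

That's why a handful of testers keeps finding the big wrinkles, while the
one-tester oddities stay down in the noise.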
As a final note, have you noticed that product introductions and maturation
follow a pattern? In the early days, small companies rush out products that
may be buggy and unreliable, but they get customers because nobody else has
tackled the problem. Then as sales go up, the original vendor either grows
to the point of having to institute objective (as opposed to subjective)
quality measures, or some larger company that uses quality standards takes
the business away. Early adopters will tolerate buggy products; later adopters
won't. But unless you can quantify your expectations, you can't institute
quality. You can talk about it, you can offer opinions about it, but you
can't institute it. Only when you can put numbers to performance can you
institute quality.
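To show the shape of what I mean by putting numbers to performance, here's a
small Python sketch. Every task name, count, and target in it is invented
purely for illustration; the point is only that a standard you can pass or
fail is a standard you can institute:

    # Hypothetical usability targets; all tasks, counts, and thresholds
    # here are invented for illustration.
    results = {
        "install product":  {"completed": 18, "attempted": 20, "target": 0.90},
        "configure backup": {"completed": 12, "attempted": 20, "target": 0.80},
    }

    for task, r in results.items():
        rate = r["completed"] / r["attempted"]
        verdict = "PASS" if rate >= r["target"] else "FAIL"
        print(f"{task}: {rate:.0%} completion (target {r['target']:.0%}) -> {verdict}")

A completion rate against a stated target is crude, but it's a number, and
you can hold a release to it.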
If a company can't or won't put numbers to its standards, it's not a
quality assurance environment. It's art. If the hordes of harried buyers out
there are content with art, then your company is in good shape. If those
hordes are more cautious and insist on more measurable performance, then
your company either learns the lesson or is doomed. Or it must come out with
something else artistic and keep running the introduction cycle so it
doesn't have to face up to the quality dilemma.
Tim Altom
Simply Written, Inc.
Featuring FrameMaker and the Clustar Method(TM)
"Better communication is a service to mankind."
317.562.9298
Check our Web site for the upcoming Clustar class info http://www.simplywritten.com