September 2007 Issue

By Paul J. Sherman

Published: September 24, 2007

“Do you know how users really feel about your design? Probably not.”

Perhaps you’ve done contextual inquiries to discover your users’ requirements and understand their workflows. You may have carried out participatory design sessions, usability tested your design, then iterated and improved it. But do you know how users really feel about your design? Probably not.

The user experience field has been trying to move beyond mere usability and utility for years. So far, no one seems to have developed easy-to-implement, non-retrospective, valid, and reliable measures for gauging users’ emotional reactions to a system, application, or Web site.

In this column, I’ll introduce you to a promising method that just might solve this problem. While this method has not yet been subjected to rigorous peer review or experimental testing, it offers an intriguing solution and is endlessly fascinating to me. It may well prove to be the kind of powerful technique we’ve been looking for to illuminate users’ emotional reactions to our designs.

By Mike Hughes

Published: September 24, 2007

“Technical communicators have a tendency to want to document a topic as completely as possible.”

Two questions any writer must deal with are: “What do I write about?” and “How much do I say about it?” Essentially, these questions deal with the scope and the depth of a document. Technical communicators have a tendency to want to document a topic as completely as possible, and we carry this instinct with us when we architect and write Help files. In this column, I challenge that prevalent instinct and offer an alternative way of thinking about the scope and depth requirements of Help systems. The benefits of this approach are, I hope, better Help for users and, for our clients and employers, a more efficient use of technical communicators’ time. First, I’ll discuss three principles that underpin my perspective; then I’ll give some practical advice about writing Help that people will actually use.

Three Underlying Principles for Help

My years as a technical instructor—often dealing with new products or technologies—have taught me that I did not need to be a lot smarter than my students. I’ve survived, more often than not, by merely staying a day ahead in the reading or a page ahead in the lesson plan. Ironically, as I got smarter about a topic, I sometimes found that the effectiveness of my teaching decreased, because I was losing sight of the fact that the students could grasp only so much in a given day. So, as I got smarter, my output exceeded my students’ input capacity. John Carroll, in his book The Nurnberg Funnel, talks about the Heisenberg uncertainty principle in training: The more complete training is, the less usable it will be; the more usable it is, the less complete it can be. I think this uncertainty principle aligns closely with my realization that instructional delivery systems must match a student’s intake bandwidth.

By Steve Baty

Published: September 10, 2007

“A simple, semi-structured, one-on-one interview can provide a very rich source of insights.”

If you’ve read some of my previous columns on UXmatters, you could be forgiven for thinking I spend my entire working life immersed in a sea of quantitative data. This is, rather surprisingly even to me, far from the truth. Looking back over recent months, by far the most common form of research I’ve carried out is that stalwart of qualitative studies—the interview.

A simple, semi-structured, one-on-one interview can provide a very rich source of insights. Interviews work very well for gaining insights from both internal and external stakeholders, as well as from actual users of a system under consideration. In this column, though, I’ll focus on stakeholder interviews rather than user interviews. (And I’ll come back to that word, insights, a little later on, because it’s important.)

Ten Guidelines for Stakeholder Interviews

Here are ten general guidelines I follow when conducting stakeholder interviews:

1. Set aside at least 45 minutes for each interview.

I often find I don’t need all of this time. However, occasionally, I do need all of it and am glad I allowed plenty of time. I’ve been lucky, on a few occasions, to interview people who not only understood the topic under study, but were also able to articulate their thoughts clearly. Such interviews are golden—both for the quality of the insights they generate and for their rarity.

By Sam Ng

Published: September 10, 2007

“Card sorting is a deceptively simple method.”

Card sorting is a simple and effective method with which most of us are familiar. There are already some excellent resources on how to run a card sort and why you should do card sorting. This article, on the other hand, is a frank discussion of the lessons I’ve learned from running numerous card sorts over the years. By sharing these lessons, I hope to help others dodge similar potholes when they venture down the card-sorting path.

Don’t Expect Too Much of Card Sorting

Card sorting is a deceptively simple method. The beauty of card sorting is that it just makes sense. I normally get enthusiastic nods of approval when explaining it to others. But therein lies one of the key problems with card sorting: our expectations of what it can do.

One of the earliest card sorts I ran was unnecessarily complex, involving over 100 cards and around 80 participants. Yes, what a sucker for punishment! We had started off with a simple research goal and unwittingly turned it into a monster. We wanted to find out everything. Part of the problem, in this case, was a misunderstanding on the part of the client.
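
Ng doesn’t show the analysis behind that “monster,” but the arithmetic hints at the scale: with just over 100 cards, there are 100 × 99 / 2 = 4,950 possible card pairs to compare across roughly 80 participants. Purely as an illustration, and not as Ng’s own method, here is a minimal Python sketch of the pairwise co-occurrence tally that open card sort results are commonly reduced to; the card labels and data are hypothetical.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical results from an open card sort: each participant’s
    # sort is a list of groups, and each group is a set of card labels.
    sorts = [
        [{"Shipping", "Returns"}, {"My Account", "Billing"}],
        [{"Shipping", "My Account"}, {"Returns", "Billing"}],
        [{"Shipping", "Returns", "Billing"}, {"My Account"}],
    ]

    # Count how often each pair of cards lands in the same group.
    pair_counts = defaultdict(int)
    for participant in sorts:
        for group in participant:
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1

    # Report each pair’s agreement as a share of all participants.
    for (a, b), n in sorted(pair_counts.items()):
        print(f"{a} / {b}: grouped together by {n / len(sorts):.0%}")

With a realistic card set, you would typically feed a matrix like this into cluster analysis or a dendrogram rather than reading thousands of pairs by hand, which is exactly where a 100-card study starts to strain both participants and analysts.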