Updating Our Understanding of Perception and Cognition: Part I

July 5, 2010

For my new book Designing with the Mind in Mind to reflect an up-to-date understanding of human perception and cognition, I had to update my own knowledge. It had been over thirty years since I had studied psychology seriously. Of course, human perception and cognition have not changed much in the last three decades—or even in the last three millennia. However, over the thirty years since I finished my psychology degree, research psychologists and neurophysiologists have been busy, and their efforts have greatly improved humankind’s understanding of perception and cognition.

In two successive articles on UXmatters, I will summarize some of the new bits of knowledge I picked up while gathering information for the book. This first article focuses on visual perception. The second article will focus on reading, memory, and cognitive and perceptual time constants.

Modern Industrial Society Has Made Our Retina’s Rods Obsolete

In college and graduate school, I learned the following about human vision:

  • The retina at the back of our eyes—the surface on which our eyes focus images—has two types of light-receptor cells: rods and cones.
  • The rods detect light levels, but not colors, while the cones detect colors.
  • There are three types of cones, which are sensitive to red, green, and blue light, respectively, suggesting that human color vision encodes colors as combinations of red, green, and blue values, similar to the way computers and digital cameras encode the colors of pixels.
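
To make the trichromatic analogy concrete, here is a minimal sketch, in Python, of the computer side of that comparison. The colors and values below are my own illustration, not the book’s:

```python
# A minimal sketch of trichromatic encoding: every displayable color is a
# combination of three primary intensities, just as, in the classic account,
# every perceived color is a combination of three cone responses.

def mix(red, green, blue):
    """Encode a color as a (red, green, blue) triple of 0-255 intensities."""
    return (red, green, blue)

yellow = mix(255, 255, 0)     # strong red + strong green, no blue
purple = mix(128, 0, 128)     # red + blue, no green
gray   = mix(128, 128, 128)   # equal stimulation of all three channels
print(yellow, purple, gray)
```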

However, what I did not learn until recently—from Colin Ware’s book Visual Thinking for Design—is that people who live in today’s industrialized societies don’t use their retinal rods much.

The rods evolved to help humankind and our predecessors in the animal kingdom see in poorly illuminated environments—for example, at dusk, at dawn, during the night, or in dark caves. We all—humans and animals—spent much of our time in poor lighting until the nineteenth century, when electric lighting was invented and became widely used in the developed world. But our retina’s rods function only at low levels of light. Bright light—even normal daylight—saturates them, yielding a maxed-out signal that conveys no useful information.

Today, those of us living in the developed world rely on our rods only when we are doing things like having dinner by candlelight, feeling our way around our homes during a nighttime power outage, camping outside after dark, or going on a moonlit stroll. In bright daylight and artificially lighted environments—where we spend most of our time—our rods are completely maxed out. Therefore, our visual perception usually comes entirely from our cones.

Many Animals See Colors

Even though Designing with the Mind in Mind is about the human mind, the reading I did to prepare for writing it taught me some things about animal vision as well. For example, I learned that much of what I thought I knew about animals’ color vision was incorrect.

Everything I had read about animal vision in my youth suggested that primates—lemurs, monkeys, apes, and humans—are the only animals that see colors. All other animals, I had thought, have only brightness-detector cells—that is, rods—and, therefore, cannot distinguish different colors. Wrong!

It turns out that many different animals, ranging all across the animal kingdom, can see colors. However, this does not mean their color vision works the same way ours does or that they see the same colors we see. There is really no way to know what colors animals see, because color is not an objective property of the world, but rather an artifact of perception—something the brain constructs to distinguish different frequencies of light. Thus, saying that many animals can perceive color just means that they can distinguish different colors. Usually these are colors that are important to their survival. Some animals can even distinguish more colors than people can.

An ability to perceive—that is, distinguish—colors requires that an animal’s eyes have more than one type of light sensor, each of which is sensitive to different frequencies of light. If all of an animal’s light sensors are sensitive to the same range of light frequencies, the animal can distinguish only levels of brightness, not colors. Swatches of color that differ only in hue, like those in Figure 1, would appear the same to such an animal.

Figure 1—The top three swatches differ only in hue. The bottom three show how they would appear to an animal with only one photoreceptor type.

But if an animal has two or more different types of light-sensor cells, each sensitive to a different range of light frequencies, that animal probably can distinguish different colors. (I say probably because some animals behave as though they cannot distinguish colors, even though their eyes contain different types of light-sensor cells.) What may be surprising is that many different animals, from insects to mammals, have two or more types of light detectors.
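
A small numerical sketch may make this concrete. In the toy model below, each receptor type simply reports a weighted sum of the light reaching it; the sensitivity weights and light values are invented for illustration only:

```python
# Toy model (weights invented for illustration): a receptor's response is
# a weighted sum of the red, green, and blue energy in the light hitting it.

def response(sensitivity, light):
    return sum(s * e for s, e in zip(sensitivity, light))

receptor_a = (0.3, 0.6, 0.1)    # one receptor type's spectral sensitivity
receptor_b = (0.1, 0.8, 0.2)    # a second type, with different sensitivity

red_light   = (1.0, 0.0, 0.0)
green_light = (0.0, 0.5, 0.0)   # a dimmer green, chosen to fool receptor_a

# With only receptor_a, both lights yield the same response (0.3), so an
# animal with that single receptor type cannot tell these colors apart.
print(response(receptor_a, red_light))    # 0.3
print(response(receptor_a, green_light))  # 0.3

# Adding a second receptor type yields distinct response pairs:
# red -> (0.3, 0.1) versus green -> (0.3, 0.4), so the colors are separable.
print(response(receptor_b, red_light))    # 0.1
print(response(receptor_b, green_light))  # 0.4
```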

Animals that can see colors include the following:

  • Bees have three types of color detectors in their eyes, so they can distinguish different colors. Unlike our photoreceptors, theirs are totally insensitive to low-frequency, or red, light, but are sensitive to ultraviolet light—frequencies higher than the human eye can perceive. Therefore, bees can distinguish colors that we cannot and vice versa.
  • Some butterflies have between four and six types of photoreceptors and, therefore, can distinguish many more colors than we can. This makes sense—a butterfly spends its life seeking out the flowers that have the most nectar.
  • Many fish and birds have four types of photoreceptors, making them capable of distinguishing more colors than primates can distinguish.
  • Dogs have two types of cones in their eyes. The sensitivities of their cones are very similar to those of humans who have red-green color-deficient vision. Therefore, don’t expect a dog to distinguish red from green very well.

Animals that cannot see colors include the following:

  • Bulls cannot see the color of a matador’s red cape. Like all cattle, bulls have only rods in their eyes, so they cannot perceive color. In bullfights, bulls respond only to the movement of the matador’s cape. While the capes are red by tradition, they could just as well be purple, pink, or green. Imagine green matador capes!
  • Guinea pigs have only rods—not cones—so they cannot see color.
  • Owls have no cone cells, so they cannot see colors. In fact, many animals that are active mainly at night lack color vision. That makes evolutionary sense: they have little opportunity to use color vision.
  • The owl monkeys of Central and South America, despite being primates, have only one type of cone in their retinas and, therefore, cannot see colors. The evolutionary reason may be similar to that for owls: a nocturnal lifestyle.
  • Cats are an odd species: while their eyes have three types of cones, the proportion of cones relative to rods is very low in comparison to primates. Perhaps this is because cats, at least in the wild, are active mainly at night. Cats usually behave as though they are totally colorblind, but under certain conditions they can distinguish orange-red objects from blue-green objects.

Visual Resolution Drops Radically at the Retina’s Periphery

When I was a psychology graduate student, it was already well known that the density of light-receptor cells decreases progressively from the center of the retina—known as the fovea—to the edges of the retina, as shown in Figure 2. In the fovea, there are over 140,000 cone cells packed into every square millimeter, in sharp contrast to the retinal areas away from the fovea, where only about 10,000 cone cells occupy each square millimeter. That’s a factor of 14 decrease. Although there are almost no rods within the fovea itself, they are fairly densely packed around the fovea—160,000 rod cells per square millimeter—with their density decreasing toward the edge.

Figure 2—Distribution of cones and rods in a typical human retina (Lindsay and Norman)

The tremendous variation in photoreceptor—especially cone—density between the fovea and the edges of the retina means the resolution of the signals from our eyes to our brain is much, much higher in the center of the visual field than at the edges. One way of illustrating this difference is with an ophthalmologist’s reading-acuity chart like that in Figure 3. This chart shows the relative sizes of letters people with normal vision can identify at the center versus at the edge of their visual field.

Figure 3—Reading acuity at the center versus the edge of the visual field (Anstis)

It has also long been well known that cone cells in the fovea connect 1:1 to the ganglion cells that begin the processing and transmission of visual data, while elsewhere on the retina, multiple photoreceptor cells—cones and rods—connect to each ganglion cell. This means data from the visual periphery is highly compressed and suffers data loss before its transmission to the brain, while data from the fovea reaches the brain relatively uncompressed.
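
As a rough illustration of that compression, the sketch below models peripheral wiring as simple averaging over pools of receptors. The pool size and signal values are invented for illustration; real retinal pooling is far more sophisticated:

```python
# Model foveal wiring (one receptor per ganglion cell) versus peripheral
# wiring (many receptors pooled into one ganglion cell) as lossless versus
# lossy compression. Values and pool size are invented for illustration.

receptors = [0, 10, 0, 10, 0, 10, 0, 10]   # a fine alternating pattern

# Fovea: 1:1 wiring passes the pattern through intact.
fovea_signal = receptors[:]

# Periphery: each ganglion cell reports the average of a pool of receptors.
pool_size = 4
periphery_signal = [
    sum(receptors[i:i + pool_size]) / pool_size
    for i in range(0, len(receptors), pool_size)
]

print(fovea_signal)      # [0, 10, 0, 10, 0, 10, 0, 10] -- detail preserved
print(periphery_signal)  # [5.0, 5.0] -- the fine pattern is averaged away
```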

But the signal from our eyes to our brain is only part of the story. Today, thanks to technologies like functional magnetic resonance imaging (fMRI), which lets researchers watch which areas of the brain activate in response to stimuli, we know that the brain devotes most of its visual processing to signals coming from the fovea. The fovea constitutes only about 1% of the area of the retina, but the visual cortex at the back of our brains devotes about 50% of its area—and, hence, its neurons—to signals from the fovea.

As a result, according to Gerd Waloszek, our vision has much, much greater resolution—both spatial resolution and color resolution—at the center of our visual field than elsewhere. If you hold out your arm and look at your thumb, your thumbnail covers approximately the area your fovea perceives. With your eyes fixed on your thumbnail, you can resolve about 300 pixels per inch in that small area of your visual field. In contrast, at the edges of your visual field, each “pixel” would be about the size of a grapefruit held at arm’s length. That might seem like an exaggeration, because we have a constant impression of a high-resolution image all around us. However, keep in mind that your eyes are constantly moving—at least three times per second—sampling and resampling areas of your physical environment that your brain considers important. Also be aware that your brain fills in many details that your eyes don’t actually perceive.
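
To get a feel for these numbers, here is a back-of-the-envelope calculation using the standard visual-angle formula. The thumbnail width and arm length are assumed values of mine, not figures from the article:

```python
import math

# Rough estimate of the visual angle a thumbnail covers at arm's length.
# Thumbnail width and arm length are assumed values, not the article's.
thumbnail_width_cm = 1.5
arm_length_cm = 70.0

angle = 2 * math.atan(thumbnail_width_cm / (2 * arm_length_cm))
print(f"{math.degrees(angle):.1f} degrees")   # about 1.2 degrees

# Across that roughly 1-degree patch, the article's figure of about
# 300 resolvable pixels per inch applies; resolution falls off steeply
# everywhere outside it.
```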

For example, as you are reading this article, your eyes dart about, scanning and reading. No matter which part of the text you’re actually focusing on, you have the impression of viewing a full page of text.

Now, imagine you were viewing this page on a special computer screen that tracks your eye movements and knows what area of the page your fovea is focused on. Imagine that, wherever you are looking, the computer clearly displays meaningful text in the small area of the page that corresponds to your fovea, but everywhere else on the page, it displays random, meaningless text. As your fovea flits around the page, the computer quickly updates each area where your fovea stops to show the correct text there, while the last area on which your fovea focused returns to textual noise. According to cognitive scientist Andy Clark, people do not notice when this occurs in tests. Not only can people read normally, they still believe they are viewing a full page of meaningful text. Of course, conducting such an experiment was impossible until the necessary technology—fast displays and eye-tracking equipment—existed.
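
In code form, the display loop of such an experiment might look like the following sketch. Everything here, from the function names to the fixation offsets, is hypothetical; it simply illustrates the idea of showing real text under the fovea and noise everywhere else:

```python
import random
import string

def noise_like(text):
    """Meaningless characters standing in for the unread parts of the page."""
    return "".join(random.choice(string.ascii_lowercase + " ")
                   for _ in range(len(text)))

def render_frame(page, fovea_start, fovea_width=15):
    """Show real text only where the fovea rests; noise everywhere else."""
    fovea_end = fovea_start + fovea_width
    filler = noise_like(page)
    return filler[:fovea_start] + page[fovea_start:fovea_end] + filler[fovea_end:]

page = "the computer quickly updates each area where your fovea stops"
# Character offsets a real eye tracker would supply for three fixations.
for fixation in (0, 18, 40):
    print(render_frame(page, fixation))
```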

In Conclusion

The last thirty years have been an exciting period of growth in humanity’s understanding of how vision works. In preparing to write Designing with the Mind in Mind, I had initially thought I’d merely need to refresh my knowledge, but instead found that I had to update it considerably. I hope this brief sampling of the new knowledge we’ve gained about vision conveys some of the possibilities it presents. I encourage UX designers to think about how our new understanding of vision can inform and improve the design of user interfaces.

Read more

Updating Our Understanding of Perception and Cognition: Part II

Announcement: Jeff Is Speaking at the July 2010 BayCHI Meeting

Topic: Designing with the Mind in Mind: The Psychological Basis for UI Design Rules

Jeff will describe some facts about human perception and cognition that impact how we should design software. Cognitive psychology is the focus of Jeff’s new book, Designing with the Mind in Mind (Morgan Kaufmann, 2010). User interface (UI) design rules have their foundation in human abilities. They are not simple recipes we can apply mindlessly. To apply design rules effectively, we must determine their applicability and precedence in specific situations and balance the trade-offs that arise in situations where design rules appear to contradict each other. Understanding the psychology underlying design rules enhances UX professionals’ ability to interpret and apply them effectively.

Editor’s Note: For those of you who aren’t in the San Francisco Bay area and can’t make it to this event, Jeff gave a similar talk at a Silicon Valley IxDA meeting in July 2008: Psych 101: The Psychological Basis for UI Design Rules. His talk was very well received. His presentation for that talk is available on SlideShare.

References

Annenberg Media. “Color Vision in Animals.” Retrieved May 28, 2010.

Anstis, Stuart. Vision Research. Vol. 14. Amsterdam: Elsevier, 1974.

Clark, Andy. Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press, 1998.

Lindsay, Peter H., and Donald A. Norman. Human Information Processing. New York: Academic Press, 1972.

Smith, Sally E., ed., et al. “What Colors Do Animals See?” WebExhibits, Causes of Color. Retrieved May 28, 2010.

Waloszek, Gerd. “Vision and Visual Disabilities: An Introduction.” SAP Design Guild, June 7, 2005. Retrieved May 28, 2010.

Ware, Colin. Visual Thinking for Design. Burlington, MA: Morgan Kaufmann Publishers, 2008.

Wikipedia. “Color Vision.” Wikipedia, May 26, 2010. Retrieved May 28, 2010.

Jeff Johnson is Principal Consultant at Wiser Usability, Inc., where he focuses on usability for older users. He has previously worked as a user-interface designer, implementer, manager, usability tester, and researcher at Cromemco, Xerox, US West, Hewlett-Packard, and Sun Microsystems. In addition to his current position as Assistant Professor of Computer Science at the University of San Francisco, he has taught in the Computer Science Departments at Stanford University, Mills College, and the University of Canterbury, in Christchurch, New Zealand. After graduating from Yale University with a BA in Experimental Psychology, Jeff earned his PhD in Developmental and Experimental Psychology at Stanford University. He is a member of the ACM SIGCHI Academy and a recipient of SIGCHI’s Lifetime Achievement in Practice Award. Jeff has authored numerous articles on a variety of human-computer interaction topics, as well as the books Designing User Interfaces for an Aging Population, with Kate Finn (2017); Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Rules (1st edition, 2010; 2nd edition, 2014); Conceptual Models: Core to Good Design, with Austin Henderson (2011); GUI Bloopers 2.0: Common User Interface Design Don’ts and Dos (2007); Web Bloopers: 60 Common Design Mistakes and How to Avoid Them (2003); and GUI Bloopers: Don’ts and Dos for Software Developers and Web Designers (2000).
