
Toward a More Human Interface Device: Integrating the Virtual and Physical

Beautiful Information

Discovering patterns in knowledge spaces

A column by Jonathan Follett
October 20, 2008

As we create our digital lives—communicating and socializing with others, collecting content for business and pleasure, building objects with software, buying products—we understand that, despite its moniker, this existence is only half virtual. While it’s a given that engaging in our digital experiences requires physical devices, it may be less obvious that the input method affects the way in which we communicate with our computers—particularly, the way we feel about the experience.

In the physical world, we don’t have to think about manipulating an object—we just do it. Turn a photograph around on a table? Pick it up to take a closer look? Put it into a file folder? All of these are purely automatic actions.

In the virtual world, though, we constantly have to think about how to take such actions. Does performing a particular action require a single click or a double-click? Can I drag that file to move it or launch an application, or do I need to use a dialog box? What’s the keyboard shortcut for magnifying images in this particular program again? In the real world, we don’t need to double-click a piece of paper to read it or move it. But in the virtual world, we require abstractions and use intermediaries to accomplish our tasks for us—like engineers who have to manipulate a robot arm to move radioactive material around inside a reactor. (Let’s hope, as UX designers, that our users do not see the interfaces we create as being toxic!)


For nearly 25 years, the keyboard and the mouse have provided the physical connection that mediates most of the interactions we have with computer technology. But human communication is nuanced and complex, filled with physical elements, subtle signals, facial expressions, and gestures. With a touch, a glance, or a motion, we can convey a host of information. Till now, such subtleties have been largely absent from our day-to-day interactions with computers. However, gestures are beginning to make their way into our interactive vocabulary through a variety of innovative input devices that—although not new inventions—have reached a tipping point when it comes to affordability, availability, and consumer acceptance. From video games to mobile products to notebook and desktop computers, we are seeing an explosion of alternative input technologies and methods—touch screens, touch pads, tablets, wearable interfaces, and other devices.

For UX designers, these input devices open up opportunities for creating richer experiences that are more immersive and natural—expanding the ways in which people interact with computers. In the near future, we’ll be designing interactions that are not only usable and useful, but also deepen people’s connection with their technology. And as our dual lives continue to evolve, the most compelling products and services will help us bridge the gap between the virtual and the physical worlds.

The Human Interface Device Revolution

Human Interface Device (HID) is the name of a USB specification for keyboards, mouse devices, joysticks, and other input peripherals, but it is also a useful term for describing an expanding category of digital input devices. As UX professionals, we think about a person’s interaction with a graphical user interface (GUI), but rarely address the physical connection between the user and the computer. However, with the host of new human interface devices already available—and even more coming to market—we’ll soon have many more input methods to consider as we design digital experiences. To understand these devices, let’s begin by examining those that serve specialty purposes such as digital gaming, music, and art. The human interface devices already in use for niche products will no doubt influence the general adoption of new input technologies.
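
To make this concrete, here is a minimal TypeScript sketch using the WebHID browser API, a later standard available in Chromium-based browsers, that lists the HID-class devices a user grants a page access to. Treat it as an illustration of how the HID specification surfaces to software, not as production code.

// A minimal sketch using the WebHID API to list HID-class devices.
// WebHID requires a secure context and must be called from a user
// gesture; the empty filter list below simply shows every device.
interface HidDeviceInfo {
  vendorId: number;
  productId: number;
  productName: string;
}

async function listHidDevices(): Promise<void> {
  // Cast because lib.dom.d.ts does not always include WebHID types.
  const hid = (navigator as any).hid;
  if (!hid) {
    console.log("WebHID is not supported in this browser.");
    return;
  }
  const devices: HidDeviceInfo[] = await hid.requestDevice({ filters: [] });
  for (const device of devices) {
    console.log(
      `Vendor 0x${device.vendorId.toString(16)}, ` +
        `product 0x${device.productId.toString(16)}: ${device.productName}`
    );
  }
}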

Gaming

The video game industry has always pushed boundaries when it comes to creating unique input devices—and has the market reach to gain their widespread acceptance. I can remember my initial confusion when, in the late 1970s, I first used the Atari 2600 paddles for games like Pong and Breakout, which soon turned to delight once I understood the method of interaction—turning the wheel slowly or quickly to position the bar that represented me, the player, onscreen. Today, of course, the gaming industry is producing far more sophisticated devices for interaction.

Sports, exercise, and other active games lend themselves to the incorporation of physical motion. Game systems can capture such motion through dance pads that sense a user’s steps, steering wheels that let a user control the movement of a vehicle onscreen, or devices modeled on musical instruments that capture a user’s drumming or guitar strumming. The innovative Nintendo Wii controllers, which utilize accelerometers and optical sensors, let users control a game through movements that simulate live action—you can practice your golf swing or bowling technique in a digital world. Gaming platforms like the Wii are particularly intriguing, because similar controllers for the Web could let users browse sites in nontraditional ways.
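
As a rough illustration of the technique that accelerometer-based controllers rely on, the following TypeScript sketch detects a swing-like gesture from raw motion data, using the browser’s DeviceMotionEvent as a stand-in for dedicated hardware. The threshold and debounce values here are arbitrary and would need tuning for any real device.

// Threshold-based gesture detection: treat any sufficiently large
// spike in total acceleration as a swing. DeviceMotionEvent stands
// in for a dedicated game controller here.
const SWING_THRESHOLD = 25; // m/s^2; an arbitrary, device-dependent value
let lastSwingTime = 0;

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const a = event.accelerationIncludingGravity;
  if (!a || a.x === null || a.y === null || a.z === null) return;

  // Overall magnitude of acceleration across all three axes.
  const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);

  // Debounce so that one physical swing triggers one action.
  const now = Date.now();
  if (magnitude > SWING_THRESHOLD && now - lastSwingTime > 500) {
    lastSwingTime = now;
    console.log("Swing detected: trigger the onscreen action here.");
  }
});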

Music

For musicians, a host of input devices is available, having evolved from the early days of MIDI synthesizers to today’s PC-connected controllers. Devices range from piano keyboards to guitars to wheel controllers that mimic turntables, all outfitted with USB connections for a desktop or notebook computer. The Korg Kaoss Pad, shown in Figure 1, is a touch pad that enables performers to manipulate samples and digital audio effects with hand gestures and motions. An advanced version of the device, the KPE1 Kaoss Entrancer, can also manipulate video images and visual effects in real time.

Figure 1—The Korg Kaoss Pad lets performers control sound effects with touch
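
The Kaoss Pad’s core idea, mapping a finger’s X/Y position on a touch surface to two sound parameters at once, is easy to sketch with the Web Audio API. In the following illustrative TypeScript fragment, the horizontal axis sweeps a filter’s cutoff frequency while the vertical axis controls its resonance; the pad element and parameter ranges are invented for the example.

// Map X/Y position on a touch surface to two audio parameters,
// in the spirit of the Kaoss Pad. The #pad element is hypothetical,
// and an oscillator or sample source would feed the filter in a
// real application. (Browsers require a user gesture before an
// AudioContext can produce sound.)
const audioContext = new AudioContext();
const filter = audioContext.createBiquadFilter();
filter.type = "lowpass";
filter.connect(audioContext.destination);

const pad = document.getElementById("pad")!;
pad.addEventListener("pointermove", (event: PointerEvent) => {
  const rect = pad.getBoundingClientRect();
  // Normalize the pointer position to the range 0..1 on both axes.
  const x = (event.clientX - rect.left) / rect.width;
  const y = 1 - (event.clientY - rect.top) / rect.height;

  // X sweeps the cutoff from 100 Hz to about 10 kHz; Y sets resonance.
  filter.frequency.value = 100 * Math.pow(100, x);
  filter.Q.value = y * 20;
});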

Art

For years, drawing tablets like the current Wacom Intuos model have been important tools for digital artists. The tablets, ranging in size from 4 x 6 inches to 12 x 19 inches, allow artists to use a stylus to draw on a flat, smooth drawing surface—similar to a large track pad—and see their strokes appear onscreen in Photoshop or another drawing application. The drawing tablet, while not as versatile as paper, mimics the physical experience of sketching, which enables artists to apply the real-world techniques they’ve learned to their virtual art. Gradual advances—like stylus tips with different shapes and levels of hardness, thicker stylus grips, and the ability to sense the angle of the stylus’s tilt—have made it possible to replicate tools as varied as paint brushes, pencils, felt-tip markers, and airbrushes. Flipping the stylus over automatically switches to the application’s eraser tool, making the experience of using the input device almost exactly like that of using a pencil.
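
For a sense of how these extra input channels reach software, consider the following TypeScript sketch built on the Pointer Events API, a later web standard that reports a stylus’s pressure and tilt with every movement. The brush-size mapping and the canvas element are hypothetical.

// Vary brush size with stylus pressure while drawing on a canvas.
// The #canvas element and the size mapping are illustrative.
const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

canvas.addEventListener("pointermove", (event: PointerEvent) => {
  // Draw only while a stylus is in contact with the surface.
  if (event.pointerType !== "pen" || event.buttons === 0) return;

  // pressure runs from 0 to 1; tiltX and tiltY report the stylus
  // angle in degrees, which an application could map to brush shape.
  const brushSize = 1 + event.pressure * 20;

  ctx.beginPath();
  ctx.arc(event.offsetX, event.offsetY, brushSize, 0, Math.PI * 2);
  ctx.fill();
});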

An even more compelling input device, the Wacom Cintiq pen display, shown in Figure 2, lets users work directly on a large LCD screen, reducing even further the distance between their hand motions and the resulting visual display.

Figure 2—Wacom Cintiq pen display lets users draw directly on the screen

Is Touch the Future of Interaction?

The Korg Kaoss Pad and the Wacom Cintiq pen display both use touch interfaces to enhance and deepen their user experience, providing people with ways of physically connecting with the technology.

The HP TouchSmart desktop attempts to do the same for the home computer, as Figure 3 shows. The TouchSmart leverages the touch capabilities of the Windows Vista operating system, and its bundled software includes a photo-editing application, text notes, voice notes, a Web browser, and a music player. Interacting with all of these applications is entirely gestural, letting users point, select, and manipulate digital objects with their fingers—no mouse is required. In the TouchSmart, we can see a movement toward unmediated interaction with digital objects, stripping away a level of abstraction.

Figure 3—HP TouchSmart—a gestural interface for a desktop computer
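
The essence of this unmediated interaction is simple to express in code. The following TypeScript sketch lets a user drag a photo around the screen with one finger, using standard touch events; the photo element is hypothetical and assumed to be absolutely positioned.

// Single-finger dragging of an onscreen object. The #photo element
// is hypothetical; it needs position: absolute and, for reliable
// behavior, touch-action: none in its CSS.
const photo = document.getElementById("photo") as HTMLElement;
let startX = 0;
let startY = 0;
let originLeft = 0;
let originTop = 0;

photo.addEventListener("touchstart", (event: TouchEvent) => {
  const touch = event.touches[0];
  startX = touch.clientX;
  startY = touch.clientY;
  originLeft = photo.offsetLeft;
  originTop = photo.offsetTop;
});

photo.addEventListener("touchmove", (event: TouchEvent) => {
  event.preventDefault(); // keep the page from scrolling mid-drag
  const touch = event.touches[0];
  // Move the element exactly as far as the finger has moved.
  photo.style.left = `${originLeft + touch.clientX - startX}px`;
  photo.style.top = `${originTop + touch.clientY - startY}px`;
});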

And, of course, no overview of touch capabilities would be complete without mentioning the revolutionary multi-touch capability of the Apple iPhone, which offers users a mobile experience unlike any other. Among the iPhone’s features is a soft QWERTY keyboard, which takes the place of the physical keyboard found on most mobile devices, such as the BlackBerry. The advantage of a touch-screen keyboard, of course, is that it’s hidden when a user doesn’t need it, reducing the device’s overall form factor, increasing the size of its display, and making the device more flexible.

Author and interaction designer Dan Saffer has this to say about gestural interfaces in the first chapter of his forthcoming book from O’Reilly, Interactive Gestures: Designing Gestural Interfaces:

“It’s only going to get more complicated as time goes on. Users, especially sophisticated users, are being trained to expect that devices and appliances will have touchscreens and/or can be manipulated by gestures. But it’s not just early adopters: the general public is being exposed to more and more touchscreens via airport and retail kiosks and voting machines, and people are discovering the ease and pleasure these devices can give them. ...

Touchscreens and gestural interfaces take direct manipulation to another level. Users can simply touch the item they want to manipulate right on the screen itself, move it, make it bigger, scroll it, etc. This is the ultimate in direct manipulation: using the body itself to control digital (and sometimes even physical) space around us.”—Dan Saffer
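
Saffer’s “make it bigger” gesture corresponds to the now-familiar two-finger pinch, which scales an object by the change in distance between the touches. A brief TypeScript sketch, again assuming a hypothetical photo element:

// Pinch-to-zoom: scale an element by the ratio of the current
// finger spread to the spread when the gesture began.
const element = document.getElementById("photo") as HTMLElement;
let startDistance = 0;
let startScale = 1;
let scale = 1;

function fingerDistance(touches: TouchList): number {
  const dx = touches[0].clientX - touches[1].clientX;
  const dy = touches[0].clientY - touches[1].clientY;
  return Math.hypot(dx, dy);
}

element.addEventListener("touchstart", (event: TouchEvent) => {
  if (event.touches.length === 2) {
    startDistance = fingerDistance(event.touches);
    startScale = scale;
  }
});

element.addEventListener("touchmove", (event: TouchEvent) => {
  if (event.touches.length !== 2) return;
  event.preventDefault();
  scale = startScale * (fingerDistance(event.touches) / startDistance);
  element.style.transform = `scale(${scale})`;
});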

Integrating the Virtual and Physical Experiences

Sophisticated human interface devices can enable users to complete their tasks—whether for work or play—quickly, efficiently, and naturally. Such input devices can also make interactions more interesting, exciting, and ultimately satisfying, which is an advantage from both a marketing and user experience perspective. But these devices may offer other significant benefits to users—in particular the ability to close the gap between physical actions and virtual responses.

Flow State and the Expert Interface

Much of our effort as UX professionals revolves around reducing the learning curve for novices. A touch screen, on which a user can simply touch something to select it, is an inherently intuitive user interface that makes it easy for first-time users to jump right in and start working. But what about expert users? It turns out that touch screens offer something to those at the other end of the spectrum, too: flow.

Shortly after the birth of the GUI, software designers realized the necessity of including keyboard shortcuts for expert users, so people could avoid the time-consuming process of digging through menu hierarchies using only the point-and-click capabilities of the mouse.

Once expert users internalize these shortcuts, they can think less about how they are accomplishing a task and more about performing the actual task. Ideally, an expert user should be able to achieve a high level of focus while working, sometimes referred to as a flow state.

In a recent article on Boxes and Arrows, “Design for Emotion and Flow,” author Trevor van Gorp described how users perceive the flow state.

“In this state of consciousness, people often experience intense concentration and feelings of enjoyment, coupled with peak performance. Hours pass by in what seems like minutes. We tend to enter these states in environments with few interruptions, where our attention becomes focused by a challenge that we’re confident we can handle with our existing skills. Feedback is instantaneous, so we can always judge how close we are to accomplishing our task and reaching our goal. The importance of the task influences our level of motivation and perceptions of how difficult the task will be.”—Trevor van Gorp

Advanced human interface devices like the touch screen can aid this ability to work with technology as an extension of ourselves and our minds—to be able to imagine something, then do it instantly. For an example of how we might achieve this, we can look at some noncomputer technology that relies entirely on touch input—musical instruments.

For expert musicians—those who have mastered the techniques required to create the desired sounds using their instruments—the intense focus that comes during a performance is akin to the flow state expert software users experience. The performer no longer looks at keys or strings, but instead plays his instrument relying entirely on a combination of muscle and pattern memory. The physical connection the performer has with his musical instrument enables him to concentrate on musical expressiveness rather than technique.

What we can learn from examining human interaction with musical instruments is this: Users can connect with truly great physical interfaces so completely that the technology becomes an extension of the person. This situation is not entirely analogous to that of a computer user, of course. It may not be possible to achieve a flow state when working on certain mundane tasks. However, this focused state of engagement, in which we feel as if we are one with our actions, may be a worthy goal of interaction design.

Wearable Computing

While gestural interaction is almost certainly the way people will connect with computer technology in the immediate future, the integration between the physical and the virtual will not stop there.

The wearable computing project at the MIT Media Lab illustrates ways in which we can incorporate user interfaces even more closely into our day-to-day lives, as part of a user’s clothing.

“To date, personal computers have not lived up to their name. Most machines sit on the desk and interact with their owners for only a small fraction of the day. Smaller and faster notebook computers have made mobility less of an issue, but the same staid user paradigm persists. Wearable computing hopes to shatter this myth of how a computer should be used. A person’s computer should be worn, much as eyeglasses or clothing are worn, and interact with the user based on the context of the situation. With heads-up displays, unobtrusive input devices, personal wireless local area networks, and a host of other context sensing and communication tools, the wearable computer can act as an intelligent assistant, whether it be through a Remembrance Agent, augmented reality, or intellectual collectives.”—MIT Media Lab

Wearable computing may not be so far away. At the Boston Museum of Science, the Tangible Media Group at the MIT Media Lab recently sponsored the event SEAMLESS: Computational Couture.

“Fashionistas and techies unite at SEAMLESS, a fashion show and celebration showcasing emerging designers from around the globe and functional creations that push the boundaries of wearable technology. The Museum transforms into a catwalk for ‘computational couture’ as models strut groundbreaking clothing to live media performances by video artists sosolimited and DJs Eddie O. and Mike Uzzi of Zero G Sounds. The evening evolves into a living exhibition, where guests interact with the designers and their fashions.”—Museum of Science, Boston

New Opportunities for Interaction

As UX professionals, we often take for granted the fact that our users will be dealing with a keyboard, mouse or track pad, and monitor. We think about users’ physical relationship with their digital devices very selectively, if at all. But, as we explore new human interface devices and incorporate new interactions into our designs, we have the opportunity to create deep connections between users and their technology. 

