
The User Experience of Iron Man

Innovating UX Practice

Inspirations from software engineering

A column by Peter Hornsby
February 18, 2013

Steve Rogers: “Big man in a suit of armor. Take that off, what are you?”

Tony Stark: “Genius … billionaire … playboy … philanthropist.”

—The Avengers

It’s confession time: I’m a huge fan of Iron Man, who is shown in Figure 1. What’s not to love? A genius, billionaire playboy with a super-powered suit and an endless supply of witty one-liners. I’ve recently been wondering about that suit and how we could make it a reality. However, I’m not thinking about recreating the armor or the arc reactor in the chest that powers the whole ensemble, but the suit’s user interface. Because that’s the bit that really rocks!

Figure 1—Iron Man

(Okay, I admit that the user interface is less sexy than the rest of the suit. And, if I had any hardware-engineering talent, I’d want to work on the suit itself, because it’s infinitely cooler. If my code didn’t make development managers cry, I’d want to work on the software. But you work with what you’ve got, and I’m a UX designer.)

Of course, there are an awful lot of user interfaces in the Iron Man movies: transparent touchscreens, both on smartphones, as shown in Figure 2, and as part of a desktop system.

Figure 2—Smartphone user interface

Tony can interact with his designs and research in a 3D space, as the video in Figure 3 shows.

Figure 3—Interacting in 3D space

But the really cool bit is the user interface of the Iron Man suit itself. How could we make it work? Let’s look at the user interface in action in Figure 4.

Figure 4—Iron Man suit’s user interface

Jarvis

In the video in Figure 4, we see Tony in a typically perilous situation, mid-rescue. Naturally, Tony’s voice-controlled majordomo Jarvis is available to assist him. One of the first things we see is Tony querying Jarvis about the number of people in the air. Jarvis responds not only with the answer to Tony’s question, but also highlights where those people are, demonstrating contextual awareness and an ability to go beyond the user’s initial instruction to provide further help.
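
If we wanted to prototype that behavior today, it might look something like the following minimal Python sketch. Everything here—the Track record, the handle_query function, the list of HUD highlights—is hypothetical; the point is only that the agent returns both a direct answer and unrequested, contextually relevant augmentation.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A person or object the suit's sensors are currently tracking."""
    label: str
    position: tuple  # (x, y, z) in suit-relative coordinates
    airborne: bool

def handle_query(tracks, query):
    """Answer a voice query, then volunteer related context.

    Returns the spoken answer plus HUD highlight positions, so the
    agent goes beyond the literal question, as Jarvis does.
    """
    if "how many" in query and "in the air" in query:
        airborne = [t for t in tracks if t.airborne]
        answer = f"{len(airborne)} people in the air."
        # Contextual awareness: highlight them without being asked.
        highlights = [t.position for t in airborne]
        return answer, highlights
    return "Unknown query.", []

# Usage
tracks = [Track("civilian", (10, 2, 300), True),
          Track("civilian", (40, 8, 280), True),
          Track("pilot", (0, 0, 0), False)]
print(handle_query(tracks, "Jarvis, how many people are in the air?"))
```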

We can see Jarvis as an evolution of intelligent agents like Siri—or, to put it another way, a descendant of Microsoft’s Clippy. But the problem with Jarvis providing a user interface for Tony’s Iron Man suit is that the suit’s usefulness and efficiency would be heavily dependent on Jarvis having a high degree of artificial intelligence. Tony would have to be supremely confident that, in a moment of jeopardy, Jarvis wouldn’t do something stupid. There are also potential bandwidth problems in relying on an artificial intelligence: the user would have to be able to explain clearly what he needs in a timely manner, which might not always be possible in the middle of human-on-alien action. So Jarvis is useful, but far from a perfect user interface for many tasks.

Head-up Display

As with Google’s Project Glass, the head-up display provides an augmented reality (AR) interface. In mid-flight, Tony doesn’t have a conventional, hardware user interface available to him. But gaze might offer him some control over the suit’s user interface—for example, using an approach similar to that of Fixational. However, Fixational’s approach uses winking to trigger actions, and we don’t see Tony making such gross facial movements, so some other interaction must be occurring.
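
One plausible alternative to winking is dwell-based selection: trigger a target only after the gaze has rested on it for a threshold time. Here is a minimal sketch, assuming a hypothetical update_gaze function called once per frame with the currently gazed-at target; the threshold value is arbitrary.

```python
DWELL_THRESHOLD = 0.8  # seconds of steady gaze required to select

def update_gaze(state, target, now):
    """Dwell-based gaze selection: a target fires only after the gaze
    has rested on it for DWELL_THRESHOLD seconds, so no gross facial
    movement (winking) is needed."""
    if target != state.get("target"):
        # Gaze moved to a new target: restart the dwell timer.
        state["target"] = target
        state["since"] = now
        return None
    if target and now - state["since"] >= DWELL_THRESHOLD:
        state["since"] = now  # avoid re-triggering every frame
        return target  # fire the selection
    return None

# Usage: simulated frames at increasing timestamps
state = {}
for t in [0.0, 0.1, 0.5, 0.9]:
    fired = update_gaze(state, "weapons-panel", t)
    if fired:
        print(f"selected {fired} at t={t}s")
```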

An interesting application of an old technology to this problem might be something as simple as a chord keyboard—on which combinations of simultaneously pressed keys map to characters or commands—embedded in the suit’s gloves. Gaze in combination with chord keystrokes would enable a mouse-like interaction. This might work within the suit’s gauntlets, though they could make fine motor control difficult. But this is a manageable problem: we’re not looking for our hero to perform brain surgery, simply to rescue those in distress and administer righteous justice to evildoers.
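
A toy sketch of how the gauntlet might decode chords follows; the finger names, chord assignments, and commands are all invented for illustration.

```python
# Hypothetical mapping from simultaneous finger-switch combinations
# (chords) to suit commands. Frozensets make the chords order-independent.
CHORD_MAP = {
    frozenset({"index"}):            "select",       # click the gaze target
    frozenset({"index", "middle"}):  "zoom",
    frozenset({"thumb", "index"}):   "fire_jets",
    frozenset({"thumb", "pinky"}):   "call_jarvis",
}

def decode_chord(pressed):
    """Translate the set of fingers currently pressed into a command."""
    return CHORD_MAP.get(frozenset(pressed))

# Usage: combined with gaze, a single chord acts like a mouse click
# on whatever the eyes are pointing at.
print(decode_chord({"index", "thumb"}))   # fire_jets
print(decode_chord({"middle", "index"}))  # zoom
```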

Flying the Iron Man Suit

But what of the Iron Man suit’s more prosaic capabilities: the ability to fly, for instance? Here we have an evolution of the solo rocket flight seen in many movies. The Rocketeer had just a throttle and a combined helmet and rudder, but Iron Man’s flight capability is much more sophisticated. His suit incorporates movable flight surfaces like those on modern fighters, as well as multiple jets on the hands and feet.

So, how might Iron Man control flight? Simple rules to prevent accidents could solve part of this problem. For example, if Tony were carrying a survivor from an exploded 747, triggering the hand jets would be messy, to say the least. In flight, Tony expects to have some degree of physical movement—for example, to twist and turn to avoid incoming debris or missiles—without getting thrown radically off course. So Jarvis or another subsystem of the suit could compensate by understanding that, for certain movements, it would be necessary to change power levels or other aspects of the suit’s configuration. (A sketch of such rules follows the list below.) I can imagine two broad scenarios for flying:

  1. Tony knows where he wants to go and can hand over control to Jarvis to get there. His suit’s user interface would essentially be a 3D version of that for Google’s self-driving car. Such a user interface could operate using multiple, redundant information sources for navigation—the stars, GPS, and an altimeter—together with collision-avoidance software.
  2. Tony’s flight path and actions are unpredictable, so he needs to have a high degree of control and maneuverability. For example, if Tony were engaged in an aerial dogfight—for instance, with alien invaders—he would need a very high level of suit control. Not only would he need to be able to fly the suit, but to fight as well. Solving this problem gets trickier. A voice interface could help with some aspects of it—“Jarvis, shoot any aliens in range!”—but would be wholly impractical for others. Imagine the tip-of-the-tongue phenomenon occurring during a dogfight: “Jarvis! Shoot the big alien … thing … with the … thing!”

(I appreciate that Tony Stark is a witty genius who likely experiences fewer verbal stumbles than most of us do, but remember that he is also an alcoholic.)
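
Here is the promised sketch of those accident-prevention rules, in Python. The state flags, thruster names, and the crude proportional compensation are all assumptions for illustration—a real flight controller would be vastly more involved.

```python
def allowed_thrusters(suit):
    """Apply simple interlock rules of the kind described above.

    suit is a dict of state flags, for example:
      {"carrying_survivor": True}
    """
    thrusters = {"left_hand", "right_hand", "left_boot", "right_boot"}
    if suit.get("carrying_survivor"):
        # Rule: never fire the hand jets while carrying someone.
        thrusters -= {"left_hand", "right_hand"}
    return thrusters

def compensate(thrust, pilot_motion):
    """When the pilot twists or turns deliberately, rebalance power
    so the manoeuvre doesn't throw the suit off course."""
    # Crude proportional correction: oppose the pilot's angular rates.
    return {axis: thrust.get(axis, 0.0) - 0.5 * rate
            for axis, rate in pilot_motion.items()}

# Usage
print(allowed_thrusters({"carrying_survivor": True}))
print(compensate({"roll": 0.2}, {"roll": 0.4, "yaw": -0.1}))
```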

So, what kind of user interface could we use? Perhaps the suit could respond to Tony’s own movements—in the same way that a skydiver can exert limited control over his direction of movement. Imagine doing this to achieve high-speed acceleration away from or toward a threat, for instance. Fine control would be more problematic. Perhaps Tony could change direction by changing the relative position of his feet—for example, by raising his right foot, he could initiate a turn to the left in a relatively intuitive way. He could manage movement in a different axis by changing the position of his head. Identifying which of his movements were intentional would be problematic, though, so that chord keyboard might again be useful in this case.
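
As a sketch of that idea: map the pose signals to steering commands, but gate them on a deliberate chord press, so an incidental twitch doesn’t send Tony into a barrel roll. The signal names, gains, and sign conventions below are invented for illustration.

```python
def steering_from_pose(foot_offset, head_pitch, chord_engaged):
    """Map body pose to a turn/climb command, gated by a chord press
    so that only intentional movements steer the suit.

    foot_offset:   right-foot height relative to left, in metres
                   (positive = right foot raised)
    head_pitch:    head pitch in radians (positive = looking up)
    chord_engaged: True while the pilot holds the "steer" chord
    """
    if not chord_engaged:
        # Ignore pose changes unless the pilot signals intent.
        return {"turn": 0.0, "climb": 0.0}
    turn = -2.0 * foot_offset  # negative = left, so a raised right foot turns left
    climb = 1.5 * head_pitch   # head position controls the other axis
    return {"turn": turn, "climb": climb}

# Usage: right foot raised 0.1 m while holding the steer chord
print(steering_from_pose(0.1, 0.0, True))   # gentle left turn
print(steering_from_pose(0.1, 0.0, False))  # no effect: unintentional
```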

Summary

The Iron Man suit needs a very, very flexible user interface. At present, if we were trying to recreate such a suit, there would be only limited options for controlling its user interface. And there would be many demands the user interface would have to handle: flight and fight being the most obvious, but also information retrieval and problem solving.

While designing such a suit for a single, very smart user would make solving this design problem simpler, there would still be major challenges, including the need for the user to be able to switch tasks quickly; to delegate some aspects of control, while taking more responsibility for others; and to use novel control techniques. But our ability to manage many of these challenges today is greater than you might expect. Now all we need is the suit…

Director at Edgerton Riley

Reading, Berkshire, UK

Peter Hornsby

Peter has been actively involved in Web design and development since 1993, working in the defense and telecommunications industries; designing a number of interactive, Web-based systems; and advising on usability. He has also worked in education, in both industry and academia, designing and delivering both classroom-based and online training. Peter is a Director at Edgerton Riley, which provides UX consultancy and research to technology firms. Peter has a PhD in software component reuse and a bachelor’s degree in human factors, both from Loughborough University, in Leicestershire, UK. He has presented at international conferences and written about reuse, eLearning, and organizational design.
