(Okay, I admit that the user interface is less sexy than the rest of the suit. And, if I had any hardware-engineering talent, I’d want to work on the suit itself, because it’s infinitely cooler. If my code didn’t make development managers cry, I’d want to work on the software. But you work with what you’ve got, and I’m a UX designer.)
Of course, there are an awful lot of user interfaces in the Iron Man movies: transparent touchscreens on smartphones, as shown in Figure 2, and as part of a desktop setup.
Tony can interact with his designs and research in a 3D space, as the video in Figure 3 shows.
But the really cool bit is the user interface of the Iron Man suit itself. How could we make it work? Let’s look at the user interface in action in Figure 4.
In the video in Figure 4, we see Tony in a typically perilous situation, mid-rescue. Naturally, Tony’s voice-controlled majordomo Jarvis is available to assist him. One of the first things we see is Tony querying Jarvis about the number of people in the air. Jarvis responds not only with the answer to Tony’s question, but also highlights where those people are, demonstrating contextual awareness and the ability to go beyond the user’s initial instruction to provide further help.
We can see Jarvis as an evolution of intelligent agents like Siri—or to put it another way, a descendant of Microsoft’s Clippy. But the problem with Jarvis providing a user interface for Tony’s Iron Man suit is that the suit’s usefulness and efficiency would be heavily dependent on Jarvis having a high degree of artificial intelligence. Tony would have to be supremely confident that, in a moment of jeopardy, Jarvis wouldn’t do something stupid. There might also be bandwidth problems in relying on an artificial intelligence: the user would have to be able to explain clearly what he needs in a timely manner, which might not always be possible in the middle of human-on-alien action. So Jarvis is useful, but far from a perfect user interface for many tasks.
Head-up Display
As with Google’s Project Glass, the head-up display provides an augmented reality (AR) interface. In mid-flight, Tony doesn’t have a conventional hardware user interface available to him. But gaze might offer him a way to manage the suit’s user interface—for example, using an approach similar to that of Fixational. However, Fixational’s approach uses winking to trigger actions, and we don’t see Tony using such gross facial movements, so some other interaction must be occurring.
An interesting application of an old technology to this problem might be something as simple as a chord keyboard embedded in the suit’s gauntlets. Gaze, in combination with chord keystrokes, would enable a mouse-like interaction: the eyes point, the fingers act, as sketched below. The gauntlets could make fine motor control difficult, but this is a manageable problem: we’re not looking for our hero to perform brain surgery, simply to rescue those in distress and administer righteous justice to evildoers.
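To make the idea concrete, here is a minimal, purely hypothetical sketch of how such a gaze-plus-chord interaction might be wired up: the gaze tracker supplies the pointer position, and a chord (the set of finger switches pressed together inside the gauntlet) supplies the action. The names (GazeSample, CHORD_ACTIONS, handle_chord), the chord mappings, and the confidence threshold are all illustrative assumptions, not anything drawn from the films or from a real product.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    x: float           # normalized display coordinates, 0.0 to 1.0
    y: float
    confidence: float  # tracker's confidence in this sample

# A chord is the set of finger switches pressed together inside the gauntlet.
CHORD_ACTIONS = {
    frozenset({"index"}):           "select",      # single-finger chord acts like a click
    frozenset({"index", "middle"}): "context",     # two-finger chord for a secondary action
    frozenset({"thumb", "index"}):  "drag_start",
    frozenset():                    "drag_end",    # releasing all switches ends a drag
}

def handle_chord(chord: frozenset, gaze: Optional[GazeSample]) -> Optional[dict]:
    """Map a chord press to a pointer event at the current gaze position."""
    action = CHORD_ACTIONS.get(chord)
    if action is None or gaze is None or gaze.confidence < 0.6:
        return None  # unknown chord or unreliable gaze: do nothing rather than misfire
    return {"action": action, "x": gaze.x, "y": gaze.y}

# Example: the wearer looks at a target and presses the index-finger switch.
event = handle_chord(frozenset({"index"}), GazeSample(0.42, 0.77, 0.93))
print(event)  # {'action': 'select', 'x': 0.42, 'y': 0.77}

The design choice worth noting is that the chord, not the gaze, commits the action: gaze alone only points, which avoids the "Midas touch" problem of everything the user looks at being selected.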