
Recent Research in User Experience

Innovating UX Practice

Inspirations from software engineering

A column by Peter Hornsby
August 20, 2012

In this column, let’s take a look at some recent technology developments that promise to change the landscape of user experience in the months and years to come:

  • Leap
  • ChronoZoom
  • AffectAura
  • Phylo


Leap

While, by now, most people have used some type of wireless, gesture-based technology—for example, the Wii or Kinect—the Leap focuses on replacing the mouse and keyboard for all interactions with a computer rather than being a gaming technology per se. An iPod-sized box that tracks a user’s movements down to 1/100th of a millimeter, within a volume of 8 cubic feet, the Leap enables a user’s individual hand and finger movements to control interactions with a screen. Leap Motion claims that the Leap is much more accurate than any other input device on the market—although the product site is a little vague, claiming both 100x and 200x greater accuracy. While it sounds as though the Leap would track every hand tremor—with the attendant danger of users’ being penalized for them—hands-on reviews by Wired and Engadget have been positive.
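
To make the interaction model concrete, here is a minimal sketch, in Python, of how an application might consume per-frame data from a Leap-style tracker. The Hand and Finger types and the screen mapping are hypothetical illustrations, not the actual Leap SDK:

from dataclasses import dataclass
from typing import List, Tuple

Vector3 = Tuple[float, float, float]  # x, y, z in millimeters

@dataclass
class Finger:
    tip: Vector3  # fingertip position, resolved to roughly 0.01 mm

@dataclass
class Hand:
    palm: Vector3
    fingers: List[Finger]

def on_frame(hands: List[Hand]) -> None:
    """Per-frame callback: map tracked fingertips to pointer events."""
    for hand in hands:
        for finger in hand.fingers:
            x, y, z = finger.tip
            # Naive mapping: treat the plane z == 0 as the screen and a
            # fingertip crossing it as a click. A production design would
            # need smoothing here, so that natural hand tremor does not
            # register as input.
            if z < 0:
                print(f"click at ({x:.1f}, {y:.1f})")

# A single simulated frame: one hand, one extended finger.
on_frame([Hand(palm=(0.0, 150.0, 20.0),
               fingers=[Finger(tip=(12.3, 160.0, -1.5))])])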

At one American hospital, surgeons have used a similar hands-free technology—albeit one without the same degree of precision—to review patient records during procedures without having to scrub up again, saving time during operations.

While there will likely be other specialist applications for this type of device, the key challenge for the Leap will be ensuring that its developers think differently: it is not possible simply to extend a 2D metaphor into 3D and expect to get the most value from the device. While the Leap is unlikely to replace the keyboard and mouse for most users, it could well supplement these established input devices for certain applications such as 3D modeling. That said, as a gamer, I am really looking forward to experiencing genuine point and shoot, as shown in the demonstration video in Figure 1, when my Leap arrives.

Figure 1: Leap in action

Leap Motion has been sending out developer kits and encouraging a developer community for the Leap, so it will be interesting to see what applications the software development community comes up with for the device when the hardware is released at the end of 2012.

ChronoZoom

The ChronoZoom project is an open-source effort that attempts to visualize the history of everything, not just human history: what its creators call big history, encompassing the cosmos, Earth, humanity, and life itself. Using a timeline as the basis for navigation, users can explore any point in time and understand its events by drilling down to view text, video, and images, as shown in Figure 2.

Figure 2: ChronoZoom

The project aims to make temporal relationships between different eras in history explicit—from the beginning of known time right up to modern human history. While it is a fascinating project in terms of the information it presents and the ease with which users can navigate that information, the UX challenges the project presents are formidable:

  • using a single scale for information ranging across billions of years (see the sketch following this list)
  • prioritizing information on such a massive canvas
  • ensuring the same experience across multiple devices
  • retaining objectivity on how history is presented
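
To make the single-scale challenge concrete, the sketch below maps years before present onto a fixed-width canvas using a logarithmic scale. This is one plausible approach with illustrative constants, not necessarily ChronoZoom's actual implementation:

import math

CANVAS_WIDTH = 1000   # pixels
MAX_AGE = 13.8e9      # years before present (age of the universe)
MIN_AGE = 1.0         # clamp so the logarithm stays finite near the present

def x_position(years_before_present: float) -> float:
    """Map an age in years onto the canvas using a logarithmic scale.

    x = 0 is the Big Bang; x = CANVAS_WIDTH is the present. The log
    scale gives recent, densely documented history as much room as the
    billions of sparsely documented years that preceded it.
    """
    age = max(years_before_present, MIN_AGE)
    fraction = 1 - math.log(age / MIN_AGE) / math.log(MAX_AGE / MIN_AGE)
    return fraction * CANVAS_WIDTH

for label, age in [("Big Bang", 13.8e9), ("Earth forms", 4.5e9),
                   ("Dinosaurs die out", 66e6), ("World War I", 100)]:
    print(f"{label:>17}: x = {x_position(age):6.1f}")

On a linear scale, everything after the dinosaurs would collapse into a fraction of a pixel; the logarithmic mapping above instead places World War I around x = 803, leaving most of the canvas for the history users actually drill into.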

Microsoft’s life-logging research project, MyLifeBits, shares these challenges—with the difference that MyLifeBits focuses on the individual, while ChronoZoom is much more generalized. It will be fascinating to see what kinds of tools supporting human analysis of large data sets emerge from these projects. While the key structure of ChronoZoom is time based, to extend their usefulness, such systems will need to support rapid browsing and searching of data in other modalities, such as location and relationships.

AffectAura

One of the more intriguing developments to come out of Microsoft Research, AffectAura is an “emotional prosthetic that allows users to reflect on their emotional states over long periods of time.” Essentially, it uses a mix of portable and workstation-based sensors to monitor users via their facial expressions, posture—using Microsoft Kinect—speech, physical location, arousal—via the skin—file activity, and calendars. AffectAura logs what users are experiencing—both what is happening in the world around them and their physiological data—and provides a user interface for reflection.
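
At its core, such a system is a synchronized, multimodal event log. The following sketch shows one plausible data structure for this kind of log; the Reading and AffectLog types and the channel names are assumptions for illustration, not the implementation the AffectAura paper describes:

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class Reading:
    timestamp: datetime
    channel: str   # for example, "posture", "speech", "skin_conductance"
    value: float   # normalized sensor value, 0.0 to 1.0
    context: str   # what the user was doing: file, calendar event, and so on

@dataclass
class AffectLog:
    readings: List[Reading] = field(default_factory=list)

    def record(self, reading: Reading) -> None:
        self.readings.append(reading)

    def window(self, start: datetime, end: datetime) -> List[Reading]:
        """Return all readings in [start, end), so a reflection UI can
        line up physiological data against calendar and file activity."""
        return [r for r in self.readings if start <= r.timestamp < end]

log = AffectLog()
now = datetime(2012, 8, 20, 9, 0)
log.record(Reading(now, "skin_conductance", 0.8, "budget-review.xlsx"))
log.record(Reading(now + timedelta(minutes=5), "posture", 0.3, "budget-review.xlsx"))
print(len(log.window(now, now + timedelta(hours=1))))  # -> 2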

Superficially, this is an extension of the type of life-logging that is becoming increasingly popular, but with a primarily emotion-based focus rather than the time-based focus of ChronoZoom. What is clear from testing is that a record of emotional state in isolation is not sufficient to act as a memory trigger, and it is often particularly difficult for test participants to rationalize the physiological data. In fact, in many cases, the signals from speech, posture, and arousal levels are in conflict. However, users were able to work with the user interface to analyze their emotional experiences—looking both forward and backward in time and combining data from multiple sources—and they found this useful.

AffectAura, in its current form, is almost certainly too complex for widespread use. But one obvious application would be in psychotherapy—for instance, in helping patients suffering from depression or related conditions to understand and reflect upon their responses in more detail than has hitherto been possible.

It is interesting to note that one of the authors of a paper on AffectAura, Ashish Kapoor, previously worked in Rosalind Picard’s Affective Computing group at MIT, which has been doing research on computing and emotional state for many years. Understanding how to design user interfaces that better support different emotional states is likely to be a significant growth area in the next few years, as the technology to read and interpret emotional states matures. Projects such as MoodSense are already moving in that direction.

Phylo

Gamification, the use of game-design techniques in other applications, has been a developing area of work for some time. Phylo combines gamification techniques with crowdsourcing to harness people’s abilities to process visual information, understand the real world, and use intuition to tackle a problem called multiple sequence alignment. A multiple sequence alignment arranges sequences of genetic material—DNA, RNA, or protein—to identify similar regions, which may be the result of functional, structural, or evolutionary relationships. By understanding these alignments, it is possible to trace the source of certain genetic diseases such as breast cancer, understand their mutations, and infer their shared evolutionary origins.
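
To give a feel for the underlying problem, here is a heavily simplified sketch that scores a single pairwise alignment. The scoring values are illustrative assumptions, not Phylo's actual rules; players are, in effect, searching for the gap placement that maximizes this kind of score:

# Score one candidate alignment of two DNA sequences: matches reward
# the alignment; mismatches and gaps ('-') cost it.
MATCH, MISMATCH, GAP = 1, -1, -2

def alignment_score(a: str, b: str) -> int:
    assert len(a) == len(b), "aligned sequences must be equal length"
    score = 0
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            score += GAP
        elif x == y:
            score += MATCH
        else:
            score += MISMATCH
    return score

# Where the gap goes matters: aligning GATTACA with GATACA.
print(alignment_score("GATTACA", "GAT-ACA"))   # gap mid-sequence: 4
print(alignment_score("GATTACA", "GATACA-"))   # gap at the end:  -2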

Analyzing this information using a brute-force computational approach would be impractical: the number of possible alignments grows explosively with the number and length of the sequences. However, by using a game-based approach to solving the problem, Phylo takes advantage of human abilities in pattern recognition and visual problem solving. This approach is of interest because it demonstrates an effective synergy between humans and computers, each playing to its strengths.

Phylo is not alone in taking this approach. Zooniverse is a collection of projects, many of which ask users to identify objects in astronomical images, such as moon craters and galaxies. An interesting side effect of getting people to do these investigations is the identification of patterns that the original investigators did not even consider. One such example came from transcribing weather readings from World War I Royal Navy logs: some users began keeping track of how many people were sick each day, and peaks in this data turned out to be a signature of the 1918 influenza pandemic. Crowdsourcing offers huge potential for supporting scientific research, but only if we can crack the UX design challenges.

Peter Hornsby

Director at Edgerton Riley

Reading, Berkshire, UK

Peter has been actively involved in Web design and development since 1993, working in the defense and telecommunications industries; designing a number of interactive, Web-based systems; and advising on usability. He has also worked in education, in both industry and academia, designing and delivering both classroom-based and online training. Peter is a Director at Edgerton Riley, which provides UX consultancy and research to technology firms. Peter has a PhD in software component reuse and a Bachelor’s degree in human factors, both from Loughborough University, in Leicestershire, UK. He has presented at international conferences and written about reuse, eLearning, and organizational design.
