Optimization: Applying Moore’s Law to User Experience

Innovating UX Practice

Inspirations from software engineering

A column by Peter Hornsby
December 7, 2009

“The Web is a special effects race, fanfares on spreadsheets! Just what we need! (Instead of dealing with the important structure issues—structure, continuity, persistence of material, side-by-side intercomparison, showing what things are the same.) This is cosmetics instead of medicine. We are reliving the font madness of the eighties, a tangent which [sic] did nothing to help the structure that users need who are trying to manage content.”—Ted Nelson

Computer scientists are familiar with Moore’s law, which Gordon Moore originally proposed in 1965. This law states that the number of transistors on an integrated circuit doubles roughly every two years. The reality of Moore’s law has led to an exponential growth in the raw processing power of computers, which lets us solve more—and more complex—problems by applying more processing power to them. Over the last 20 years or so, we have increasingly devoted this comparative wealth of processing power to supporting the human element of human-computer systems, to the extent that UX design now operates as a specialist field.
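The doubling behind Moore’s law compounds quickly. A minimal sketch of that arithmetic follows; the baseline count and doubling period are illustrative parameters, not figures from this column:

```python
# Moore's law as stated above: transistor counts double roughly every
# two years. The baseline n0 here is a hypothetical starting count.
def transistors(years_elapsed, n0=1_000, doubling_period=2):
    """Project a transistor count after years_elapsed years of doubling."""
    return n0 * 2 ** (years_elapsed / doubling_period)

# Twenty years of two-year doublings is ten doublings: a 1,024-fold increase.
print(round(transistors(20) / 1_000))  # → 1024
```

Ten doublings in twenty years is why the column can speak of a "comparative wealth" of processing power to spend on the human side of the system.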


Working day-to-day in UX design, we can sometimes lose sight of the big picture. As a discipline, what is our collective goal, our grand design? What are we, as a group, trying to achieve, and for what will history remember us? Douglas Engelbart, the inventor of the mouse and one of the pioneers of human-computer interaction (HCI), became inspired to work in the field we now call user experience in 1951:

“He suddenly envisioned intellectual workers sitting at display ‘working stations,’ flying through information space, harnessing their collective intellectual capacity to solve important problems together in much more powerful ways. Harnessing collective intellect, facilitated by interactive computers, became his life’s mission at a time when computers were viewed as number-crunching tools.”—Wikipedia

As UX designers, our vision is to optimize the overall human-computer system, improve the ability of humanity to solve important problems, and help people to gain insights more effectively. In this column, I’ll look at what optimization means, as well as some of the ways in which we can optimize user experience.

The Dangers of Optimization

Any optimization involves compromise. For example, we can make a system more secure and robust, but at the cost of speed. We can optimize code, but in the process, make it harder to read—thus, more difficult to change in the future.

We can look at optimization of the human component of the human-computer system from several different levels. At the most basic level is our understanding of fundamental human capabilities and limitations—such as perception, memory, and cognitive abilities. These characteristics are subject to a normal distribution across any given population. While there are differences between individuals, we understand them to a sufficient degree to be able to design systems that work within people’s capabilities. We also understand that people’s performance may degrade under stress, so designing for lower abilities can help all users.
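The paragraph above notes that human capabilities follow a roughly normal distribution and that designing for lower abilities helps all users. A common human-factors practice is to design to a low percentile of the capability distribution; the sketch below illustrates this with a hypothetical capability (reading speed) whose mean and spread are invented for the example:

```python
from statistics import NormalDist

# Hypothetical capability: reading speed, normally distributed across the
# user population with mean 200 wpm and standard deviation 40 wpm.
reading_speed = NormalDist(mu=200, sigma=40)

# Designing for lower abilities: target the 5th-percentile reader, so 95%
# of the population can keep up with, say, timed on-screen messages.
target = reading_speed.inv_cdf(0.05)
print(round(target))  # → 134 (wpm)
```

Designing to the 5th percentile rather than the mean is the quantitative form of "designing for lower abilities can help all users."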

At a higher level are the fundamental principles upon which we design and build computer systems, which derive from our basic human capabilities. From these principles, we have devised conceptual models like the basic windowing model of a graphical user interface (GUI). Today’s children and young adults have likely been introduced to computing using a GUI. But GUIs and the command-line interfaces that were prevalent in the past differ fundamentally in how they condition users’ thinking about manipulating the underlying machine.

At a finer-grained, conceptual level are the metaphors a GUI uses—such as the Clipboard, file system, and desktop. These metaphors are typically far removed from their real-world equivalents, but in a sense, the computer has become a metaphor itself now. Concepts to which people would once have had exposure in the form of physical artifacts—such as a spreadsheet—are now familiar primarily through a computer.

The highest level at which we can optimize human capabilities for using computing power is the specialist knowledge level—the knowledge users require and employ in using a specific application we’re designing. This is the level at which much optimization of the user experience takes place—through understanding the background, goals, and knowledge of the user population. The creators of many of the tools we use in designing user experiences devised them to support our efforts to achieve this level of optimization. For example, personas, user journeys, and task analyses all focus on understanding users and their tasks—allowing us to optimize our designs for them.

Today, we rarely—if ever—focus on or challenge the lower levels at which optimization might potentially occur. To a certain extent, this is understandable, because optimization is about compromise. Because users have now become habituated to existing interaction models, requiring them to learn new concepts before they could even start to use a new computing system might represent an unwanted workload for them. So, most of our UX design projects employ standard interaction models. Optimization also suffers from the law of diminishing returns: we can put more and more effort into optimizing a computer system for less and less real return. Unfortunately, some projects have disappeared down a rabbit hole, because they’ve attempted to understand users at a high level when increasing their understanding of users’ low-level capabilities or changing the fundamental concepts of computer systems might have provided much more benefit.

Increasing Our Understanding of Human Capabilities and Constraints

The better we understand fundamental human capabilities and constraints, the more effectively we can create computer systems that build on and amplify these capabilities. Research into these areas by psychologists and human factors specialists is ongoing, and UX designers have a responsibility to remain up to date with this research and understand how we can apply it in our work. The work we do as UX professionals also expands our understanding of fundamental human capabilities. For example, the user research we conduct to learn how people are using our computer systems can help verify and expand our knowledge of their capabilities. While each study we do may involve only a few users, when we look at the results of many studies in aggregate, involving multiple user groups and many different applications, significant patterns may begin to emerge.

We can represent our understanding of these basic human capabilities and constraints using our existing tools. For example, creating a root persona that describes basic human capabilities would—like the anthropometric data we use in the design of physical objects—help us to characterize our users’ basic capabilities and work within their limitations. We could design systems that meet users’ needs by revealing different information or functionality, depending on their needs.

Going Beyond Fundamental System Concepts

The fundamental concepts of modern GUIs remain basically unchanged from those we designed for systems that had far fewer clock cycles to spare. The window, the Clipboard, toolbars, and wizards have evolved, but there have been few or no genuinely new basic concepts over the past 20 years.

“Oh, sure, the Macintosh interface is intuitive! I’ve always thought deep in my heart that Command-Z should undo things.”—Margy Levine

The QWERTY keyboard provides another example of a less than optimal, but persistent standard. It’s a kludge! By virtually all metrics, it is a poor design that slows us down and reduces our performance. Many better designs have been proposed, with demonstrably better performance, but the QWERTY layout lumbers on through the momentum of familiarity.

While building GUIs on known concepts can aid learning by users, the danger is that the design concepts we use on a daily basis might condition our thinking to such an extent that we cannot break out of old mindsets to take a completely different approach—perhaps one where file structures, browsers, and even the GUI get replaced by something more flexible and adaptable. Like coding a GUI in LISP or writing language processing in Smalltalk, conducting a thought experiment in which we design from the ground up, attempting to go beyond our current ideas, could provide us with fresh insights into what we think we already know and give us a new perspective on the world.

To a certain extent, it is unsurprising that developments in technology are, in many cases, the driving force behind new user interface concepts. We have become so deeply immersed in our existing ideas that it can be hard to take a step back and see what might lie beyond. On the other hand, virtually all forms of technology have uses their original designers did not anticipate. Humans are adept at using things for purposes other than that for which they were originally intended.

In Conclusion

If we are to take significant steps forward in the state of the art for UX design, we need to continually question our base assumptions and refresh our knowledge about where the boundaries of human performance lie. Human brains are capable of adapting to changing situations and understanding new concepts. If the rewards of innovation are sufficiently great, we can change our current conceptions about what computer systems should be. While it is possible that the current state of the art is the best we can manage, I hope not. 

Director at Edgerton Riley

Reading, Berkshire, UK

Peter has been actively involved in Web design and development since 1993, working in the defense and telecommunications industries; designing a number of interactive, Web-based systems; and advising on usability. He has also worked in education, in both industry and academia, designing and delivering both classroom-based and online training. Peter is a Director at Edgerton Riley, which provides UX consultancy and research to technology firms. Peter has a PhD in software component reuse and a Bachelor’s degree in human factors, both from Loughborough University, in Leicestershire, UK. He has presented at international conferences and written about reuse, eLearning, and organizational design.