
The User Experience of Enterprise Software Matters, Part 2: Strategic User Experience

Envision the Future

The role UX professionals play

A column by Paul J. Sherman
March 23, 2009

In my previous column, “The User Experience of Enterprise Software Matters,” I argued that organizations making enterprise-level technology selections often do an incomplete job of assessing the real-world effects of the new applications they impose on their staffs’ workflows and processes, saying:

“The technology selection process typically neglects methods of evaluating the goodness of fit between the enterprise users’ processes, workflow, and needs, and the vendors’ solutions. Organizations could avoid many a rollout disaster simply by testing the usability of vendors’ solutions with employees during a trial phase.”

I also encouraged enterprises to demand more usable software that meets their organizations’ needs.

In this column, I’ll provide a technology selection framework that can help enterprises better assess the usability and appropriateness of enterprise applications they’re considering purchasing, with the goal of ensuring their IT (Information Technology) investments deliver fully on their value propositions.


It’s Not Rocket Science

As you may have suspected—and as UX professionals are fond of saying—the answer to this problem is not rocket science. It’s actually pretty simple: Organizations making technology investments need to do a few things in addition to their typical processes for evaluating technology:

  • Identify and describe the target user groups that currently perform the task or process the software will automate, so their characteristics, motivations, and appetite for change are well understood.
  • Model and describe the current workflow the target users employ to accomplish the task or process, using simple methods like task analysis and time-on-task measurement. (See the sketch following this list.)
  • Discover what the target users and other staff typically do before and after the task being automated, to gain an understanding of whether—and, if so, how—you can automate the steps that precede and follow the task or somehow include them in the potential solution.
  • Finally—and only after doing all of the above—begin to assess the technology solutions in detail for their goodness of fit to the qualitative, real-world characteristics of the target users and the existing workflow.
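
Time-on-task measurement does not require elaborate tooling. The sketch below, in Python, shows one way you might tabulate observed step timings into a simple baseline for the current workflow. The step names and timings are purely hypothetical, included only to illustrate the idea.

```python
# A minimal sketch of summarizing time-on-task observations for the
# current, pre-automation workflow. The step names and timings below
# are hypothetical placeholders, not data from any real assessment.
from statistics import mean, median

# Observed seconds per workflow step, one value per observed user session
observations = {
    "locate work order": [42, 55, 38, 61],
    "record inspection result": [95, 120, 88, 104],
    "log data for reporting": [180, 150, 210, 165],
}

def summarize(step_times):
    """Return simple descriptive statistics for one workflow step."""
    return {
        "n": len(step_times),
        "mean_s": round(mean(step_times), 1),
        "median_s": median(step_times),
        "max_s": max(step_times),
    }

for step, times in observations.items():
    print(step, summarize(times))
```

A baseline like this gives you something concrete to compare candidate systems against in the final step.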

At this point in technology assessment, feature lists and demos matter a whole lot less than actually putting real target users on the system and having them perform their tasks. Does doing this consume more time and resources? Yes. Is it worth it? Absolutely! Not doing this increases the risk that your organization will suffer reduced productivity, decreased morale, and the other risks attendant on technology rejection that I described in Part 1. And, just in case you don’t really buy the examples I described there, let me relate two more stories of technology rejection that I recently encountered—this time, in high-risk, mission-critical environments.

Stories of Technology Rejection

Let me tell you a couple of stories about users who rejected new technology.

Story of a Carrier Flight Deck Crew

Recently, I met someone who had been an aircraft carrier flight deck crewman. During his service on the carrier, the Navy had automated the deck crews’ process for preflight aircraft inspection. Before adopting the new process, the deck crew used a paper checklist on a clipboard—both as a memory aid and for data capture. They later logged the data into a database for reporting and safety analysis.

The crewman described the automated process the Navy had deployed to replace their paper-and-pencil inspection process. It required the deck crew to use a hand-held device for both data entry and scanning during their inspections—entering data manually at certain points and connecting the device directly to the aircraft to capture instrumentation data at other points. The crewman was adamant in his view that the device had detracted from the deck crews’ ability to rely on their experience and exercise their judgment, because they interacted primarily with the scanning device rather than the aircraft itself.

Story of a Beat Cop

During a recent conversation, a usability test participant who was a patrolman shared this interesting anecdote: His municipality had recently “upgraded” the computer system in the cruisers, which patrolmen used for reporting and receiving information in the field. This cop and others had come to the conclusion that the new system, with its high-resolution graphics and touch-screen interface, actually slowed down the reporting and receiving of information. More critically, because using the computer required greater attention and more time, it had also reduced their situational awareness, increasing risk to them and the citizens they served.

Adopting Enterprise Software User Experience Assessment

So, I’ve discussed the why and the what. Let’s talk about the how. How do you get your organization to take the human factor into account when considering large technology investments that will change how your workers carry out their tasks?

If you’ve been reading my column for a while, you know I’m all about the UX professional as change agent. My advice to UX professionals who want to get involved in assessing enterprise software is as follows:

  1. Figure out what the current technology selection process is, so you can talk intelligently about how you propose to change the process.
  2. Figure out who has skin in the game. Who gets a feather in her cap if you succeed? Don’t make enemies; make allies. And be prepared to share the credit. You don’t have to give all of the credit to the IS (Information Systems) VP, but make sure you’re paying proper tribute. It’s her ball, and she can take it home if she wants.
  3. Define your key metrics and your process for assessing the user experience of the software. If you’re part of an internal UX team in a big corporation and you’d like to help your IT group assess several competing applications for enterprise-wide deployment, you’ve got a ready-made usability test participant group of IT professionals. And if the application is, say, an expense reporting tool, your metrics are likely to be training footprint, errors, and efficiency. (See the sketch following this list.)
  4. Run a pilot assessment using the new process. Show some immediate value by identifying an issue the organization would have discovered only after deployment under the old process. Nothing opens up doors like a demonstrated success. For example, I was able to help one of my former companies reduce a key negative metric in their customer-facing IVR (Interactive Voice Response) system by making some simple changes in the IVR script. That success led to several opportunities to evaluate the user experience of other customer-facing systems in which UX professionals had not previously been involved.
  5. Launch and monitor the new process. Once you’ve run your pilot assessment—demonstrating the value of assessing the user experience of enterprise software as part of the selection process—it’s time to formalize the relationship between your discipline and IT/IS. At this point, you have to act like a sales rep. You need to close the deal. Ask your organization to formally commit to assessing the user experience of every technology solution they consider.
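
To make the metrics in step 3 concrete, here is a minimal, purely illustrative sketch of tallying pilot usability-test results into comparable error and efficiency scores for competing candidates. The vendor names, sessions, and numbers are invented for illustration, not real test data.

```python
# A minimal, purely illustrative sketch of comparing candidate enterprise
# applications on the metrics named in step 3. The candidate names,
# sessions, and figures below are hypothetical, not real test results.
pilot_results = {
    "Vendor A expense tool": [
        {"task_time_s": 310, "errors": 2, "completed": True},
        {"task_time_s": 275, "errors": 0, "completed": True},
        {"task_time_s": 360, "errors": 4, "completed": False},
    ],
    "Vendor B expense tool": [
        {"task_time_s": 240, "errors": 1, "completed": True},
        {"task_time_s": 205, "errors": 0, "completed": True},
        {"task_time_s": 290, "errors": 1, "completed": True},
    ],
}

def score(sessions):
    """Summarize efficiency, errors, and completion rate for one candidate."""
    n = len(sessions)
    return {
        "mean_task_time_s": round(sum(s["task_time_s"] for s in sessions) / n, 1),
        "mean_errors": round(sum(s["errors"] for s in sessions) / n, 2),
        "completion_rate": round(sum(s["completed"] for s in sessions) / n, 2),
    }

for candidate, sessions in pilot_results.items():
    print(candidate, score(sessions))
```

However you capture the data, the point is to reduce each candidate to a small set of comparable, user-centered numbers before anyone signs a contract.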

Like most organizational interventions, the one I’ve just described follows the general principles the Six Sigma canon lays out. Specifically, it follows the DMAIC (Define, Measure, Analyze, Improve, Control) method of systematic process improvement, which rigorously tracks and measures the efficacy of process change. While it’s fair to say some of the shine is off Six Sigma—both in the business press and in industry—its core principles are still sound.

Changing How We Assess Enterprise Software Changes Vendor Behavior

My main point is this: Assessing enterprise software vendors’ offerings for their goodness of fit to people’s workflows, processes, and motivations puts new kinds of pressure on those vendors to build their software with more attention to satisfying users’ real needs. The result can only be more usable, better-designed software. Remember, if you and vendors’ other customers don’t demand software that satisfies your workers’ needs, vendors have very little incentive to deliver it.

So, use the approaches and methods I’ve described in this column to help your organization discover what its people really need. Then, use your skills as a change agent to institutionalize a better technology selection process, one that ensures all of the enterprise software your organization purchases fulfills those needs.

Founder and Principal Consultant at ShermanUX

Assistant Professor and Coordinator for the Master of Science in User Experience Design Program at Kent State University

Cleveland, Ohio, USA

ShermanUX provides a range of services, including research, design, evaluation, UX strategy, training, and rapid contextual innovation. Paul has worked in the field of usability and user-centered design for the past 13 years. He was most recently Senior Director of User-Centered Design at Sage Software in Atlanta, Georgia, where he led efforts to redesign the user interface and improve the overall customer experience of Peachtree Accounting and several other business management applications. While at Sage, Paul designed and implemented a customer-centric contextual innovation program that sought to identify new product and service opportunities by observing small businesses in the wild. Paul also led his team’s effort to modernize and bring consistency to Sage North America product user interfaces on both the desktop and the Web. In the 1990s, Paul was a Member of Technical Staff at Lucent Technologies in New Jersey, where he led the development of cross-product user interface standards for telecommunications management applications. As a consultant, Paul has conducted usability testing and user interface design for banking, accounting, and tax preparation applications, Web applications for financial planning and portfolio management, and ecommerce Web sites. In 1997, Paul received his PhD from the University of Texas at Austin. His research focused on how pilots’ use of computers and automated systems on the flight deck affects their individual and team performance. Paul is Past President of the Usability Professionals’ Association, was the founding President of the UPA Dallas/Fort Worth chapter, and currently serves on the UPA Board of Directors and Executive Committee. Paul was Editor of and contributed several chapters to the book Usability Success Stories: How Organizations Improve by Making Easier-to-Use Software and Web Sites, which Gower published in October 2006. He has presented at conferences in North America, Asia, Europe, and South America.
