The User Experience of Enterprise Software Matters, Part 2: Strategic User Experience

By Paul J. Sherman

Published: March 23, 2009

“I’ll provide a technology selection framework that can help enterprises better assess the usability and appropriateness of enterprise applications they’re considering purchasing.”

In my previous column, “The User Experience of Enterprise Software Matters,” I argued that organizations making enterprise-level technology selections often do an incomplete job of assessing the real-world effects of the new applications they impose on their staffs’ workflows and processes, saying:

“The technology selection process typically neglects methods of evaluating the goodness of fit between the enterprise users’ processes, workflow, and needs, and the vendors’ solutions. Organizations could avoid many a rollout disaster simply by testing the usability of vendors’ solutions with employees during a trial phase.”

I also encouraged enterprises to demand more usable software that meets their organizations’ needs.

In this column, I’ll provide a technology selection framework that can help enterprises better assess the usability and appropriateness of enterprise applications they’re considering purchasing, with the goal of ensuring their IT (Information Technology) investments deliver fully on their value propositions.

It’s Not Rocket Science

“Organizations making technology investments need to do a few things in addition to their typical processes for evaluating technology.”

As you may have suspected—and as UX professionals are fond of saying—the answer to this problem is not rocket science. It’s actually pretty simple: Organizations making technology investments need to do a few things in addition to their typical processes for evaluating technology:

  • Identify and describe the target user groups that currently perform the task or process the software will automate, so their characteristics, motivations, and appetite for change are well understood.
  • Model and describe the current workflow the target users employ to accomplish the task or process, using simple methods like task analysis and time-on-task measurement. (A short sketch of what such measurement might look like follows this list.)
  • Discover what the target users and other staff typically do before and after the task being automated, to gain an understanding of whether—and, if so, how—you can automate the task’s precursors and follow-on activities or somehow include them in the potential solution.
  • Finally—and only after doing all of the above—begin to assess the technology solutions in detail for their goodness of fit to the qualitative, real-world characteristics of the target users and the existing workflow.
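
None of this requires specialized tooling. The Python script below is a minimal, purely illustrative sketch: it assumes you have logged each observed task attempt in a hypothetical observations.csv file with invented columns (participant, task, seconds, and errors) and simply summarizes baseline time-on-task and error counts for the current workflow. The file name and column names are assumptions for illustration, not part of any prescribed method.

    # A minimal, hypothetical sketch: summarize baseline time-on-task observations.
    # Assumes an invented observations.csv with columns: participant, task, seconds, errors.
    import csv
    from collections import defaultdict
    from statistics import mean, median

    def summarize(path="observations.csv"):
        by_task = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                by_task[row["task"]].append((float(row["seconds"]), int(row["errors"])))
        for task, attempts in sorted(by_task.items()):
            times = [seconds for seconds, _ in attempts]
            errors = [count for _, count in attempts]
            print(f"{task}: {len(attempts)} attempts, "
                  f"median time {median(times):.0f}s, "
                  f"mean time {mean(times):.0f}s, "
                  f"errors per attempt {mean(errors):.2f}")

    if __name__ == "__main__":
        summarize()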

At this point in technology assessment, feature lists and demos matter a whole lot less than actually putting real target users on the system and having them perform their tasks. Does doing this consume more time and resources? Yes. Is it worth it? Absolutely! Skipping it increases the risk that your organization will suffer reduced productivity, decreased morale, and the other consequences of technology rejection I described in Part 1. And, just in case you don’t really buy the examples I described there, let me relate two more stories of technology rejection that I recently encountered—this time, in high-risk, mission-critical environments.

Stories of Technology Rejection

Let me tell you a couple of stories about users who rejected new technology.

Story of a Carrier Flight Deck Crew

Recently, I met someone who had been an aircraft carrier flight deck crewman. During his service on the carrier, the Navy had automated the deck crews’ process for preflight aircraft inspection. Before adopting the new process, the deck crew used a paper checklist on a clipboard—both as a memory aid and for data capture. They later logged the data into a database for reporting and safety analysis.

The crewman described the automated process the Navy had deployed to replace their paper-and-pencil inspection process. It required the deck crew to use a hand-held device for both data entry and scanning during their inspections—entering data manually at certain points and connecting the device directly to the aircraft to capture instrumentation data at other points. The crewman was adamant in his view that the device had detracted from the deck crews’ ability to rely on their experience and exercise their judgment, because they interacted primarily with the scanning device rather than the aircraft itself.

Story of a Beat Cop

During a recent conversation, a usability test participant who was a patrolman shared this interesting anecdote: His municipality had recently “upgraded” the computer system in the cruisers, which patrolmen used for reporting and receiving information in the field. He and other officers had come to the conclusion that the new system, with its high-resolution graphics and touch-screen interface, actually slowed down the reporting and receiving of information. More critically, because using the computer required greater attention and more time, it had also reduced their situational awareness, increasing risk to them and the citizens they served.

Adopting Enterprise Software User Experience Assessment

“How do you get your organization to take the human factor into account when considering large technology investments that will change how your workers carry out their tasks?”

So, I’ve discussed the why and the what. Let’s talk about the how. How do you get your organization to take the human factor into account when considering large technology investments that will change how your workers carry out their tasks?

If you’ve been reading my column for a while, you know I’m all about the UX professional as change agent. My advice to UX professionals who want to get involved in assessing enterprise software is as follows:

  1. Figure out what the current technology selection process is, so you can talk intelligently about how you propose to change the process.
  2. Figure out who’s got what skin in the game. Who gets a feather in her cap if you succeed? Don’t make enemies; make allies. And be prepared to share the credit. You needn’t give all of the credit to the IS (Information Systems) VP, but make sure you’re paying proper tribute. It’s her ball, and she can take it home if she wants.
  3. Define your key metrics and your process for assessing the user experience of the software. If you’re part of an internal UX team in a big corporation and you’d like to help your IT group assess several competing applications for enterprise-wide deployment, you’ve got a ready-made usability test participant group of IT professionals. And if the application is, say, an expense reporting tool, your metrics are likely to be training footprint, errors, and efficiency. (A brief sketch comparing candidates on such metrics follows this list.)
  4. Run a pilot assessment using the new process. Show some immediate value by identifying an issue that, under the old process, your organization would have discovered only after deployment. Nothing opens doors like a demonstrated success. For example, I was able to help one of my former companies reduce a key negative metric in their customer-facing IVR (Interactive Voice Response) system by making some simple changes to the IVR script. This resulted in several opportunities to get involved in evaluating the user experiences of other customer-facing systems where UX professionals had not previously been involved.
  5. Launch and monitor the new process. Once you’ve run your pilot assessment—demonstrating the value of assessing the user experience of enterprise software as part of the selection process—it’s time to formalize the relationship between your discipline and IT/IS. At this point, you have to act like a sales rep. You need to close the deal. Ask your organization to formally commit to assessing the user experience of every technology solution it considers.
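
To make the metrics in step 3 concrete, here is a small, hypothetical comparison of two candidate applications on time-on-task and errors from a pilot test. The vendor names and session data are invented for illustration; in practice you would substitute results from your own pilot sessions.

    # Hypothetical sketch for step 3: compare candidate applications on pilot metrics.
    # The vendor names and session data below are invented for illustration only.
    from statistics import mean

    # (task_seconds, error_count) for each pilot participant, per candidate
    pilot_results = {
        "Vendor A expense tool": [(312, 4), (280, 2), (355, 5), (298, 3)],
        "Vendor B expense tool": [(205, 1), (240, 2), (190, 0), (228, 1)],
    }

    for candidate, sessions in pilot_results.items():
        times = [seconds for seconds, _ in sessions]
        errors = [count for _, count in sessions]
        print(f"{candidate}: mean time-on-task {mean(times):.0f}s, "
              f"mean errors {mean(errors):.1f} per participant")

Even a rough comparison like this, run on real pilot data, gives the selection team numbers to weigh alongside feature lists and vendor demos.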

Like most organizational interventions, the one I’ve just described follows the general principles the Six Sigma canon lays out. Specifically, it follows the DMAIC (Define, Measure, Analyze, Improve, Control) method of systematic process improvement, which rigorously tracks and measures the efficacy of process change. While it’s fair to say some of the shine is off Six Sigma—both in the business press and in industry—its core principles are still sound.

Changing How We Assess Enterprise Software Changes Vendor Behavior

“Assessing enterprise software vendors’ offerings for their goodness of fit to people’s workflows, processes, and motivations puts new kinds of pressure on those vendors to build their software with more attention to satisfying all of these needs.”

My main point is this: Assessing enterprise software vendors’ offerings for their goodness of fit to people’s workflows, processes, and motivations puts new kinds of pressure on those vendors to build their software with more attention to satisfying all of these needs. The result can only be more usable, better-designed software. Remember, if you and your vendors’ other customers don’t demand that they satisfy your workers’ needs, they have very little incentive to deliver software that actually meets those needs.

So, use the approaches and methods I’ve described in this column to help your organization discover what its people really need. Then, use your skills as a change agent to institutionalize a better technology selection process that ensures all of the enterprise software your organization purchases fulfills its needs.

1 Comment

We—the business applications usability group at my company—started piloting our process in January 2008 with an Expense Reporting System product evaluation, and ended up doing just as you say—refining, launching, watching IT adopt. We’ve been pretty successful, and I would say are now considered pretty important players in the procurement decisions.

I think the next step to this—or flip side of the coin—is to more clearly define our role after the purchase has been made—for example, how best to configure the software for users.
