Just a few years ago, UX professionals weren’t talking about topics such as user privacy, technological governance, cybersecurity, or sustainable information technology (IT) as much as we are now. We have come to an inflection point in the history of the Web and are now seeing some unintended implications of the digital innovations that have bubbled up on the Internet—from online fraud to mental-health issues to unsustainable consumerism. However, the ways in which we, as UX professionals, do our work have not yet caught up with these issues.
As I touched upon in a previous column, UX design practices still hinge upon principles that maximize productivity, efficiency, and cognitive ease, in ways that are fundamentally at odds with some of the priorities and values that are emerging today. The formalization of these principles is grounded in the notion of user-centered design (UCD), a paradigm that gained steam at the onset of the Internet era, in the late ’90s. [1]
If people’s attitudes toward and needs for digital experiences are shifting, why are we still using the same UX design methods that made sense for the burgeoning Web?
One way to answer this question is by acknowledging that the digital world has become an inescapable socio-economic paradigm—one that is very difficult for users to opt out of. (At present, it actually seems hard to imagine that there might be better alternatives.)
A second point to consider is that people’s relationship with the online world is highly nuanced and never straightforward. As some research has pointed out, Internet usage keeps growing, as does the adoption of digital tools to meet the needs of our everyday lives—for example, banking and payment apps. [2]
But, while many more users now seem more confident in navigating the Web, few are discerning when it comes to the basics of online architectures. According to one study, about one in ten Internet users are unaware of or unsure about any of the methods that companies use to collect their personal information online, and about one in five are unaware of the use of algorithms to tailor what people see online. So a gap seems to exist between the segments of the public who are demanding better transparency, ethics, and security online and the large swathes of the population who remain oblivious or simply indifferent to the potential risks.
A third point is that design interventions whose aim is changing the fundamental efficiency-based paradigm for user experiences can backfire, or at least prove ineffective, if they’re not designed properly, as I’ll explain shortly.
In this column, I’ll explore the dynamic tension between usability- and efficiency-based design practices and users’ contrasting needs by focusing on the specific issue of overreliance on artificial intelligence (AI). I’ll address the following questions:
What is AI overreliance?
How is the notion of AI overreliance intertwined with the broader paradigms of usability- and efficiency-centered design?
What are the implications of AI overreliance?
What are some possible UX design approaches for minimizing AI overreliance?
But first, it might be useful to define what we mean by usability- and efficiency-centered design and why this is still the dominant approach to designing digital experiences.
The Importance of User-Centered Design in Shaping Online Experiences
User-centered design, which we can define as usability- and efficiency-based design, emerged as a paradigm shift in Web development during the late 20th century—although its origins predate the Web.
This approach focuses on achieving optimal performance and functionality by evaluating design solutions on the basis of their ability to meet specific objectives within a given context. Within the realm of user-interface (UI) and UX design, usability- and efficiency-based design principles gained prominence in the 1990s and early 2000s, coinciding with the rapid growth of personal computing, then the Internet. [3]
These design principles influenced UX design by emphasizing the importance of creating user interfaces that maximize users’ productivity, minimize cognitive load, and enhance overall task performance. UX designers began to prioritize factors such as easy-to-use navigation, higher click-through rates, and streamlined information architectures to improve efficiency and user satisfaction.
The application of efficiency-based UX design principles led to the development of more user-centered interfaces, contributing to the evolution of digital products that are not only aesthetically pleasing but also highly functional and efficient—products that serve the needs of immediacy and convenience of use that are the hallmarks of the online economy.
Despite the many benefits of user-centered design, some less obvious implications began to emerge in the second decade of the 21st century. Prominent critics of user-centered design have now begun to challenge the widely held belief that innovation processes should always start with observing mainstream or lead users. They argue that a singular focus on users’ needs could limit the potential for more radical and meaningful innovations that could address broader societal and environmental concerns.
Progressing from Cognitive Ease to AI Overreliance: Does Great Usability Make Users More Vulnerable?
A significant branch of research has been focusing on the impacts of technology on human cognitive abilities. Researchers have investigated the use of smartphones within the context of their impacts on users’ ability to think, remember, and pay attention. [4, 5] Other studies have focused on the impacts of map apps on our wayfinding and navigation abilities, particularly among younger users. [6] Although the results of these studies are not conclusive and we recognize that mobile phones are flexible, powerful tools that can augment human cognition when people use them prudently, there are examples in which habitual involvement with technology seems to lead to greater user vulnerability.
One such case is online scams, which tend to occur more frequently when users apply heuristic processing—that is, quick, effortless decision-making, using simple rules or cognitive shortcuts that minimize cognitive load—rather than when they engage with information in a more systematic and critical way. [7]
Empirical evidence exists that some form of cognitive friction—which is discouraged by a classical user-centered perspective—can actually enhance users’ understanding of online information. This phenomenon has been investigated primarily within the contexts of data visualization, infographics, and charts. [8]
In short, while concluding that technology makes people dumber might not be entirely accurate, there is mounting evidence of potential unintended consequences that could accompany habitual exposure to and usage of digital tools. Nowhere is this more evident than in the realm of artificial intelligence.
Overreliance in Human and AI Interactions: An Unavoidable Issue?
Research indicates that individuals using AI-powered, decision-support tools often exhibit excessive reliance on an AI’s recommendations, accepting suggestions without critically evaluating their accuracy or appropriateness. [9] The phenomenon of overreliance can lead to suboptimal decision-making and potential errors.
Researchers have applied dual-process theory in analyzing how users behave when interacting with AI systems, similar to its use in understanding people’s responses to online fraud. Dual-process theory posits that human cognition operates through two distinct systems:
System-1, intuitive thinking—This system relies heavily on heuristics and mental shortcuts in rapidly processing information and making quick decisions. It is automatic, effortless, and often emotion driven.
System-2, reflective thinking—This system involves slower, more deliberate and analytical thinking. It requires conscious effort and is capable of logical reasoning and critical analysis.
Within the context of AI interactions, users tend to default to System-1 thinking, which could lead to the following consequences:
rapid acceptance of an AI’s suggestions without adequate scrutiny
overconfidence in an AI’s capabilities
reduced engagement of the user’s critical-thinking skills
The dangers of this behavior have already become a notable concern within the scientific and academic communities. According to some estimates, up to 70% of the references within AI-generated academic works are inaccurate. Frequently, the culprit is the author’s failure to apply critical thinking and human expertise to validate the AI’s output.
At the enterprise level, this same behavior could yield a welter of negative outcomes such as reduced human oversight, systemic biases and misinterpretations, loss of trust, reputational damage, and legal complications.
Mitigating AI Overreliance: Explainable AI Versus Cognitive Friction
So what can we do to mitigate AI overreliance?
One approach is that of Explainable AI (XAI). The core idea behind XAI is to provide users with clear explanations or justifications for AI-generated recommendations. This approach should enable users to better understand the AI’s decision-making process, theoretically allowing users to identify flawed reasoning and reject incorrect suggestions.
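As a rough illustration of this pattern, here’s a minimal TypeScript sketch that presents an AI recommendation together with the factors behind it. Everything in this sketch is hypothetical: the Factor and Recommendation types and the renderExplanation function are invented for illustration. A real XAI system would derive such factors from the model itself, for example, through feature attribution.

```typescript
// A minimal sketch of the XAI pattern: pair the AI's recommendation
// with the reasoning behind it, so users can judge whether that
// reasoning is sound. All names here are hypothetical.

interface Factor {
  label: string;  // an input the model relied on
  weight: number; // its relative contribution, 0..1
}

interface Recommendation {
  suggestion: string;
  confidence: number; // model-reported confidence, 0..1
  factors: Factor[];  // for example, output of a feature-attribution method
}

// Format the recommendation alongside its top contributing factors.
function renderExplanation(rec: Recommendation): string {
  const topFactors = [...rec.factors]
    .sort((a, b) => b.weight - a.weight)
    .slice(0, 3)
    .map((f) => `- ${f.label} (${Math.round(f.weight * 100)}% influence)`)
    .join("\n");
  return (
    `Suggested: ${rec.suggestion} ` +
    `(confidence: ${Math.round(rec.confidence * 100)}%)\n` +
    `Because of:\n${topFactors}`
  );
}

// Usage: a hypothetical diagnosis recommendation.
console.log(
  renderExplanation({
    suggestion: "Diagnosis A",
    confidence: 0.82,
    factors: [
      { label: "Symptom duration", weight: 0.5 },
      { label: "Lab result X", weight: 0.3 },
      { label: "Patient age", weight: 0.2 },
    ],
  })
);
```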
However, contrary to expectations, research indicates that XAI has not significantly mitigated the issue of human overreliance on AI. Even when users see explanations for AI-generated recommendations, they still tend to make less optimal decisions when the AI provides incorrect or subpar solutions.
In a study examining the use of AI in clinical decision-support systems (CDSS), researchers found that providing more detailed explanations of the facts the AI has used in making a diagnosis positively influenced users’ trust in the AI system. [10] However, this increased trust also led to issues of overreliance, in which users became excessively dependent on the AI’s recommendations, accepting them without sufficient critical evaluation.
Conversely, when participants received less detailed explanations, they began to question the reliability of the system, which resulted in self-reliance issues: users tended to discount the AI’s insights, overrelying on their own judgment.
A paper titled “To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making” introduced the notion of cognitive forcing functions. [11] In this approach, the design implements strategies that, at the moment of decision-making, interrupt the user’s automatic thought processes and stimulate critical, analytical reasoning. Examples include checklists, diagnostics, time-outs, and explicitly asking the user to make a judgment of their own. In the paper’s experiments, cognitive forcing functions were markedly more effective than conventional explainable-AI approaches in reducing overreliance on AI systems, although these approaches may require further testing.
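To make this concrete, here’s a minimal TypeScript sketch of one such forcing function: requiring users to commit to their own judgment before the interface reveals the AI’s suggestion. The names in this sketch, such as ForcedJudgmentGate, are hypothetical; it illustrates the interaction pattern rather than any specific system from the study.

```typescript
// A minimal sketch of a cognitive forcing function: the interface
// withholds the AI's suggestion until the user has recorded a
// judgment of their own. All names here are hypothetical.

interface DecisionSession {
  aiSuggestion: string;
  userJudgment?: string;
}

class ForcedJudgmentGate {
  constructor(private session: DecisionSession) {}

  // Step 1: the user must commit to their own answer first.
  recordUserJudgment(judgment: string): void {
    this.session.userJudgment = judgment;
  }

  // Step 2: only then does the interface disclose the AI's suggestion,
  // prompting an explicit comparison rather than passive acceptance.
  revealAiSuggestion(): string {
    if (!this.session.userJudgment) {
      throw new Error("Record your own judgment before viewing the AI's.");
    }
    if (this.session.userJudgment === this.session.aiSuggestion) {
      return `The AI agrees with you: ${this.session.aiSuggestion}`;
    }
    return (
      `The AI disagrees. You said "${this.session.userJudgment}"; ` +
      `it suggests "${this.session.aiSuggestion}". Review before proceeding.`
    );
  }
}

// Usage: the gate interrupts System-1 acceptance by forcing a System-2 step.
const gate = new ForcedJudgmentGate({ aiSuggestion: "Diagnosis A" });
gate.recordUserJudgment("Diagnosis B");
console.log(gate.revealAiSuggestion());
```

The friction here is deliberate: by withholding the AI’s answer until the user has formed one, the design nudges the user from System-1 acceptance toward System-2 comparison.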
We can draw a parallel with users’ ability to interpret and retain information from charts. While common usability guidelines discourage the use of embellished charts and data visualizations, studies have found that people’s accuracy in describing embellished charts was no worse than their accuracy for plain charts, and their recall after a gap of two to three weeks was significantly better. [12]
In both cases, the research findings led researchers to question some of the premises of our traditional understanding of user-centered design, which typically emphasizes maximizing efficiency in user interactions.
Conclusion: Reconsidering UX Paradigms to Support Users’ Relationships with New Technologies
This new perspective, which intentionally introduces elements that disrupt quick, intuitive decision-making, challenges long-held UX design principles. It suggests that, in certain contexts—particularly when users are interacting with AI systems—an optimal user experience might actually require a deliberate increase in cognitive engagement.
This shift represents a nuanced evolution in our understanding of effective user-centered design, especially in situations where critical thinking and careful consideration are essential to making informed decisions.
As a strategic designer and UX specialist at IBM, Silvia helps enterprises pursue human-centered innovation by leveraging new technologies and creating compelling user experiences. Silvia facilitates research, synthesizes product insights, and designs minimum-viable products (MVPs) that capture the potential of these technologies in addressing both user and business needs. Silvia is a passionate, independent UX researcher who focuses on the topics of digital humanism, change management, and service design.