Artificial-intelligence (AI) technology can exhibit behaviors that resemble human intelligence. With recent advances, AI has become more pervasive. Insurance companies use AI to process claims, and banks rely on automated stock trading. People can perform self-checks for skin cancer using smart apps such as Skinvision or HealthAI-Skin Cancer, or they can interact with intelligent services through user interfaces such as Google Home or Amazon Echo, which are themselves smart because they understand natural-language queries and respond in natural language as well.
Most users of AI technologies lack sufficient insight into their inner workings to understand how these technologies arrive at their outputs. This, in turn, makes it hard for people to trust the technology, learn from it, or correctly predict how it will behave in future situations.
Some Examples of the Use of AI in User Interfaces
A concrete example of the use of AI is Google’s search engine. Most Internet users neither know nor understand the algorithm that Google uses to find search results that match their queries. One consequence is that Web-site owners who want users to find their Web site must hire search-engine optimization (SEO) specialists to manipulate their page content and format it in ways that improve their Web pages’ rankings. These SEO specialists do not have an exhaustive understanding of Google’s search algorithm either, but they do know what changes to make to a page to have a positive impact on its ranking.
Some might describe such a search algorithm as an example of a black-box paradigm. However, comparing AI to a black box doesn’t really describe it adequately because, in a black-box scenario, users receive predictable outputs for their inputs. With AI, because of the nature of the self-learning algorithms these technologies use, this might not be the case. For example, the Nest smart thermostat autonomously chose temperature settings that were incomprehensible to its users. According to Kara Pernice, it neither provided an explanation of its decision making nor allowed users to override its settings. [1]
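To make this distinction concrete, consider the following minimal Python sketch. It contrasts a fixed rule, whose outputs are fully predictable, with a hypothetical self-learning controller whose response to the same input drifts as it adapts to user overrides. The thermostat logic is invented purely for illustration and does not represent Nest’s actual algorithm.

# Illustrative sketch only: a hypothetical thermostat, not the Nest algorithm.
# A fixed "black box" maps the same input to the same output every time,
# whereas a self-learning controller adapts, so identical inputs can
# yield different outputs over time.

def fixed_thermostat(hour: int) -> float:
    """Black box with a predictable rule: night setback, daytime comfort."""
    return 17.0 if hour < 6 or hour >= 22 else 21.0

class LearningThermostat:
    """Adjusts its setpoint toward whatever the user last chose."""
    def __init__(self, setpoint: float = 21.0, learning_rate: float = 0.5):
        self.setpoint = setpoint
        self.learning_rate = learning_rate

    def observe_override(self, user_choice: float) -> None:
        # Nudge the learned setpoint toward the user's manual override.
        self.setpoint += self.learning_rate * (user_choice - self.setpoint)

    def target(self, hour: int) -> float:
        return self.setpoint - 4.0 if hour < 6 or hour >= 22 else self.setpoint

smart = LearningThermostat()
print(fixed_thermostat(8), smart.target(8))   # 21.0 21.0
smart.observe_override(19.0)                  # the user turns it down once
print(fixed_thermostat(8), smart.target(8))   # 21.0 20.0: same input, new output

After a single override, the learning controller returns a different temperature for the same hour of the day, which is exactly the kind of behavior that can surprise users when the system offers no explanation.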
The Need for Human-Centered AI
Manufacturers of AI technologies need to ensure that their users understand how AI works—to a certain degree. Wei Xu has written about a field called explainable AI. [2] However, this field is only one part of a more comprehensive research field called human-centered AI (HAI or HCAI), which considers many other factors that determine how good or bad the user experience is for people who are interacting with an intelligent system. Trust is one of these factors. A lack of trust limits the proliferation of smart technology and, with it, AI’s potential to raise the standard of living for many people, which stems from its ability to make certain kinds of decisions better than humans can.
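To illustrate what explainable AI can mean in practice, the following sketch uses scikit-learn’s permutation importance to report which inputs most influenced a model’s predictions. The feature names and data are hypothetical, and real explainable-AI tooling goes well beyond this, but even a ranked list of influential factors can help users calibrate their trust in a system’s outputs.

# A minimal sketch of one explainable-AI technique: surfacing which inputs
# drove a model's predictions. Assumes scikit-learn; the feature names and
# data are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["age", "income", "claims_history", "region"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")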
Whenever human beings are part of an AI system that ingests their inputs, processes them, and provides outputs—in other words, in all cases where a system has not achieved full automation or it would not be desirable to do so—the interactions between humans and artificial intelligence must be carefully crafted.
However, the relationship between individual users and AI is only one aspect that you need to consider. Institutions such as Stanford University [3] have formed human-centered AI institutes to research and understand AI’s impacts—both positive and negative—not only on individuals but also on social institutions, economies, industries, and governments.
The central questions remain the same: How can we design and develop AI systems so they make communication and collaboration more effective, efficient, and enjoyable? How can AI systems augment human capabilities rather than replace humans outright? To enable users to trust machines, how can we help them to better understand the strengths and weaknesses of AI?
Ways in Which HCAI Is Similar to or Differs from HCI
Ben Shneiderman has emphasized the requirement that human-centered AI should serve human needs and, thus, put humans at the center of the experience. [4] Consequently, humans must remain in control, even in highly automated scenarios. In Shneiderman’s opinion, human control and automation are not mutually exclusive. This human-centered viewpoint is a continuation of the discipline called Human-Computer Interaction (HCI), which, in the 1970s and 1980s, enabled the broad adoption of personal computers.
However, while the HCI community has spent more than 40 years developing standards for graphical user interfaces, these standards have only limited applicability and value for AI systems. [5, 6] In fact, AI best practices may even contradict these guidelines. For example, ISO 9241-110 requires system conformity with users’ expectations, but as I mentioned earlier, without a proper understanding of an AI system’s reasoning, or conceptual model, the user’s mental model may embody inaccurate or false expectations. [7]
Another reason for mismatches between classical usability guidelines and the field of AI is that the former do not assume that the technology is a human-like actor that could pass the Turing Test. The expectations of human beings interacting with an intelligent system whose user interface is a bot or agent are noticeably higher than those for a traditional, utilitarian software tool.
Further, the HCI usability guidelines were created for graphical user interfaces, while the goal of smart systems is to facilitate human-system interaction in more natural, seamless ways that mimic human-to-human communication. Følstad and Brandtzæg’s research on chatbots demonstrates that, for such systems, proper conversation design is more critical than the design of a graphical user interface. [8]
To make natural human-AI conversations a reality, it is necessary to consider human conversational processes and conventions, as well as biases that result from differences in gender, culture, or status. [9] Beyond these conversational factors, we need to apply models and theories from human learning, reasoning, and decision making to the design of AI-driven technologies, enabling system agents and bots to be effective, trustworthy partners of their human users. The goal is to create robust, formative models and frameworks that inform reusable guidelines and checklists for the design of human-centered, AI-driven, interactive systems. While there are some initial guidelines for AI-driven systems, [10] they’re just a beginning and more work is necessary, as industry leaders agree. [11]
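As a simple illustration of one such conversational convention, the following sketch shows a bot that communicates its uncertainty and asks a clarifying question rather than acting on a weak guess. The intent classifier, confidence thresholds, and wording are hypothetical stand-ins, not part of the guidelines cited above.

# Illustrative sketch of one conversational-design convention: a bot that
# signals its uncertainty instead of guessing silently. Intents, thresholds,
# and phrasing are assumptions made for this example.
from typing import Tuple

def classify_intent(utterance: str) -> Tuple[str, float]:
    """Stand-in for a real intent classifier; returns (intent, confidence)."""
    text = utterance.lower()
    if "refund" in text:
        return "request_refund", 0.92
    if "order" in text:
        return "order_status", 0.55
    return "unknown", 0.20

def respond(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if confidence >= 0.8:
        return f"Sure, I can help with that ({intent})."
    if confidence >= 0.4:
        # Hedge and confirm rather than acting on a shaky guess.
        return f"I think you're asking about {intent}. Is that right?"
    return "I'm not sure I understood. Could you rephrase that?"

print(respond("I want a refund"))
print(respond("Where is my order?"))
print(respond("Tell me a joke"))

Surfacing confidence in this way is one small example of the kind of conversational behavior that can make an agent feel like a trustworthy partner rather than an opaque oracle.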
Although, in recent years, we’ve seen a proliferation of AI systems that try to mimic human beings—not only in terms of their intellect but also in their appearance and their ability to communicate—this approach has been met with criticism. Shneiderman [12] cautions that designing technical systems such as service robots to resemble humans could prevent designers from taking full advantage of unique computer features that have no human analog—for example, advanced sensors or data displays. He calls for a thorough analysis and understanding of when to equip AI with human-like features and when to avoid doing so. We saw an equivalent debate at the dawn of the human-machine interaction (HMI) discipline in the 1950s, when Paul Fitts published his “Men are better at, Machines are better at” list, also known as the MABA-MABA list. [13]
Companies’ Efforts to Establish AI Design Principles
As various companies have worked to advance the field of AI, many have created and published their own lists of the AI design principles that they apply. Comparing those of Microsoft [14] and Google [15] demonstrates similarities in their thinking, commitments, and human considerations, as Table 1 shows.
Table 1—Comparing Microsoft’s and Google’s AI design principles
Microsoft AI Principles

Fairness—AI systems should treat all people fairly.
Reliability and Safety—AI systems should perform reliably and safely.
Privacy and Security—AI systems should be secure and respect privacy.
Inclusiveness—AI systems should empower everyone and engage people.
Transparency—AI systems should be understandable.
Accountability—People should be accountable for AI systems.

Google AI Principles

Be socially beneficial.
Avoid creating or reinforcing unfair bias.
Be built and tested for safety.
Be accountable to people.
Incorporate privacy design principles.
Uphold high standards of scientific excellence.
Be made available for uses that accord with these principles.
Conclusion
Against the background of the democratization of AI, these principles and their public impacts are becoming even more important. As technologies such as codeless and automated machine learning enable ever broader audiences to develop AI and machine-learning algorithms, we must emphasize ethics and the need to create human-centered systems, which are two sides of the same coin. Although we won’t achieve Artificial General Intelligence (AGI) anytime soon, we may one day realize that dream through a combination of specialized AI technologies, which continue to evolve quickly and broadly. Human-centered AI must be part of this journey.
[3] Stanford Institute for Human-Centered Artificial Intelligence. “Human Impact Research Mission,” undated. Retrieved February 19, 2021.
[4] Shneiderman, Ben, Catherine Plaisant, Maxine Cohen, Steven Jacobs, Niklas Elmqvist, and Nicholas Diakopoulos. Designing the User Interface: Strategies for Effective Human-Computer Interaction, Sixth ed. Cranbury, NJ: Pearson, 2016.
[8] Følstad, Asbjørn, and Petter Bae Brandtzæg. “Chatbots and the New World of HCI.” ACM Interactions, Vol. XXIV, No. 4. July–August 2017. Retrieved February 19, 2021.
[9] Hannon, Charles. “Avoiding Bias in Robot Speech.” ACM Interactions, Vol. XXV, No. 5. September–October 2018. Retrieved February 19, 2021.
At Infragistics, Tobias leads data analytics, artificial intelligence, and machine-learning initiatives. An evangelist for user- and customer-centered design strategy, methods, and processes, he has worked in User Experience for over 20 years, leading teams, projects, and programs with the goal of creating user experiences that are meaningful, usable, and differentiated. Prior to joining Infragistics, he served as Global Director of User Experience for Corporate IT at Honeywell and held several senior R&D positions at Siemens. Tobias holds a Master’s in Psychology from the University of Regensburg in Germany and earned his PhD in the field of Usability Engineering from the University of Kassel, also in Germany. He has published more than 50 technical papers, has presented at international conferences, and teaches UX courses as an Adjunct Professor in the Master’s in Business & Science program at Rutgers University.