Today, artificial intelligence (AI) is ubiquitous: a sea change affecting all aspects of life at a rate faster and with a reach broader than anything we've experienced thus far. AI promises intelligent companions and copilots that simplify and enrich our daily activities and social interactions at work, in our community, and in our domestic life. The vision is a pervasive, active fabric of intelligence that seamlessly wraps around and through all the spaces of people's lives, ready to activate on command.
However, the path to this vision and its widespread adoption requires progressive design approaches to support its evolution over time. During the formative stages of this technology, UX designers must apply their expertise to create new user interfaces that use dynamic visual affordances and multisensory cues to guide discoverability and cooperation with fully integrated AI. Over time, these explicit cues and indicators may recede as seamless interactions take hold. But until then, the practice of visually enhancing the access to and activation of intelligence is necessary.
The Shift from Apps to Integrated Intelligence
Until recently, many AI experiences have been either standalone apps or features that users activate just like any other app or feature. But soon, AI agents will be fully integrated as intelligent services across platforms, providing more seamless functionality, reducing fragmentation, and improving user experiences. AI agents will span systems and enable data exchanges that reduce switching between apps or user interfaces. Agents will automate repetitive tasks such as syncing calendars or users’ data, and cooperative agents will adapt to user preferences and learn from user behaviors to enable personalized services. These agents will offer invisible power and functionality. But how will users become aware of the immense possibilities and power accessible to them if they are not visible?
Designing Affordances for Invisible Agents
To unlock the potential of AI, UX designers must bridge the gap between invisibility and users' awareness. For designers, this paradigm shift poses a contradictory challenge: how can we design for an invisible intelligence that operates behind the scenes while still providing clear, easy-to-recognize cues that enable users to activate its features? Striking the right balance between drawing attention and creating distraction is essential. In user interfaces, affordances play a key role in achieving this balance by offering users subtle, yet effective signals for interacting with an AI. These affordances can take the form of dynamic visual cues, auditory feedback, tactile responses, or a combination of multisensory elements that guide users without overwhelming them. By designing affordances that are both contextually appropriate and seamlessly integrated, designers can help users understand the presence and functionality of AI.
At Punchcut, we are working with leading companies in technology, automotive, healthcare, and media to help them integrate AI into their platforms, products, and services. From designing custom AI agents to exploring how to express and activate general intelligence from within their product experiences, much of our recent work has focused on how to visualize intelligence to drive better AI awareness and adoption. The principles that follow have guided our efforts along this path of evolution.
1. Enhance user interfaces beyond chat.
While conversational user interfaces have made significant strides toward making AI more accessible, they fall short in helping users fully utilize the vast potential of generative AI tools. Open-ended input fields challenge users, who must ponder what prompt to use to generate a successful outcome. Similar to the shift from open-ended, command-line interfaces to app-based graphical user interfaces (GUIs), today's generative AI experiences require richer user interfaces that visualize and surface advanced capabilities without overwhelming users. This means crafting easy-to-understand interactions, clearly communicating the capabilities of generative AI tools, and guiding users to optimally leverage these tools for their specific needs. Thoughtful UI design can enable users to tap into the true potential of AI systems, creating more valuable and personalized experiences.
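One way of surfacing capabilities beyond an open prompt field is to offer contextual capability chips that users can tap instead of composing a prompt from scratch. The following TypeScript sketch uses entirely hypothetical chip data and context names to illustrate the pattern:

```typescript
// Hypothetical capability model: each chip advertises one thing the AI can do.
interface CapabilityChip {
  label: string;          // short, user-facing action, e.g. "Summarize"
  promptTemplate: string; // the full prompt sent on the user's behalf
  contexts: string[];     // where this chip is relevant
}

const chips: CapabilityChip[] = [
  { label: "Summarize", promptTemplate: "Summarize the current document.", contexts: ["document"] },
  { label: "Draft a reply", promptTemplate: "Draft a reply to this message.", contexts: ["email"] },
  { label: "Find free time", promptTemplate: "Find a free slot for all attendees.", contexts: ["calendar"] },
];

// Surface only the chips that fit the user's current context, so advanced
// capabilities become visible without overwhelming the interface.
function chipsForContext(context: string): CapabilityChip[] {
  return chips.filter((chip) => chip.contexts.includes(context));
}
```

The key design choice is that discoverability is driven by context: the user sees a handful of relevant, tappable capabilities rather than a blank field and a blinking cursor.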
2. Engage multisensory modalities.
Human interactions are multisensory—rather than being based on a singular sense. Through education and cognitive research, we know that people have different learning modes that cross visual, auditory, tactile, and kinesthetic senses. Because multisensory modes of interaction leverage multiple senses, they greatly enhance our understanding and retention of information. Therefore, current AI user interfaces that rely on single-mode chat or single-mode voice miss the opportunity to provide flexibility and enhanced connection and support. Many predict that voice will be the dominant interface mode for agents, but this doesn't eliminate the need for visual cues and other sensory feedback. For instance, human communication frequently relies on nonverbal visual cues such as facial expressions or gestures to guide conversations and the development of relationships. Plus, in environments where text or voice isn't feasible, alternative modes are often necessary to preserve privacy and keep the interaction moving.
The evolution of AI user interfaces could dramatically improve human-machine interactions by incorporating multisensory interactions. For both human and machine, multisensory experiences enrich sensory perception and expression in ways that reflect more natural human communications and comprehension. For UX designers, this means creating natural, multisensory indicators of AI readiness and engagement. Examples include the following:
visual readiness cues—An ambient light pulse could change color to show the agent is listening.
auditory confirmation—A brief tone or voice confirmation could indicate that the AI has understood a command.
tactile engagement cues—A wearable device could vibrate slightly to inform the user that the AI has completed a task.
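The three cues above can be modeled as a single mapping from agent state to coordinated multisensory output. This is a minimal sketch with invented state names and cue values, not a prescription for any particular platform:

```typescript
type AgentState = "idle" | "listening" | "understood" | "taskComplete";

// Each state maps to coordinated visual, auditory, and haptic cues,
// mirroring the examples above: a light pulse while listening, a tone
// on understanding, a gentle vibration on task completion.
interface MultisensoryCue {
  visual: string | null; // e.g. an ambient light behavior
  audio: string | null;  // e.g. a brief confirmation tone
  haptic: string | null; // e.g. a wearable vibration pattern
}

const cues: Record<AgentState, MultisensoryCue> = {
  idle:         { visual: null,          audio: null,        haptic: null },
  listening:    { visual: "pulse-blue",  audio: null,        haptic: null },
  understood:   { visual: "glow-steady", audio: "soft-tone", haptic: null },
  taskComplete: { visual: "fade-out",    audio: "chime",     haptic: "short-vibration" },
};

function cueFor(state: AgentState): MultisensoryCue {
  return cues[state];
}
```

Defining the mapping in one place keeps the senses in sync: a state change always triggers its full set of cues, rather than visual, audio, and haptic feedback drifting apart across features.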
3. Drive awareness through dynamic visuals.
In this new world of digital intelligence, UX designers need to incorporate dynamic indicators of an AI's availability and readiness to engage. A myriad of use cases require designers to create visual cues that build awareness, exemplify listening, demonstrate proactivity, offer different levels of activation, and confirm successful interactions to build emotional connection and trust.
The need to visualize intelligence emerged with early digital assistants and smart appliances in which voice was the dominant interface modality. Moving visual elements complemented these user experiences, spanning different contexts and indicating the stages of listening, processing, and functioning. For instance, Alexa, Google, and Microsoft utilized various visualizations in the form of undulating waveforms or rings.
Other smart appliances also use visuals to bring intelligence to life. Google Nest Hub displays animations on the screen and also uses sounds such as beeps and voice confirmations. Tesla's Autopilot uses on-screen graphics to show the user's surroundings, and alerts inform users of obstacles through steering-wheel vibrations. The Apple Watch's Siri integration shows waveforms when processing commands, and chimes indicate its readiness to engage. LG refrigerators have a visual display that shows food inventory, and voice assistants confirm shopping or recipe suggestions.
The important goal is to strike the right balance between awareness and distraction. In certain scenarios, more visual emphasis and contrast is necessary, while in other moments subtler, more understated visualizations can help create peripheral awareness or provide ambient feedback. Getting not only the visual forms right but also their tone, intensity, and motion is an art that requires progressive refinements.
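One way to reason about that balance between awareness and distraction is to scale a cue's prominence by how much the moment actually warrants the user's attention. The context signals below are hypothetical, chosen only to illustrate the judgment call:

```typescript
// Hypothetical context signals that influence how prominent a cue should be.
interface CueContext {
  urgency: number;        // 0 (ambient status) to 1 (needs action now)
  userIsFocused: boolean; // user is mid-task, so avoid interruption
}

type Intensity = "ambient" | "subtle" | "prominent";

// Urgent moments earn contrast and motion; everything else stays
// peripheral, creating ambient awareness rather than distraction.
function cueIntensity(ctx: CueContext): Intensity {
  if (ctx.urgency > 0.7) return "prominent";
  if (ctx.userIsFocused) return "ambient";
  return "subtle";
}
```

The thresholds here are placeholders; in practice they would come from the progressive refinements of tone, intensity, and motion that the paragraph above describes.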
4. Craft abstract visuals rather than literal representations.
A common misperception of AI design is an insistence on mimicking human appearance and behavior in the form of realistic avatars, digital twins, and chatbots. In reality, such user experiences often feel inauthentic, uncanny, and hollow. People don't want to be fooled. They want utility, delight, and relatability. People don't need AIs to pose as humans to form deep personal relationships with them that are built on real emotions. We commonly form such relationships with pets, stuffed toys, fictional characters, phones, cars, and robots.
Our user research into AI reveals that users have a strong preference for abstract or lightly figurative visuals that feel approachable, without veering into literal, human-like representations that users often find uncanny or inauthentic. Abstract visuals such as flowing waveforms or minimalistic geometric shapes evoke intelligence and adaptability, while leaving room for the user’s imagination and personalization. These designs must avoid the pitfalls of anthropomorphism, providing flexibility across diverse contexts and cultures, while fostering trust and engagement. By focusing on open-ended, nonliteral expressions, UX designers can authentically convey an AI’s capabilities and create user experiences that are universally relatable and even inspiring.
On a recent project, we explored a range of potential identities for AIs—some more abstract; others more representational. We looked at common archetypes and explored different character traits and qualities. Then we tested these different identities with users to understand how they responded to them and how comfortable they felt. We wanted to learn how their expectations for the AI shifted. In the end, participants preferred a more abstract representation, albeit one that could become more expressive through the use of color and motion and remain quite flexible across states, while still staying recognizable. Small expressions still leveraged a human metaphor: the design had a sort of gaze and a breath and could shake its head, in its own way. However, it remained mostly abstract, using expressive gestures to convey personality.
5. Express distinctions for brand differentiation.
AI systems must reflect their brand’s identity, while enabling users to navigate options and understand functionality. Users rapidly become familiar with the functionality that is associated with their AI platform of choice. However, prominent AI-assistant platforms often lack visual distinction. Many utilize similar generic shapes such as waveforms, sparkles, or stars, making it difficult for users to discern the brand or entity with which they’re interacting. This homogeneity creates a bland user experience and dilutes brand identity in the AI space. AIs require visual differentiation to establish brand recognition and user loyalty. These visual expressions should be
distinctive—Clearly identifiable and associated with a specific brand
consistent—Maintained across all platforms and devices
complementary—Integrated seamlessly within the overall operating-system experience
Think of how car manufacturers use signature grilles, headlights, and body lines to create a distinctive visual identity. AI assistants need a similar approach, utilizing unique visual cues, animations, and even micro-expressions to convey their brand personality.
In the increasingly crowded AI landscape, brand differentiation is no longer a luxury; it’s a necessity. Companies must move beyond generic avatars and embrace a visual language that truly reflects their brand identity, creating a more engaging and personalized user experience. But visuals alone are not enough; personality and character can also be memorable qualities of voice, tone, and auditory interactions with an agent.
6. Build trust with transparent feedback.
Trust is critical to AI adoption. Users must understand what the AI system is doing, why, and how it benefits them. Transparent feedback loops ensure that users feel well informed and in control. This is where multisensory cues become vital, going beyond simple text-based explanations to provide a richer understanding of the AI's processes. The following examples show how UX designers can add visual elements to multisensory feedback:
Task progress indicators give users context and set expectations for completion through dynamic progress bars, spatial metaphors, and color shifts that express a tangible sense of movement. They reflect different stages of processing to drive confidence in the AI’s performance.
Explanation features help users understand the logic behind an AI’s responses, which in turn helps users formulate better prompts or interaction commands. Tools such as ChatGPT’s why I responded this way feature explain the reasoning behind outputs. Other examples to consider are the visualization of decision trees or animated explanations that highlight what factors the AI considered and how it arrived at a decision.
Error transparency communicates when and why a command failed and offers actionable suggestions. This transparency reassures users, fosters confidence, and reinforces the AI’s role as a trusted assistant. This is where empathetic visuals or distinctive sounds can subtly signal errors, warnings, or successful completion.
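The three feedback patterns above—progress, explanation, and error transparency—can be folded into one status model that the interface renders as the appropriate cue. This sketch uses invented field names and messages purely for illustration:

```typescript
// Hypothetical task status emitted by an AI agent, covering the three
// feedback patterns above: progress, explanation, and error transparency.
interface AgentFeedback {
  stage: "queued" | "working" | "done" | "error";
  progress: number;     // 0..1, drives a progress bar or color shift
  explanation?: string; // why the AI responded the way it did
  errorHint?: string;   // actionable suggestion when stage is "error"
}

// Turn raw status into a user-facing message, so failures always
// arrive with a next step rather than a dead end.
function describe(fb: AgentFeedback): string {
  switch (fb.stage) {
    case "queued":  return "Waiting to start…";
    case "working": return `Working… ${Math.round(fb.progress * 100)}% complete`;
    case "done":    return fb.explanation ?? "Done.";
    case "error":   return `Something went wrong. ${fb.errorHint ?? "Please try again."}`;
  }
}
```

Because the error branch requires a hint or falls back to a suggestion, the model structurally enforces the reassurance this section calls for: users never see a failure without an actionable path forward.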
Bringing Intelligence to Life
As AI becomes an integral part of our daily lives, the journey toward seamless integration is as much about design innovation as it is about technological advancement. During AI’s formative stages, UX designers must leverage dynamic visuals and multisensory expression to bridge the gap between the AI’s invisible power and its practical, everyday utility. By fostering better awareness and guiding users’ interactions, we can train them in the use of the AI, drive its adoption, and build long-term loyalty.
Over time, as consumers grow more familiar with and trusting of AIs, user-interface affordances will evolve to focus on more subtle indicators and easy-to-understand controls, bringing intelligence to life in ways that are both magical and deeply meaningful.
Credits–I want to credit and thank my team at Punchcut, including Nate Cox, Akshat Srivastava, and Nick Munro, for contributing valuable insights and content for this column.
Ken was a co-founder of Punchcut and has driven the company's vision, strategy, and creative direction for over 20 years—from the company's inception as the first mobile-design consultancy to its position today as a design accelerator for business growth and transformation. Punchcut works with many of the world's top companies—including Samsung, LG, Disney, Nissan, and Google—to envision and design transformative product experiences in wearables, smart home Internet of Things (IoT), autonomous vehicles, and extended reality (XR). As a UX leader and entrepreneur, Ken is a passionate advocate for a human-centered approach to design and business. He believes that design is all about shaping humans' relationships with products in ways that create sustainable value for people and businesses. He studied communication design at Kutztown University of Pennsylvania.