In early 2025, the buzzword dominating corporate circles is agentic artificial intelligence (AI). Many organizations are creating new departments and special initiatives around it, and nearly every article about AI includes the term. But what exactly is agentic AI, and how should we, as UX designers, approach it?
Agentic AI represents a significant evolution from traditional AI models. Instead of merely automating individual tasks, agentic AI systems can operate semi-independently, executing multistep work that requires decision-making, analysis, and adaptation. This shift lets businesses assign AI agents to whole collections of functions, and even entire roles, with minimal human intervention.
But the transformation goes beyond automation. It signals a fundamental shift in computing itself. We are progressing from a world of fixed, app-based user experiences to a dynamic, learning-driven ecosystem in which AI agents proactively assist and respond to users and evolve over time.
At Punchcut, we have worked extensively with clients to help them integrate AI into their services, designing new relationships between humans and AI agents—both basic and advanced. Through primary research, we have explored the evolving dynamics of human-AI interactions, identifying some critical questions such as the following:
How do AI relationships evolve over time?
What level of humanness do people need or expect in AI interactions?
Where do AI agents provide the most value?
How do AI agents build trust?
Through our research and experience designing AI-driven user experiences, we have identified key insights about crafting meaningful relationships between people and AI agents. In this column, I’ll share some of these insights.
1. Plan Around People, Not Agents
Prioritize real value over novelty by applying human-centered research to determine optimal points for automation.
Just as the early days of mobile software development drove an app-for-everything mindset, UX designers now risk falling into an agent-for-everything paradigm, potentially leading to fragmented or redundant AI experiences. First and foremost, we must plan agentic AI experiences around people. By beginning with human-centric research, we can identify opportunities for agent augmentation to add welcome value. Agents are not the right solution for everything. Thus, it is critical for us to discern when and where we should use agents within a user experience, whether for employees or consumers.
Recent technology failures have pushed novelty over utility and automation over humanity. But consumer reactions and user research indicate that users clearly prefer AI technologies that enhance human autonomy and creativity over those that fully automate tasks at the expense of human involvement. Marketing missteps forced both Apple and Google to pull significant ad spots whose messages elevated technology at the expense of human relationships and capabilities. These missteps highlight a prevalent consumer sentiment: while people welcome AI as a tool to augment their human capabilities, they resist its use in ways that could diminish human autonomy and creativity.
Prioritize Value over Novelty
What we learn from research should ground AI agent experiences in human insights and empathy. We must take care to design AI agents that parse user intentions and modulate the degree and nature of the assistance they provide at each step of an interaction. By conducting generative user research, we can understand when, where, and why people favor autonomy and control versus assistance and convenience across a user experience. Consider using artifacts such as autonomy service blueprints to represent human intentions and machine interactions over time. With the resulting insights, we can build more cooperative AI agent experiences that balance explicit and implicit assistance.
2. Learn from Human Relationships
Learn from our existing human relationships, but avoid literal representational and emotional anthropomorphism.
While AI may be new and is changing rapidly, people are not. We already know a lot about them. People have deeply ingrained social instincts, emotional patterns, and relational expectations that we’ve studied for centuries. As UX designers, we can leverage our understanding of human relationships in crafting AI agent experiences that feel natural, engaging, and easy to understand—without straying into unrealistic or uncanny anthropomorphism.
We’ve found that people tend to project a lot of themselves onto their relationships with AI, so we can go quite far in defining how humans would interact with AI agents simply by focusing on human-to-human relationships. Thus, designing AI interactions can draw heavily on what we already know about human relationships rather than requiring us to reinvent the wheel. By examining how people form trust, establish familiarity, and navigate their relationships with one another, we can better predict how they will receive and integrate AI agents into their daily lives.
Use Archetypes to Shape AI Agent Personalities
When we define the nature of AI relationships, we often draw from established human archetypes—patterns of character and behavior that have been deeply ingrained in human storytelling for centuries. These archetypes help people quickly understand an AI agent’s role and set expectations for its personality and functionality.
For example, consider the following archetypes:
The Sage (Spock, Yoda, or Gandalf)—This AI provides logic, wisdom, and factual guidance without unnecessary emotional engagement.
The Caregiver (Mary Poppins, Mr. Rogers, or Baymax)—This nurturing AI supports, encourages, and provides comfort.
The Innocent (Buddy the Elf, Wall-E, or R2-D2)—This AI is playful, curious, and learns alongside the user.
The Challenger (Sherlock Holmes, Tony Stark, or House)—This AI pushes back, questions assumptions, and challenges the user’s thinking.
These archetypes do more than just shape the AI agent’s personality—they also define the agent’s interaction model. For example, a Sage AI might offer concise, data-driven responses, while a Caregiver AI would likely provide warmth and encouragement. Understanding these patterns lets us craft AI personalities that are both natural and effective, without falling into the trap of excessive anthropomorphism.
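To make this concrete, here is a minimal TypeScript sketch of how an archetype could drive an agent's interaction model. The archetype names come from the list above; every type, field, and value in the code is a hypothetical illustration, not a prescribed implementation.

```typescript
// A hypothetical mapping from archetype to interaction-model parameters.
// All names and values here are invented for illustration.

type Archetype = "sage" | "caregiver" | "innocent" | "challenger";

interface InteractionModel {
  tone: "neutral" | "warm" | "playful" | "direct";
  verbosity: "concise" | "moderate" | "expansive";
  challengesAssumptions: boolean; // Does the agent push back on the user?
  offersEncouragement: boolean;   // Does the agent add supportive framing?
}

const interactionModels: Record<Archetype, InteractionModel> = {
  sage:       { tone: "neutral", verbosity: "concise",  challengesAssumptions: false, offersEncouragement: false },
  caregiver:  { tone: "warm",    verbosity: "moderate", challengesAssumptions: false, offersEncouragement: true  },
  innocent:   { tone: "playful", verbosity: "moderate", challengesAssumptions: false, offersEncouragement: true  },
  challenger: { tone: "direct",  verbosity: "concise",  challengesAssumptions: true,  offersEncouragement: false },
};

// A response generator can read these parameters instead of hard-coding
// personality into every reply template or prompt.
function styleHints(archetype: Archetype): string {
  const m = interactionModels[archetype];
  return `Respond in a ${m.tone} tone and keep answers ${m.verbosity}.` +
    (m.challengesAssumptions ? " Question the user's assumptions where useful." : "") +
    (m.offersEncouragement ? " Add supportive, encouraging framing." : "");
}
```

The benefit of such a separation is that designers can tune an agent's personality in one place without rewriting its conversational logic.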
Avoid Making AI Agents Too Human
While these archetypes can help users quickly understand and relate to an AI agent, it’s important to avoid making an AI too human-like. Overly emotional or humanoid representations could lead to unrealistic expectations and even disappointment when an AI agent inevitably falls short of true human capabilities.
A common trend in AI agents is an insistence on replicating human appearance and behavior in the form of realistic avatars, digital twins, and chatbots. Some product makers believe that the more literal the human simulation, the stronger the connection it forges with consumers.
In reality, such literal user experiences often feel inauthentic, uncanny, and hollow. Our research has shown that people don’t want to be fooled by digital buddies. They want utility, delight, and relatability. People don’t need AI agents to pose as humans to form deep personal relationships with them that are built on real emotions. We commonly form such relationships with pets, stuffed toys, fictional characters, phones, cars, and robots.
Instead of creating AI agents that pretend to have emotions, we should focus on agents that mirror human communication styles and responsiveness in ways that feel familiar to users yet remain authentic to the AI’s nature. Examples might include the following (see the sketch after this list):
using natural conversational patterns such as turn-taking and contextual memory
adapting tone and style in response to the user’s behavior
offering reciprocal engagement by acknowledging the user’s input and adjusting its responses accordingly
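As a rough illustration of these patterns, consider the following TypeScript sketch of a conversational loop that keeps contextual memory and adapts its tone to the user. It is a minimal sketch under assumed requirements; the class, function, and tone heuristic are all invented for illustration.

```typescript
// Hypothetical sketch of turn-taking, contextual memory, and tone adaptation.
// The tone heuristic is deliberately naive; a real system would use a
// classifier, but the interaction structure is the point here.

interface Turn {
  speaker: "user" | "agent";
  text: string;
}

class ConversationContext {
  private history: Turn[] = [];

  remember(turn: Turn): void {
    this.history.push(turn);
  }

  // Infer tone from the user's most recent wording.
  inferTone(): "formal" | "casual" {
    const lastUser = [...this.history].reverse().find(t => t.speaker === "user");
    return lastUser && /\b(hey|thanks|lol)\b/i.test(lastUser.text) ? "casual" : "formal";
  }
}

function respond(ctx: ConversationContext, userText: string): string {
  // Turn-taking: record the user's turn before replying.
  ctx.remember({ speaker: "user", text: userText });
  // Reciprocal engagement: acknowledge the input and match the user's tone.
  const reply = ctx.inferTone() === "casual"
    ? `Got it! Working on "${userText}" now.`
    : `Understood. I will proceed with: ${userText}.`;
  ctx.remember({ speaker: "agent", text: reply });
  return reply;
}
```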
In short, an AI agent doesn’t need to seem human; it just needs to interact in ways that make sense within human expectations. By studying real human relationships and leveraging these timeless archetypes, we can craft AI agents that feel natural and trustworthy and integrate seamlessly into people’s lives.
3. Build Trust with Utility
Even the most intimate human relationships are built on utility. We earn trust by nailing the basics first.
One of the most important things we’ve found in our research on human-AI relationships is that they take time to develop. In our relationships, we must earn intimacy and trust, and we earn them through utility. AI agents must deliver practical value to people, and they must nail the basics before they can develop deeper relationships. Unlike traditional software or static user interfaces, AI agents are dynamic and constantly evolving, so users don’t evaluate them on the basis of a single interaction; they assess them over time.
Plan for the Stages of Agent Relationships
Just as human relationships go through distinct stages—assessing compatibility, building connection, and deepening attachment—people also develop their relationships with AI agents over time. In human relationships, we don’t establish trust through grand gestures but through consistent, reliable interactions. The same is true for AI agents. An agent must first prove its utility before attempting to build a deeper connection with users. Therefore, it must handle simple, practical tasks flawlessly before moving on to more complex, high-stakes responsibilities.
Answer Key Relationship Questions as a Basis for AI Design
To design a trustworthy AI agent, we must answer the following questions:
How do relationships form and deepen over time?
What are the ideal qualities of a trustworthy, long-lasting relationship?
What are the essential cues that signal reliability, warmth, or competence?
For example, an AI must first function as an assistant—for example, by sending a text or setting a reminder—before users will trust it to act as an agent—for example, planning a vacation or managing workflows. Only once an AI has established its reliability in these domains can it evolve into a companion—for example, offering coaching or emotional support. Over time, as the AI agent proves its reliability, respect for privacy, and ability to mirror human needs, the relationship solidifies into something richer and more rewarding.
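One way to think about this progression in implementation terms is to gate an agent's role on demonstrated reliability. The following TypeScript sketch uses the assistant, agent, and companion stages described above; the trust metrics and thresholds are assumptions for illustration only.

```typescript
// Hypothetical sketch: unlock higher-stakes roles only after the agent
// has proven itself on the basics. All thresholds are invented.

type Stage = "assistant" | "agent" | "companion";

interface TrustRecord {
  tasksCompleted: number; // Simple tasks done correctly
  tasksFailed: number;    // Tasks the agent got wrong
}

function successRate(t: TrustRecord): number {
  const total = t.tasksCompleted + t.tasksFailed;
  return total === 0 ? 0 : t.tasksCompleted / total;
}

function currentStage(t: TrustRecord): Stage {
  // Deeper roles require both more experience and higher reliability.
  if (t.tasksCompleted >= 50 && successRate(t) >= 0.95) return "companion";
  if (t.tasksCompleted >= 10 && successRate(t) >= 0.9) return "agent";
  return "assistant";
}
```

The exact thresholds matter less than the principle: the system should never offer companion-level intimacy before it has earned assistant-level trust.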
Balance Emotional Intelligence with Practicality
While emotional intelligence is a crucial component in the design of affective AI agents, jumping to overly emotional or intimate references too early could create discomfort or even mistrust. People are cautious in forming deeper relationships with AI agents. If an AI moves too quickly—expressing too much warmth, personal familiarity, or sentimentality before it has proven its reliability—the relationship could feel forced or even unsettling.
For example, a smart assistant that suddenly began engaging in empathetic conversation before mastering basic scheduling tasks might feel insincere. Likewise, an AI agent that acted too human-like too soon could raise user expectations that it isn’t equipped to fulfill, leading to frustration or abandonment.
Instead, an AI agent should establish emotional depth through utility. Once users learn to trust the AI’s functionality, they might naturally begin attributing more human-like qualities to the agent—without the AI needing to overcompensate with artificial warmth.
4. Tailor Interactions to Deepen Bonds
To establish real connection and deliver personalized experiences, shape AI interactions to the user and the context.
Personalized interactions make an AI agent feel more like a true partner. AI experiences should adapt to the user’s behaviors, preferences, and context to create meaningful engagement. The power of a great agent is similar to that of a good partner: it is reliable, remembers what the user cares about, and tailors its interactions so the user feels genuinely supported.
When people begin forming close relationships, whether with friends, partners, or even service providers, they invest time in teaching those people their specific likes and dislikes. In human relationships, it can be frustrating to start over with someone new and explain your preferences from scratch. The same applies to AI agents: if users feel they are constantly reteaching the system, they are more likely to disengage.
A well-designed AI agent should be able to do the following:
Remember personal details, without being invasive.
Adapt to the user’s changing needs over time.
Anticipate the user’s preferences based on past interactions.
Use context to shape responses—for example, using different tones for work versus personal interactions.
For instance, if a user regularly asks an AI assistant to summarize long email messages and highlight action items, the AI should start doing this proactively, not wait for the user to request it every time.
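A sketch of how such proactivity could emerge follows; the counter-based trigger and the threshold are assumptions for illustration, not a prescribed design.

```typescript
// Hypothetical sketch: notice a repeated request and offer to handle it
// proactively. The threshold of three is an invented cutoff.

const requestCounts = new Map<string, number>();
const PROACTIVE_THRESHOLD = 3;

function recordRequest(kind: string): string | null {
  const count = (requestCounts.get(kind) ?? 0) + 1;
  requestCounts.set(kind, count);
  if (count === PROACTIVE_THRESHOLD) {
    // After repeated identical requests, suggest automating the task,
    // leaving the user in control of the decision.
    return `You've asked me to ${kind} ${count} times. Should I do this automatically from now on?`;
  }
  return null;
}

// For example, each morning the user asks for an email summary:
recordRequest("summarize long emails and highlight action items");
```

Note that the agent asks before automating, which keeps the user's autonomy intact, consistent with the first principle above.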
Move Beyond Language to Actions
Conveying genuine warmth in AI interactions isn’t just about using friendly language. It’s about the actions the AI agent takes to show that it understands and cares about the user. The AI agent can convey warmth through knowledge, by remembering past interactions and anticipating the user’s needs; through care, by proactively helping the user without being asked to do so; and through vulnerability, by acknowledging its limitations and improving over time.
For example, an AI agent that reminds the user about an important event, not because it was scheduled in the user’s calendar but because it recognizes the event’s emotional significance, feels more like a thoughtful companion than a generic assistant. Similarly, an AI that admits it doesn’t know something but offers to learn or improve feels more authentic than one that simply deflects and disappoints the user.
Conclusion: Design for an Agent-Based Future
The rapid advancement of AI technology has ushered in a new era of agentic AI, in which AI systems are capable of independent decision-making and action. This shift necessitates a renewed focus on human-centered UX design to ensure that the behaviors of autonomous systems align with human values, needs, and goals. By applying design guidelines, iterative research and testing, and human-centered insights, you can create products and services that amplify rather than replace human autonomy, even as you employ AI agents.
Credits—I want to credit and thank my team at Punchcut, including Jodi Burke, Nate Cox, and Nick Munro, for contributing valuable insights and content for this article.
Ken was a co-founder of Punchcut and has driven the company’s vision, strategy, and creative direction for over 20 years, from the company’s inception as the first mobile-design consultancy to its position today as a design accelerator for business growth and transformation. Punchcut works with many of the world’s top companies, including Samsung, LG, Disney, Nissan, and Google, to envision and design transformative product experiences in wearables, smart-home Internet of Things (IoT), autonomous vehicles, and extended reality (XR). As a UX leader and entrepreneur, Ken is a passionate advocate for a human-centered approach to design and business. He believes that design is all about shaping humans’ relationships with products in ways that create sustainable value for people and businesses. He studied communication design at Kutztown University of Pennsylvania.