Joanne Z. Tan, AIXD™ (AI Experience Design), thought leadership brand strategist, on AI agents, digital humans, virtual influencers, deepfakes, and brand damage.

Agentic AI, Digital Humans, Virtual Influencers: Deepfake, Brand Damage, and Preventive Actions

AI agents, “digital humans,” and “virtual influencers” have the potential to make or break the relationship between organizations and their customers. Implementing these technologies requires a focus on human expectations and human experience.

Without AIXD™, AI personalities, AI avatars, AI-driven identities, synthetic brand representatives, and AI-powered personas can lead to deepfakes, brand damage, reputational risk, and the loss of brand trust.

“The agentic AI age is already here,” reports MIT professor Sinan Aral. AI agents are already being deployed across the economy “at scale” to perform a wide range of tasks. Meanwhile, Nvidia CEO Jensen Huang predicts that AI agents will create a “multi-trillion dollar opportunity” for industries from medicine to software engineering. 

Yet even companies on the cutting edge “don’t fully grasp how to use AI agents to maximize productivity and performance,” while understanding of their social implications is “nascent, if not nonexistent,” according to Professor Aral. In other words, organizations have a long way to go to understand how to use agentic AI – or whether to use it at all.

That is even more true of the growing use of “digital humans” and AI avatars – visual representations of real or fictitious people. These increasingly lifelike characters have moved beyond video games into roles in customer service, healthcare, and more. Without the right policies and guardrails in place, they risk straying into “deepfake” territory, damaging brands and reputations.

The key to avoiding pitfalls and unlocking the potential of these technologies is AIXD™, or “AI experience design.” AIXD™ is the process of designing an AI-enabled, AI-native, AI-led user or customer experience. It is based on human-centered values, workflows, products, and services. AIXD™ begins by collecting data about what human users want and need – not what AI developers think they want.

This article examines what agentic AI really means, its uses, and its risks. We then investigate the world of AI avatars and digital humans, which can magnify both risks and opportunities. 

👉 To watch this as a 13-min video

👉 To listen as a 13-min podcast

👉 Subscribe to our FREE Newsletter for more insights.

What is agentic AI?

There is no generally accepted definition of agentic AI, according to the MIT Sloan School of Management. One useful idea to keep in mind is that AI agents aren’t confined to the digital world: “Agents can actually take actions that change things in the physical world,” according to Professor Aral.

Artificial intelligence first came to broad public attention with generative AI (“Gen AI”), which can generate text, images, and video based on human prompts – most often typewritten instructions.

“AI agents go further by acting and making decisions the way a human might,” according to John Horton of MIT. They harness available tools such as APIs (application programming interfaces) to interact with humans and other agents, access and use the internet, send and receive money, and perform many other tasks.

How do humans and AI agents work together?

Research from MIT shows that humans and AI agents can complement each other when the technology is implemented thoughtfully. AI agents can help humans by sifting through mountains of data and by performing common tasks without tiring. Humans excel at adapting to novel situations, interpreting contextual cues, and building relationships – areas where AI agents struggle.

One way to enhance human–AI agent collaboration, according to Professor Aral of MIT, is to give the AI agents “personalities” that complement their teams. In a large-scale experiment, Aral and colleagues found that giving AI agents the proper traits “led to better performance, productivity, and teamwork outcomes.” But care is required, as AI “personalities” can clash with human ones.

One big question the research does not address is how to adapt AI agent “personalities” for interactions with the broader public. The MIT research focused on small working teams, not on the broad range of problems and personalities we encounter in everyday life.

Will a “pushy” AI agent lead to better customer interactions, or will it put customers off? Will a “polite” AI agent fail to flag problems in order to “please” customers? Organizations need to identify and plan for these challenges through AIXD™, which we discuss below.

Creating a human-centered approach to AI

Another challenge is ensuring that AI agents behave in ways that advance the organization’s mission and vision. The decision-making processes of AI agents are poorly understood and pose risks to organizations unless the agents are carefully supervised and monitored. As Professor Aral puts it, “You have to make sure the agentic decision making is aligned with a human centered decision process.”

These are a few of the risks to consider, beyond simple errors and “hallucinations”:

Unethical behavior: AI agents lack human empathy or the norms that define acceptable and ethical behavior. Whether based on faulty assumptions or a desire to solve a problem quickly, AI agents can behave in biased and inconsistent ways. “You need to be able to explain business decisions and consistently apply the same standards to every case,” according to Professor Aral. 

Security and Data: AI agents are gaining access to more and more systems and datasets. As they do, the risks of data loss and hacking increase. Organizations need a strong data policy, including “robust permissions-based systems,” according to the MIT article.
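A “robust permissions-based system” can take many forms; the sketch below is one minimal, hypothetical illustration (all names – `PERMISSIONS`, `authorize`, `run_tool` – are invented for this example, not drawn from any real framework). The key design choice is deny-by-default: an agent can only perform actions it has been explicitly granted.

```python
# Illustrative sketch of deny-by-default, permission-scoped tool access
# for AI agents. Every name here is hypothetical.

PERMISSIONS = {
    "support-agent": {"read_faq", "create_ticket"},
    "billing-agent": {"read_invoice", "issue_refund"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Return True only if the agent is explicitly granted the action."""
    return action in PERMISSIONS.get(agent_id, set())

def run_tool(agent_id: str, action: str) -> str:
    """Execute an action only after the permission check passes."""
    if not authorize(agent_id, action):
        # An unlisted agent/action pair is rejected by default.
        raise PermissionError(f"{agent_id} may not perform {action}")
    return f"{action} executed for {agent_id}"
```

In practice, a production system would also scope data access, rotate credentials, and log every grant and denial; the point of the sketch is simply that agent capabilities should be enumerated and checked, never open-ended.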

Accountability and Monitoring: Organizations need to decide who bears responsibility when AI agents make mistakes or cause harm, according to MIT Professor Kate Kellogg. “As you move agency from humans to machines, there’s a real increase in the importance of governance and infrastructure to control and support agentic systems,” she says.
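One concrete piece of the governance infrastructure Professor Kellogg describes is an audit trail that ties every agent action to an accountable human. The sketch below is a hypothetical illustration (the function and field names are invented for this example), showing the idea of recording who approved what, and when, so responsibility can be traced after the fact.

```python
# Hypothetical sketch of an audit trail for agent actions.
# Field names and functions are illustrative, not from a real system.

import json
import time

audit_log = []  # in production this would be append-only, durable storage

def record_decision(agent_id: str, action: str, approved_by: str, outcome: str) -> str:
    """Append a record linking an agent action to an accountable human,
    and return the record as JSON for downstream monitoring."""
    entry = {
        "timestamp": time.time(),
        "agent": agent_id,
        "action": action,
        "approved_by": approved_by,  # the accountable human owner
        "outcome": outcome,
    }
    audit_log.append(entry)
    return json.dumps(entry)
```

A real deployment would write to tamper-evident storage and feed alerts to the monitoring team, but even this minimal shape answers the governance question: for every agent action, there is a named human owner on record.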

What are digital humans?

Several steps beyond text- and prompt-based systems are AI agents that feature visual representations of humans: AI avatars and digital humans. As models become more sophisticated, they can cross into “deepfake” territory (including “virtual influencers”), raising ethical and reputational risks. Organizations should, at minimum, clearly identify when customers are viewing or interacting with AI-generated content and models.
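The disclosure guardrail above can be enforced mechanically. As a minimal, hypothetical sketch (the wrapper name and label text are invented for this example), every AI-generated customer-facing message can pass through a function that prepends a clear label before it is displayed:

```python
# Minimal sketch of an AI-content disclosure wrapper.
# The label text and function name are hypothetical.

AI_DISCLOSURE = "[This response was generated by an AI assistant.]"

def with_disclosure(ai_reply: str) -> str:
    """Prefix an AI-generated, customer-facing message with a clear label."""
    return f"{AI_DISCLOSURE}\n{ai_reply}"
```

Routing all outbound AI content through a single chokepoint like this makes the disclosure policy auditable: if a message reached a customer, it carried the label.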

AI avatars. AI avatars have been described as “digitally created personalities” brought to life by AI. They are designed to mimic the behavior and expressions of human beings on computer and phone screens.

AI avatars are more sophisticated than simple animations. They may respond to text and voice prompts, and some may be able to interpret facial expressions. They can engage in conversations and adjust their responses to fit the situation.

Digital humans. More advanced than AI avatars are “digital humans,” which are “made to look and act like actual people and are intended to communicate in ways that are both natural and human,” according to author Ren Yamaguchi.

In addition to the features of AI avatars, digital humans:

  1. Are capable of “lifelike” gestures and facial expressions;
  2. Pay attention to both verbal communication and non-verbal cues;
  3. Modify answers in response to context and non-verbal cues to make discussions feel more personalized;
  4. May be digital recreations of real people or entirely fictitious.

The technologies to create AI avatars and digital humans have been widely available since at least June 2024, when NVIDIA released its Avatar Cloud Engine (“ACE”). Tools like NVIDIA’s ACE use technologies including machine learning, natural language processing, speech synthesis, computer vision, and 3D rendering to produce increasingly realistic results.

Digital humans and AI avatars have been used in customer service and support, as virtual assistants, in telemedicine and healthcare, as interactive characters in video games, and as virtual “influencers” and even keynote speakers.

While digital humans and AI avatars may improve customer experience in some situations, organizations must take care not to degrade customer experience or damage their reputations. In an age of “deepfakes” and “AI slop” pretending to be original content, the danger of reputational harm is very real.

How can AIXD™ help organizations navigate the new landscape?

Agentic AI and digital humans are transforming customer relationships – the ways organizations communicate with and respond to their users and customers. The technologies aren’t good or bad in and of themselves. How they are implemented makes all the difference. Organizations can either broaden their appeal or alienate their user base. It all depends on how they respond to customer wants, needs, and expectations.

Organizations can start by asking these questions:

  1. Will the AI agent or digital human solve a problem from the perspective of a human end user?
  2. Will these AI tools lead to satisfaction and enjoyment for the human end users?
  3. Will the tools free humans to do more creative or complex tasks?
  4. Will they make human life better?

Start with a problem. Too many companies start with the technology – whether or not it’s useful or appropriate. FOMO leads them to put AI into products and processes without considering how it will be used or whether it answers a human need.

Measure and test. Starting with a specific problem also allows organizations to measure relevant data, which reveals whether the proposed solution is making a real difference in the lives of real humans.

Iterate and improve. The best products and services don’t emerge fully formed. They go through a process of testing, feedback, and improvement. Applying the lessons from testing lets designers improve their prototypes until they meet human needs and wants.

If you would like to learn more about how AIXD™ can help your organization navigate the changing landscape of agentic AI and digital humans, please contact us at AIXD.world or 10PlusBrand.com. Thank you.

👉 Subscribe to our FREE Newsletter for more insights on AI agents for AI search, SOM (Share of Model), and AI experience design.

About the author, Joanne Z. Tan, Brand Strategist, Thought Leadership Coach

Joanne Z. Tan is the Founder & CEO of 10 Plus Brand, Inc. A globally recognized brand strategist, thought leadership coach, content & branding expert, and speaker, she helps founders, CEOs, executives, board members, leaders, entrepreneurs, and organizations decode their Brand DNA and elevate merely successful businesses into powerful brands in the AI age. Trained in law and business, Joanne received a liberal arts education at Brandeis University before earning a law degree. Her coaching emphasizes comprehensive strategies, business modeling, multidisciplinary thought leadership, high-authority content creation, brand building, culture, GTM, user experience design, AI native brand architecture™, and AIXD™ (AI experience design). A former journalist and award-winning photographic artist, Joanne is also a poet, writer, and avid wilderness backpacker.

© Joanne Z. Tan, 2026. All rights reserved.