Now the twist: No actual spokesperson took part. The reporters interviewed a virtual clone created by generative artificial intelligence.
Using AI to make avatars isn’t all that new — relatively speaking, anyway — but the speed of its development is unsettling. Deepfakes, which digitally replace one person with the image and/or voice of another, are increasingly sophisticated and harder to tell from the real thing. (Watch for lots more of them in the run-up to the election.)
Then along comes this report from NPR about a Chinese AI company that can create a seemingly real digital replica of a human from just a one-hour scan and 100 spoken phrases. In one example, it allowed a social media influencer to hawk products during multiple, simultaneous livestreams — all without actually being on camera. (In another, an executive digitally cloned his cherubic son to maintain their father-son bond when the child grew into a surly teen.)
What’s seen as a boon for e-commerce worries me from a PR standpoint. How long before we start using avatars to speak on behalf of companies and clients? Digital spokespersons are always available, always fresh and always on message. Best of all, you don’t have to pay them, offer health insurance or contribute to their 401(k). And as long as they deliver the message accurately, should it matter?
Yes. Profoundly.
Fundamental to our profession is trust. Read the PRSA Code of Ethics, and you’ll find that word aplenty. Trust is rooted in the human experience — what we say, how we say it, how we engage with one another in the saying. It’s rooted in being accountable and reliable. Remove the human element, and you have the words but not the ownership. Not the soul.
I’m especially mindful of this when it comes to delivering bad news. No one likes to do it. As technology advances, it’ll be more than a little tempting to tap a few keys and let a virtual you do the dirty work.
But here’s the thing: Delivering bad news should suck. It should hurt. We should feel at least some of the pain felt by those directly affected.
Over my career, I’ve served as a media spokesperson for over two dozen major layoff announcements, affecting scores of people to thousands — not to mention whole communities. Every last one of those experiences was awful. But being there with the people affected, doing what I could to support them, and demonstrating how my clients were doing what they could (or at least what they were prepared to do) to ease the impact brought a vital human element. Maybe that didn’t stop the breaking of trust, but I believe it was essential to the long process of rebuilding it.
If nothing else, it was the human thing to do. No AI avatar will ever pull that off.
I’m partly relieved that the U.S. and other countries are talking about ethical guidelines for AI development and use. I hope such talk becomes concrete action. The PR profession must take a lead role in making it so.