Who are your favorite radio personalities from your past? Wouldn’t it be great if you could listen to The Real Don Steele, Dan Ingram, or Dr. Don Rose every single day again playing the latest hits and yakking about current issues? What if the late, great Vin Scully still did current Dodgers games?
It’s not only possible, but it won’t be long before it happens.
I would love to hear Paul Harvey’s folksy commentary on yet another Trump-Biden election. (I’d love to hear it but it would still be creepy and somewhat offensive because he’s dead. But that’s just me.)
AI voice and image-mimicking technology is about to become the biggest practical and ethical problem facing the radio, TV, and movie industries.
If it can be done, it will be done. That’s an observation attributed to several sources, the Bible among them, but regardless of who said it first, it’s an inexorable truth, and ethics has nothing to do with it.
Artificial Intelligence is a genie that can’t be stuffed back into the bottle. AI radio disc jockeys are already here. So are nonhuman voice actors.
“There are jobs that would have gone to voice actors that are now going to synthetic voices.”
Tim Friedlander is president and co-founder of the National Association of Voice Actors. He told me that AI can’t yet replicate human emotion but admits that, for much of the work, it doesn’t matter.
“For the most part you can definitely tell the difference, an AI can’t act the same way or perform the same way that a human actor can, but in a lot of these e-learning or training videos or informational videos it’s purely a transaction of information. There’s no need for an emotional transaction. It’s just purely getting information across.”
Friedlander says he’s hearing regularly from voice actors who are losing gigs. All he can do is advocate on their behalf to protect human rights from being plowed under by new voice and image-mimicking technology.
“There are no federal laws that give you the right to your voice. So, none of us own the right to the sound of our voice. We potentially have rights over a (specific) performance we’ve given. If we’re a celebrity, we have some right of publicity that could possibly protect us in some capacity but we, as citizens in the United States don’t have the right to (own) our voices.
“That’s a thorny problem when it comes down to trying to codify it, to pass laws, especially when you’ve got a bunch of people who are passing the laws, who barely know how to use their phones.”
Tim’s undeniably right about that. But the bigger question is, after we’ve pounded on our Congressional representatives to preserve individual rights for actors, narrators, and audiobook readers, will it make any difference in the long run?
Spotify already has a very good AI disc jockey who not only sounds realistic but can address you by name, play the specific music you want to hear, and relate to you personally. In its first incarnation, it impressively mimics the voice and delivery style of real-life deejay Xavier “X” Jernigan, Spotify’s Head of Cultural Partnerships, who previously hosted Spotify’s morning show, The Get Up.
After hearing the Spotify demo I reacted with a mind-blown “Whoa!” as if I were Kramer on Seinfeld. Now I’m wondering if I can get Robert W. Morgan and Bobby Ocean as my personalized deejays.
Like it or not, AI-generated content and voices, mimicked and newly created, are changing what we anachronistically call radio.
It’s time to get up to speed and deal with it.
Though we try to reassure ourselves that AI voice technology will never be able to match the soul and nuance of life expressed by living, trained human voices, we’re required to ask ourselves two questions: First, are we sure of that? Second, will anybody care?
Unanswerable questions aside, we still have work to do.
We must stop resisting inevitable change—not because our ethical concerns are invalid, but because we can’t stop the inevitable. All we can hope to do is manage the challenges and that’s a tall order.
Two bills now stewing in Congress, the No AI Fraud Act and the No FAKES Act, are designed to establish voice and image rights. They’re good first attempts to deal with the issue, but they only address AI use as far as the technology can currently be defined and used; they can’t anticipate future developments and legal loopholes. Opponents say each bill, as written, would cause more problems than it would solve.
Constitutional law and Supreme Court expert David Coale, a partner with Lynn Pinker Hurst & Schiffman in Dallas, explains the legal considerations.
“I’m sympathetic but we already have two complicated bodies of tort law in this area—defamation laws where you can’t lie about someone, and fraud laws where you can’t pretend to be someone you aren’t. Beyond that, you’re well into activity protected by the First Amendment. Adding another complicated body of law on top of all that really does risk causing more problems than it solves.”
Coale is just bringing us back to reality. Lawyers will continue writing contracts, filing suits, and arguing the Constitution. Infotainment entrepreneurs and those who go by the trendy title “influencers” will ply their trades as profitably as possible. In what we still think of as radio, we will, too, as long as there’s an appetite for information and an exchange of ideas.
If we’re to meet the future we have to embrace new ways to create, disseminate, and sell content. We need to leave nostalgia in our shoebox of old pictures and forget much of the how but not the why of what we’ve learned.
Once we’re on that road we can let the marketplace guide us.
I’m not sure I want to hear Vin Scully explain the ghost runner at second going into the tenth inning. Even the best AI can only draw upon his public record to guess what he might have thought and how he would have said it. I like to think Vin hated the idea and would explain why to us with his famous clarity and conviction.
I knew Dr. Don Rose, and my first thought about listening to him again in real time was that, as much as I miss him, I don’t want to hear a genuine-sounding fake of him cracking one-liners about personal pronouns or Taylor Swift. Or do I? He would make us laugh at the silliness of both subjects without offending anyone, and he’d stamp it with a horn honk and a giggle timed perfectly to hit the vocal.
We can only imagine where AI will lead us, and yet we can’t.
What would Jesus do in a given situation? We’ll soon be told and probably even hear it in his own impressively imagined and digitized voice. A lot of people will be pissed.
In radio, we need to stop hand-wringing about these things and start planning how to use it all to create a wonderfully enhanced experience for listeners and to turn a profit in the process.
“Progress lies not in enhancing what is, but in advancing toward what will be.” -Kahlil Gibran