Natalie Monbiot, head of strategy at Hour One, explains that the term “deepfakes” implies generative artificial intelligence being used in ways it wasn’t meant to be used. What Hour One does, she says, is authorized from the start.
Hour One, she said, is an AI company with a “legal and ethical framework” governing when it creates digital likenesses of people.
In the age of deepfakes, there is an important difference between authorized and unauthorized media. Deepfakes have often been associated with revenge porn and fake news. The term “deepfake” can be traced back to a Reddit user from 2017, known as “deepfakes,” who shared deepfake pornography on the associated subreddit.
Recent research suggests that in the future we’ll encounter interactive deepfakes and compositional deepfakes. Interactive deepfakes let people seemingly talk to a celebrity, while compositional deepfakes will be used to create hordes of fake videos and visuals that compile a “synthetic history.”
Deepfakes continue to show up in the news, most recently in stories about celebrity deepfakes in ads and about Bruce Willis. The Wall Street Journal has covered unauthorized deepfakes of Tom Cruise, Elon Musk, and Leonardo DiCaprio spreading after appearing in ads. Rumors have also circulated, falsely, that Bruce Willis signed away the rights to his deepfake likeness.
But is deepfaking an ethical practice? Vendors who specialize in synthetic media technology think so. “What about authorized deepfakes used for business video production?” they asked.
Only a few use cases are malicious, they claim. Most deepfake videos are fully authorized and popular in enterprise settings – employee training, for instance. Or they may be created by people such as celebrities and company leaders who want to take advantage of synthetic media by “outsourcing” work to an AI twin.
In these cases, the strategy is to use synthetic media — in the form of virtual humans — to handle expensive, complex, and unscalable challenges that are common in traditional video production. Hour One claims to have made 100,000 videos in the past three and a half years, including content for Berlitz, NBC Universal and DreamWorks.
The future is bright for enterprise use of deepfakes, given the growth of generative AI in mainstream culture. Among Forrester’s top predictions for 2023 was that 10% of Fortune 500 enterprises will generate content with AI tools. Forrester mentioned startups like Hour One and Synthesia, which “use AI to accelerate video content generation.”
The digital media industry is poised for substantial growth over the next five to seven years. One report predicts that as much as 90% of new media could be generated by artificial intelligence.
The business side of the deepfakes debate is “hugely under-appreciated,” insists Victor Riparbelli, CEO of London-based Synthesia, which describes itself as an “AI video creation company.” Founded in 2017, it has more than 15,000 customers and a team of 135 year-round employees. Its clients include fast-food giant McDonald’s, outsourcing firm Teleperformance, and global advertising holding company WPP.
One company whose business focuses exclusively on personalized deepfakes is Deepcake. The company leaned into the enterprise space in the past but in recent years has concentrated on working with celebrities and influencers on brand endorsements. For example, it created a “digital twin” of Bruce Willis for an advertisement for Russian telecommunications company MegaFon. The ad led people to believe that Deepcake owned Willis’ digital twin, which it does not (although it does work with clients who want to channel their personalities or likenesses through personalized deepfakes).
“We work directly with stars from talent management agencies,” said CEO Maria Chmir, “to develop digital twins that are ready to be put into any type of content.” TikTok commercials, for example. “This is a new way to produce content without classic burdens like constantly scouting locations and a very long post-production process.” Users can also deploy fully synthesized people as brand ambassadors – for a few dozen dollars, they will say whatever script the user gives them. “Of course, you can’t clone charisma and make someone improvise,” she said. “But we’re working on it.”
Synthesia plans to add emotions and gestures to its videos over the coming months. Hour One recently released 3D environments to create a more immersive experience.
“If you keep moving up that scale, you unlock more use cases and applications. So next year, we’ll see a lot more content powered by artificial intelligence in marketing. We’re going to see a lot less text and more audio and video in how we communicate every day,” said Riparbelli.