How Celebrities Are Fighting to Protect Their Image From a Tsunami of AI Deepfakes
Celebrities are facing a rapidly escalating problem: unauthorized AI-generated content misusing their name, image, and likeness (NIL) is flooding the internet. This surge in fake content, driven by generative AI, is outpacing the efforts of celebrities and their teams to combat infringements. The problem has grown so quickly that handling it feels like a never-ending game of whack-a-mole. According to exclusive data, the volume of such violations has skyrocketed since late 2022. Industry sources say that while some steps are being taken to mitigate the issue, these efforts are still in their early stages and are applied inconsistently across different teams.
Traditionally, handling NIL infringements involved issuing Digital Millennium Copyright Act (DMCA) takedown requests, usually a manual task performed by a celebrity’s legal representatives, managers, and publicists. These teams would compile weekly or monthly reports detailing infringements and removals. However, this manual approach has become inadequate in the face of the vast amount of offending content. Some VIPs have even started hiring cybersecurity firms to help with the growing number of deepfake violations, but even this has limitations. “Even with these legal teams and cybersecurity companies finding these infringements, they’re still missing so much because it’s a very manual process,” one agency source explained.
The volume of violations isn’t the only challenge; content is increasingly disseminated across various platforms, making detection and removal even harder. For instance, some fake testimonial ads have been found on adult websites.
To combat this, new automated detection tools are being introduced. Companies like Vermillio and Loti, which are now partnering with agencies such as WME, offer automated services to detect and remove infringing content more efficiently. These solutions are seen as more comprehensive and less reliant on the manual work of large teams. “More talent will begin to take such actions because they’re automated and more all in one,” an industry insider shared.
Despite these advances, not all tools are equally effective at addressing the scale of the problem. Few startups can detect AI-generated content, match it to a specific celebrity, and then issue takedown requests automatically. Nonetheless, automated solutions like those from Loti and Vermillio are expected to become a crucial part of managing this rising challenge.
Source: Variety