The Charge
Following the launch of Copyleaks’ AI image detection software, which addresses the growing risks of AI-generated image misuse, we sought to tie the new offering to a real-world, high-impact issue that demonstrated the immediate relevance of AI image detection.
We identified a fast-moving issue on X where Grok was being used to generate nonconsensual, sexualized images of real women. This trend quickly emerged as one of the most significant and visible examples of AI image abuse to date.
The Solve
Through an observational review of Grok’s publicly accessible photo tab, we documented an aggressive rate of roughly one nonconsensual sexualized image per minute in the observed image stream. From there, we drafted a blog post on behalf of Copyleaks and immediately put it, along with its standout statistic, to work in media outreach.
The Results
Copyleaks’ data and expert commentary were featured in more than 50 articles examining this misuse of AI-generated images. We secured coverage in top-tier publications including The New York Times, The Wall Street Journal, CNN, The Washington Post, Los Angeles Times, Rolling Stone, The Verge, The Atlantic, and more. A significant portion of the coverage included direct backlinks to the Copyleaks blog. The campaign drove more than 1,500 new referral visits to the Copyleaks website and reinforced the company’s position as a go-to authority on AI image detection at a critical moment.