Late last year, users on X began widely prompting the social media platform’s AI tool, Grok, to sexualize and “nudify” real people at random, using photos that had been shared online, the majority of them girls and women who had never consented to such use of their images.
Legal and financial ramifications have since impeded the flow of this kind of material on X, though company owner Elon Musk hardly discouraged the behavior: he himself later posted bikini deepfakes.
But even as Musk and X were being taken to court and facing the prospect of significant fines, the world had already changed. The Grok scandal was not a blip but a dangerous new way of life on the internet, a Pandora’s box of automated, fabricated and exploitative material. Platforms and technologies that do not face the same reputational, legal or financial risk as X—to say nothing of the moral qualms—will surely try to replicate Grok’s “success” as they attempt to stay one step ahead of the sheriffs in this new Wild West.
Keep in mind this was just one social media platform, albeit one of the more popular (or notorious) ones, owned by the richest man in the world. Just imagine the nightmare when all kinds of automated agents and AI-driven systems begin operating across the internet at scale. Content can be scraped, remixed, reposted, and redistributed automatically, meaning a single fabricated image or video will be endlessly repurposed and pushed across new channels without human involvement.
Documented outcomes already include harassment, doxxing, reputational damage, and depression. As the technology becomes more widespread, we will likely see deeper psychological consequences, including paranoia and people withdrawing from public life altogether.
If someone can be “puppeteered” without their participation, the implications and potential fallout are enormous. Once fabricated content is posted, it can replicate across dozens of sites and messaging channels before the target even knows it exists. The tools to generate convincing fakes have outpaced the tools to detect or stop them by a wide margin.
The new bleeding edge lives on the blood of creators
This is the stuff of sci-fi movies, the nightmare scenario people have worried about since the early days of the web. No line now exists between what is real and what is synthetic. When that line collapses, the consequences extend beyond individual victims. Public trust erodes, reputations can be destroyed overnight, and misinformation becomes easier to weaponize because the average person no longer has reliable ways to verify authenticity. The foundational trust that underpins the internet’s entire value proposition also vanishes.
For creators who live on the internet and whose income depends on their image and audience trust, deepfakes represent not only a violation of privacy and a deeply disrespectful form of humiliation, but a complete undermining of their livelihood. Fake content can siphon attention away from legitimate work, damage a creator’s brand, or create confusion about what content is authentic. In some cases, bad actors monetize these fake images or videos themselves, effectively exploiting a creator’s likeness without permission while diverting revenue from the real creator.
As generative image systems race to market, creators will be left unprotected, their likenesses repurposed without permission or compensation. When safeguards, consent frameworks and identity protections are treated as afterthoughts, the burden of dealing with the consequences falls on the individuals being targeted, and that burden will be compounded for creators.
It’s a Napster moment for the new creator economy. Just as peer-to-peer file sharing broke the music industry’s revenue model overnight with the artists suffering the first and deepest of those cuts, synthetic media threatens to do the same to individual creators.
The solution isn’t pretending the demand doesn’t exist; it’s building better and safer infrastructure. That means verified creator identities, responsible discovery tools, and monetization systems that reward legitimate creators while protecting users.
What’s unsettling is that the “antidote”—the verification and detection systems that could protect people—continues to lag behind the “disease.” Like a virus, harmful technologies tend to evolve around guardrails when there is financial incentive to exploit them.
A more responsible approach to AI in sensitive areas like adult content starts with respecting consent and agency. That means focusing on systems that connect users with real creators who have chosen to participate, rather than generating synthetic representations of people without permission. It also means building technology that prioritizes authenticity, transparency and privacy from the start rather than trying to retrofit those protections later.
Incentivizing authenticity at the speed of deepfakes
As AI capabilities continue to grow, the real challenge will be ensuring that innovation does not come at the expense of human dignity and creator rights.
Platforms and developers will only build robust identity verification, deepfake detection, and discovery tools if there’s a sustainable business model behind them. That could mean systems where creators pay for verified identity credentials—similar to a “human-verified” checkmark—or where authenticated content is prioritized in discovery and commands higher value because audiences know it’s real.
In some cases, governments may also need to incentivize compliance through regulation or subsidies, just as they do in other industries where public safety and market incentives need to be aligned. Markets alone don’t regulate harmful behavior when there’s profit to be made. The internet is no different.
In other words, authenticity itself may become part of the product. If trust becomes scarce online, systems that guarantee real human creators and verifiable content could become one of the most valuable layers of the internet economy.
At the same time, technology can also play a key role in limiting AI abuse. One approach involves content authentication and provenance, where images and videos carry cryptographic signatures or watermarks that help verify whether content is authentic or manipulated. Another involves identity verification and likeness protection, allowing creators to establish a verifiable link between themselves and their real content so that impersonations or deepfakes can be detected more quickly. There needs to be an industry standard, a globally recognized system of identity and human authentication for content.
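To make the provenance idea concrete, here is a minimal sketch, assuming a simple scheme in which a creator signs a hash of their content with a private key and anyone can later verify that signature against the creator’s published public key. It uses Ed25519 primitives from the Python cryptography package; real provenance standards such as C2PA embed far richer signed metadata, so treat this as an illustration of the principle rather than an implementation of any particular standard.

```python
# Minimal content-provenance sketch: a creator signs a hash of their file,
# and anyone holding the creator's public key can verify it later.
# Assumes the 'cryptography' package (pip install cryptography); this is an
# illustrative scheme, not a real provenance standard such as C2PA.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Sign the SHA-256 digest of the content with the creator's key."""
    digest = hashlib.sha256(content).digest()
    return private_key.sign(digest)


def verify_content(
    public_key: Ed25519PublicKey, content: bytes, signature: bytes
) -> bool:
    """Return True only if the signature matches these exact bytes."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# The creator signs once at publish time...
creator_key = Ed25519PrivateKey.generate()
original = b"raw bytes of the creator's image or video"
signature = sign_content(creator_key, original)

# ...and any platform or viewer can check authenticity later.
public_key = creator_key.public_key()
print(verify_content(public_key, original, signature))           # True
print(verify_content(public_key, b"tampered bytes", signature))  # False
```

The design choice worth noting is that the signature binds to the exact bytes of the content: change a single pixel and verification fails, which is precisely what makes tampering and impersonation detectable at scale.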
Ultimately, the goal isn’t to stop innovation, but to make sure the systems surrounding it reinforce authenticity rather than undermine it, especially in a world where authenticity is becoming a rarer and more valuable commodity with each passing day.
—————————————-
About Tim Enneking:
Timothy is the CEO of Presearch, a decentralized, privacy-focused, Web3 search engine. He was initially invited to join the project seven years ago, after recommending it during a CNBC Asia interview on crypto, and he remained an advisor for four years. He rejoined Presearch in August 2023, when the founder invited him to become CEO and take the project to the next level.
He is the founder and Principal of Digital Capital Management, LLC (“DCM”), which runs CAF 2017, a crypto trading fund. He is also the founder and managing partner of Psalion, which manages two venture capital funds and a yield farming operation. For nine years ending in June 2024, Mr. Enneking was the CIO (Chief Investment Officer) of Mana Companies Asset Management, a medium-sized family office (which did not invest in crypto). Prior to those activities, Mr. Enneking founded and managed Tera Capital Fund, a fund of funds focused on Eastern Europe (established in 2004). Simultaneously, in 2013, he was engaged to manage the world’s first Bitcoin fund. Mr. Enneking also has extensive M&A experience, having completed more than 70 transactions with an aggregate transaction value of over US$12 billion. He speaks near-native French and Russian, as well as German. He has five university degrees, all in international business and law. You can follow him on LinkedIn.