January 15, 2026

    US senators demand answers from X, Meta, Alphabet on sexualized deepfakes | TechCrunch


    The tech world’s non-consensual, sexualized deepfake problem is now bigger than just X.

    In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit and TikTok, several U.S. senators are asking the companies to provide proof that they have “robust protections and policies” in place, and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.

    The senators also demanded that the companies preserve all documents and information relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies.

    The letter comes hours after X said it updated Grok to prohibit it from making edits of real people in revealing clothing, and restricted image creation and edits via Grok to paying subscribers. (X and xAI are part of the same company.)

    Citing media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators noted that platforms’ guardrails against users posting non-consensual, sexualized imagery may not be enough.

    “We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter reads.

    Grok, and consequently X, have been heavily criticized for enabling this trend, but other platforms are not immune.


    Deepfakes first gained popularity on Reddit, when a page displaying synthetic porn videos of celebrities went viral before the platform took it down in 2018. Sexualized deepfakes targeting celebrities and politicians have multiplied on TikTok and YouTube, though they usually originate elsewhere.

    Meta’s Oversight Board last year called out two cases of explicit AI images of female public figures, and the platform has also allowed nudify apps to sell ads on its services, though it later sued one such advertiser, CrushAI. There have been multiple reports of kids spreading deepfakes of peers on Snapchat. And Telegram, which isn’t included on the senators’ list, has also become notorious for hosting bots built to undress photos of women.

    X, Alphabet, Reddit, Snap, TikTok and Meta did not immediately respond to requests for comment.

    The letter demands the companies provide:

    • Policy definitions of “deepfake” content, “non-consensual intimate imagery,” or similar terms.
    • Descriptions of the companies’ policies and enforcement approach for non-consensual AI deepfakes of people’s bodies, non-nude pictures, altered clothing and “virtual undressing.”
    • Descriptions of current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
    • How current policies govern AI tools and image generators as they relate to suggestive or intimate content.
    • What filters, guardrails or measures have been implemented to prevent the generation and distribution of deepfakes.
    • Which mechanisms the companies use to identify deepfake content and prevent it from being re-uploaded.
    • How they prevent users from profiting from such content.
    • How the platforms prevent themselves from monetizing non-consensual AI-generated content.
    • How the companies’ terms of service enable them to ban or suspend users who post deepfakes.
    • What the companies do to notify victims of non-consensual sexual deepfakes.

    The letter is signed by Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-N.Y.), Mark Kelly (D-Ariz.), Ben Ray Luján (D-N.M.), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).

    The move comes just a day after xAI’s owner Elon Musk said that he was “not aware of any naked underage images generated by Grok.” Later on Wednesday, California’s attorney general opened an investigation into xAI’s chatbot, following mounting pressure from governments across the world incensed by the lack of guardrails around Grok that allowed this to happen.

    xAI has maintained that it takes action to remove “illegal content on X, including [CSAM] and non-consensual nudity,” though neither the company nor Musk has addressed the fact that Grok was allowed to generate such edits in the first place.

    The problem isn’t confined to non-consensual, sexualized imagery, either. While not all AI-based image generation and editing services let users “undress” people, they still make it easy to generate deepfakes. To pick a few examples, OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children; Google’s Nano Banana seemingly generated an image showing Charlie Kirk being shot; and racist videos made with Google’s AI video model are garnering millions of views on social media.

    The issue grows even more complex when Chinese image and video generators come into the picture. Many Chinese tech companies and apps — especially those linked to ByteDance — offer easy ways to edit faces, voices and videos, and those outputs have spread to Western social platforms. China enforces synthetic-content labeling requirements that have no federal counterpart in the U.S., where users instead rely on fragmented and unevenly enforced policies from the platforms themselves.

    U.S. lawmakers have already passed some legislation seeking to rein in deepfake pornography, but the impact has been limited. The Take It Down Act, which became federal law in May, is meant to criminalize the creation and dissemination of non-consensual, sexualized imagery. But a number of provisions in the law make it difficult to hold image-generating platforms accountable, as they focus most of the scrutiny on individual users instead.

    Meanwhile, a number of states are trying to take matters into their own hands to protect consumers and elections. This week, New York Governor Kathy Hochul proposed laws that would require AI-generated content to be labeled as such, and ban non-consensual deepfakes in specified periods leading up to elections, including depictions of opposition candidates.
