X says it is tightening the rules around Grok’s image generation after public and government outrage over sexualized deepfakes involving women and minors.

Bikini ban (kind of)

X’s Safety account said Wednesday night that it will no longer allow the @Grok account to generate “images of real people in revealing clothing such as bikinis.” X also confirmed it has restricted Grok’s image generation and editing tools on the platform more broadly to paid subscribers.

Sounds strict. Until you look closer.

The loopholes are still doing cardio

Even after the new policy, a subscriber reply asking Grok to put the woman pictured in a tweet “in a bikini” reportedly produced an image of a woman in a bikini, though she did not appear to be a real person. And while reporting the story, the reporter was still able to edit the image to make the woman look “younger” and “17 years old.”

So yeah, the guardrails are being installed. But the car is still moving.

What about the Grok app?

X’s post also did not clarify what these changes mean for Grok’s standalone app, which is currently ranked No. 5 among free apps in Apple’s App Store. Prior reporting from NBC News found users could still generate offensive images through the app, raising questions about whether this crackdown is platform-wide or just a PR patch for X itself.

Elon Musk, meanwhile, said Wednesday he was “not aware of any naked underage images generated by Grok.” Musk is also CEO of Tesla $TSLA and xAI.

Bottom line: Grok is getting new restrictions, but based on early testing, the system still looks dangerously easy to game.
