OpenAI is rolling out a new “age prediction” system for ChatGPT users, basically a behind-the-scenes filter meant to detect if someone is under 18 before they ever say it.

And the timing matters, because OpenAI is also signaling it wants to allow more adult content like “erotica” soon, while still trying to avoid a PR disaster around minors accessing it.

How OpenAI will predict your age

Instead of just asking “are you 18+?”, OpenAI says the model will look at a mix of behavioral and account-level signals, including:

  • how long the account has existed

  • typical times of day someone is active

  • usage patterns over time

  • the user’s stated age

So it’s not any one signal; it’s a pattern-recognition system that combines them to infer whether you’re likely underage.
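OpenAI hasn’t published how these signals get combined, but the general shape of such a system is familiar: weight each weak signal, sum them into a score, and squash that into a probability. Here’s a minimal sketch of that idea; the signal names mirror OpenAI’s list, but every weight and threshold below is invented for illustration (a real system would learn them from labeled data):

```python
import math
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical stand-ins for the signals OpenAI describes.
    account_age_days: int         # how long the account has existed
    late_night_activity: float    # fraction of sessions late at night (0-1)
    school_hours_activity: float  # fraction of sessions on weekday afternoons (0-1)
    stated_age: int               # the user's self-reported age

def minor_likelihood(s: AccountSignals) -> float:
    """Combine weak signals into a rough under-18 probability.

    All weights are made up for illustration, not taken from OpenAI.
    """
    score = 0.0
    score += 1.0 if s.account_age_days < 90 else -0.5    # newer accounts skew younger
    score += 1.5 * s.school_hours_activity               # weekday-afternoon usage
    score += 0.5 * s.late_night_activity
    score += -2.0 if s.stated_age >= 18 else 2.0         # stated age is one input, not the answer
    return 1 / (1 + math.exp(-score))                    # logistic squash to (0, 1)
```

The point of the sketch: the stated age is just one term in the score, so lying about your birthday doesn’t zero out the other signals.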

What happens if you get flagged as a minor

If the system suspects a user is under 18, OpenAI says it will restrict or reduce content involving:

  • graphic violence

  • harmful viral challenges

  • sexual or romantic role-play

  • depictions of self-harm

  • content promoting extreme beauty standards, unhealthy dieting, or body shaming

In other words: if you get flagged, ChatGPT basically becomes a much more locked-down version of itself.
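One way to picture the flagged experience is a default-allow policy for adults with a per-category blocklist for flagged accounts. The category names below paraphrase OpenAI’s announcement; the structure itself is purely illustrative, not OpenAI’s implementation:

```python
# Hypothetical policy table: categories restricted when an account
# is flagged as likely under 18. Names paraphrase OpenAI's list.
RESTRICTED_FOR_MINORS = {
    "graphic_violence",
    "harmful_viral_challenges",
    "sexual_or_romantic_roleplay",
    "self_harm_depictions",
    "extreme_beauty_or_dieting_content",
}

def allowed(category: str, flagged_as_minor: bool) -> bool:
    """Default-allow; block only the listed categories for flagged accounts."""
    return not (flagged_as_minor and category in RESTRICTED_FOR_MINORS)
```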

If OpenAI gets it wrong, you’ll need verification

OpenAI also says if an adult gets incorrectly flagged as under 18, they’ll have to submit a selfie to an identity verification service to remove those restrictions.

So the tradeoff is pretty clear:

More safety for minors, but potentially more friction for everyone else.

Why this is happening now

This age prediction system is part of OpenAI’s broader plan to reduce harmful outcomes when users, especially minors, bring mental health struggles to ChatGPT.

But it’s also the infrastructure they need before enabling adult content generation.

Because once ChatGPT can generate erotica, “we’ll just trust users to be honest about their age” isn’t going to cut it anymore.

That’s the real story here.
