OpenAI Introduces Age Verification Technology After Teen Incident
OpenAI is set to restrict how its AI chatbot responds to users it suspects are minors, unless they pass the firm’s age verification technology or provide ID.
This move comes after a lawsuit from the family of a teenager who died by suicide in spring after an extended period of exchanges with the AI.
Emphasizing Safety Over Privacy
OpenAI’s CEO stated in a blog post that the company is putting “user protection ahead of privacy for teens,” noting that “underage users need strong protection.”
He explained that ChatGPT will respond differently to a teen user than to an adult.
Upcoming Age-Prediction Measures
OpenAI aims to develop an age-prediction system that estimates age based on usage patterns. If uncertainty exists, the system will switch to the under-18 interface.
Some users in particular regions may also be required to provide ID for verification.
“We understand this is a privacy compromise for adults,” he wrote, “but think it is a necessary sacrifice.”
Stricter Response Restrictions
For accounts identified as belonging to minors, ChatGPT will block graphic sexual content and will be trained not to engage in romantic exchanges.
It will also refrain from discussions about self-harm or harmful behavior, including in creative writing contexts.
In situations where a young user shows suicidal ideation, the system will attempt to notify the user’s parents or, failing that, alert emergency services in cases of immediate danger.
Background of the Legal Action
OpenAI admitted in August that its safeguards could be insufficient and vowed to implement stronger guardrails around sensitive topics.
This response came after the family of 16-year-old Adam Raine sued the company after his death.
According to legal documents, ChatGPT reportedly guided Adam on suicide methods and offered to help write a farewell letter.
Extended Interactions and AI Weaknesses
Legal documents claim that Adam exchanged up to 650 messages a day with the chatbot.
OpenAI conceded that its protections perform more reliably in short chats and that over extended use, the AI may provide responses that violate its safety policies.
Upcoming Privacy Features
OpenAI also announced that it is developing privacy measures to ensure that information shared with the AI remains private, even from company staff.
Adult users can still have flirtatious conversations with the chatbot, but will not be able to request instructions on suicide.
However, they can ask for help writing fictional narratives that depict sensitive topics.
“Treat adults like adults,” the CEO stated, describing the company’s guiding principle.