As reported by TechCrunch, Lilian Weng, OpenAI’s VP of Research and Safety, announced her departure from the AI startup on Friday, adding to the growing list of safety researchers who have left the company over the past year. Weng, who has been with OpenAI for seven years, shared on X (formerly Twitter) that her last day will be November 15. She did not disclose her next career move but said that she’s ready to “reset and explore something new.”
“It was an extremely difficult decision to leave OpenAI,” Weng said in her post. “I’m incredibly proud of the Safety Systems team’s achievements and have high confidence they will continue to thrive.”
Weng’s resignation follows similar moves from key AI researchers and executives, some of whom have raised concerns that OpenAI is prioritizing commercial pursuits over rigorous AI safety measures. Earlier this year, Ilya Sutskever and Jan Leike, former leaders of OpenAI’s Superalignment team, also left to pursue AI safety work at other organizations.
Joining OpenAI in 2018, Weng initially contributed to the company’s robotics team, which notably built a robotic hand capable of solving a Rubik’s Cube—a project that spanned two years. In 2021, she transitioned to applied AI research and was later tasked with forming a dedicated safety systems team after GPT-4’s launch. Under her leadership, this team has grown to over 80 experts, focusing on developing safety protocols for the startup’s increasingly sophisticated AI models.
Concerns over OpenAI’s safety priorities have persisted as the company scales its AI systems. Miles Brundage, a policy researcher at OpenAI, departed in October following the dissolution of the AGI Readiness team, which he had advised. Former researcher Suchir Balaji echoed these concerns in a New York Times interview, saying he left the startup because he believed its technology would bring more harm than benefit to society.
In response to Weng’s departure, an OpenAI spokesperson told TechCrunch that a transition plan is underway and expressed gratitude for Weng’s contributions to safety research and the development of safeguards. “We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions globally,” the spokesperson said.