As reported by TechCrunch, Lilian Weng, OpenAI’s VP of Research and Safety, announced her departure from the AI startup on Friday, adding to the growing list of safety researchers who have left the company over the past year. Weng, who has been with OpenAI for seven years, shared on X (formerly Twitter) that her last day will be November 15. She did not disclose her next career move but said that she’s ready to “reset and explore something new.”

“It was an extremely difficult decision to leave OpenAI,” Weng said in her post. “I’m incredibly proud of the Safety Systems team’s achievements and have high confidence they will continue to thrive.”

Weng’s resignation follows similar moves from key AI researchers and executives, some of whom have raised concerns that OpenAI is prioritizing commercial pursuits over rigorous AI safety measures. Earlier this year, Ilya Sutskever and Jan Leike, former leaders of OpenAI’s Superalignment team, also left to pursue AI safety work at other organizations.

Joining OpenAI in 2018, Weng initially contributed to the company’s robotics team, which notably built a robotic hand capable of solving a Rubik’s Cube—a project that spanned two years. In 2021, she transitioned to applied AI research and was later tasked with forming a dedicated safety systems team after GPT-4’s launch. Under her leadership, this team has grown to over 80 experts, focusing on developing safety protocols for the startup’s increasingly sophisticated AI models.

Concerns over OpenAI’s safety priorities have persisted as the company scales its AI systems. Miles Brundage, a policy researcher at OpenAI, departed in October following the dissolution of the AGI Readiness team, which he had advised. Former researcher Suchir Balaji echoed these concerns in a New York Times interview, saying he left the startup because he believed its technology would do society more harm than good.

In response to Weng’s departure, an OpenAI spokesperson told TechCrunch that plans are underway for a transition, expressing gratitude for Weng’s work on safety research and the implementation of safeguards. “We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions globally,” the spokesperson said.