Former employees criticize OpenAI’s stance on SB 1047

Employee Criticism

OpenAI, the company behind the AI chatbot ChatGPT, has seen a wave of departures from its AGI safety team in recent months. According to Daniel Kokotajlo, a former governance researcher at the company, nearly half of the team focused on mitigating the risks of superintelligent AI has left. Kokotajlo said the team once numbered around 30 people, but successive departures have reduced it to about 16 members.

He believes that people primarily focused on AGI safety and preparedness are being increasingly marginalized at OpenAI. The departures include several high-profile researchers, among them Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and cofounder John Schulman.

Departures from OpenAI raise concerns

These exits followed the resignations of chief scientist Ilya Sutskever and Jan Leike, who co-headed OpenAI’s “superalignment” team working on ways to control “artificial superintelligence.”

Kokotajlo expressed concern that profit motives might be leading the company to take risks, and he described the Big Tech race to develop AGI as "reckless." He also voiced disappointment with OpenAI's opposition to California's SB 1047, a bill aiming to put guardrails on AI development. An OpenAI spokesperson defended the company's track record and commitment to safety, emphasizing its engagement with various stakeholders to debate AI risks. They stated that OpenAI is "proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk."

Despite the departures, OpenAI has created a new safety and security committee responsible for critical safety decisions and appointed Carnegie Mellon University professor Zico Kolter, who focuses on AI security, to its board of directors.

The issue of AI safety has drawn mixed reactions from the wider AI research community: some experts consider the focus on AI's existential risks overhyped, while others believe such scrutiny is necessary. The debate over state-level versus federal regulation continues, with significant implications for the future of AI safety and innovation.