(Bloomberg) -- OpenAI has created a board committee to evaluate the safety and security of its artificial intelligence models, a governance change made weeks after its top executive on the subject resigned and the company effectively disbanded his internal team.

The move also follows damning criticism of OpenAI’s governance by two former board members, Helen Toner and Tasha McCauley, in The Economist on Sunday. 

The new committee will spend 90 days evaluating the safeguards in OpenAI’s technology before giving a report. “Following the full board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security,” the company said in a blog post on Tuesday.

OpenAI also said that it has recently started to train its latest AI model.

The private firm’s recent rapid advances in AI have raised concerns about how it manages the technology’s potential dangers. Those worries intensified last fall when Chief Executive Officer Sam Altman was briefly ousted in a boardroom coup after clashing with co-founder and chief scientist Ilya Sutskever over how quickly to develop AI products and the steps to limit harms. 

Those concerns returned this month after Sutskever and a key deputy, Jan Leike, left the company. The scientists ran OpenAI’s so-called superalignment team, which focused on long-term threats of superhuman AI. Leike, who resigned, wrote afterward that his division was “struggling” for computing resources within OpenAI. Other departing employees echoed his criticism. 

Read More: OpenAI Dissolves Key Safety Team After Chief Scientist’s Exit

Following Sutskever’s departure, OpenAI integrated what remained of his team into its broader research efforts rather than maintaining it as a separate entity. OpenAI co-founder John Schulman now oversees superalignment research as part of an expanded remit and takes on the new title of Head of Alignment Science.

The startup has at times struggled to manage staff departures. Last week, OpenAI nixed a policy that canceled former staffers’ equity if they spoke out against the company.

Toner and McCauley, who stepped down from OpenAI’s board after Sam Altman was reinstated as CEO last year, criticized Altman’s leadership over the weekend in a guest essay for The Economist and warned that the departures of Sutskever and other safety-focused team members “bode ill for the OpenAI experiment in self-governance.”

A spokesperson said OpenAI was aware of criticism from ex-employees and anticipated more, adding that the company was working to address concerns.

OpenAI’s new safety committee will consist of three board members — Chairman Bret Taylor, Quora CEO Adam D’Angelo and ex-Sony Entertainment executive Nicole Seligman — along with six employees, including Schulman and Altman. The company said it would continue to consult outside experts, naming two of them: Rob Joyce, a Homeland Security adviser to Donald Trump, and John Carlin, a former Justice Department official under President Joe Biden.

(Updates with further details of co-founder’s role in seventh paragraph)

©2024 Bloomberg L.P.