(Bloomberg) -- Helen Toner, a former OpenAI board member, said that the board didn’t know about the company’s 2022 launch of its chatbot ChatGPT until afterward — and only found out about it on Twitter.

In a podcast called The TED AI Show, Toner gave her fullest account to date of the events that prompted her and other board members to fire Chief Executive Officer Sam Altman in November of last year. In the days that followed Altman’s sudden ouster, employees threatened to quit, Altman was reinstated, and Toner and other directors left the board. 

“When ChatGPT came out in November 2022, the board was not informed in advance about that,” Toner said on the podcast. “We learned about ChatGPT on Twitter.”

The company’s launch of ChatGPT was relatively quiet: OpenAI simply called the chatbot an artificial intelligence model that “interacts in a conversational way.” But over the following days and weeks, ChatGPT’s ability to generate human-sounding text made it a massive hit, and helped pave the way for the current boom in AI.

OpenAI did not immediately provide a comment. In a statement provided to the TED podcast, OpenAI’s current board chair, Bret Taylor, said, “We are disappointed that Ms. Toner continues to revisit these issues.” He also said that an independent review of Altman’s firing “concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.” 

Taylor also said that “over 95% of employees” asked for Altman’s reinstatement, and that the company remains focused on its “mission to ensure AGI benefits all of humanity.”

The board’s reasons for firing Altman have been the source of intense speculation in Silicon Valley. At the time, the board said only that Altman had not been “consistently candid” in his interactions with directors. In the months that followed, new details came to light about tensions between Altman, the board and some employees. 

In the podcast, Toner also said that Altman didn’t disclose his involvement with OpenAI’s startup fund. And she criticized his leadership on safety. “On multiple occasions, he gave us inaccurate information about the formal safety processes that the company did have in place,” she said, “meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

Toner said that after years of such events, “all four of us came to the conclusion that we just couldn’t believe things that Sam was telling us.” 

In an article in the Economist over the weekend, Toner and Tasha McCauley, another former director, expounded on their thinking, saying that OpenAI was not positioned to regulate itself and that governments should intervene to ensure that powerful AI is developed safely. 

--With assistance from Rachel Metz.

©2024 Bloomberg L.P.