A former OpenAI board member has explained why the directors made the now infamous decision to fire CEO Sam Altman last November. Speaking in an interview on The TED AI Show podcast, AI researcher Helen Toner accused Altman of lying to and obstructing OpenAI’s board, retaliating against those who criticized him, and creating a “toxic atmosphere.”
“The [OpenAI] board is a non-profit board established expressly for the purpose of ensuring that the company’s public-good mission comes first – above profits, investor interests, and other matters,” Toner told The TED AI Show host Bilawal Sidhu. “But for years, Sam made it very difficult for the board to actually do that work by withholding information, misrepresenting things happening within the company, and in some cases outright lying to the board.”
OpenAI fired Altman on November 17 last year, a move that surprised many both inside and outside the company. According to Toner, the decision was not taken lightly and was preceded by weeks of intensive discussion. The secrecy surrounding it was also by design, she said.
“It was very clear to all of us that as soon as Sam had any inkling that we were going to do something against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from… you know, even getting to the point where we could fire him,” Toner said. “So we were very careful and deliberate about who we told, which was basically almost no one beforehand, except obviously our legal team.”
Unfortunately for Toner and the rest of OpenAI’s board, their careful planning did not yield the desired results. While Altman was initially ousted, OpenAI quickly rehired him as CEO after days of outrage, accusations and uncertainty. The company also installed an almost entirely new board, removing those who had tried to oust Altman.
Why did OpenAI’s board fire CEO Sam Altman?
Toner did not specifically discuss the aftermath of that tumultuous time on the podcast. However, she did explain exactly why OpenAI’s board came to the conclusion that Altman had to leave.
Earlier this week, Toner and former board member Tasha McCauley published an op-ed in The Economist stating that they decided to remove Altman due to “long-standing patterns of behavior.” Toner has now provided examples of that behavior in her conversation with Sidhu – including a claim that OpenAI’s own board was not notified when ChatGPT was released, and only found out about it through social media.
“When ChatGPT came out [in] November 2022, the board was not informed about this in advance. We learned about ChatGPT on Twitter,” Toner claimed. “Sam failed to inform the board that he owned the OpenAI Startup Fund, even though he continually claimed he was an independent board member with no financial interest in the company. On multiple occasions he gave us inaccurate information about the small number of formal safety processes the company had in place, meaning it was basically impossible for the board to know how well those safety processes were working and what, if anything, needed to change.”
Toner also accused Altman of deliberately targeting her after he objected to a research paper she co-authored. Entitled “Decoding Intentions: Artificial Intelligence and Costly Signals,” the paper discussed the dangers of AI and included an analysis of the safety measures of both OpenAI and its competitor Anthropic.
However, Altman reportedly found the academic paper too critical of OpenAI and too complimentary toward its rival. Toner told The TED AI Show that after the paper was published last October, Altman began spreading lies to the other board members in an attempt to have her removed. This alleged incident only further damaged the board’s confidence in him, she said, as by then they had already been seriously discussing firing Altman.
“[F]or any individual case, Sam could always come up with an innocent-sounding explanation as to why it wasn’t that important or was misinterpreted or whatever,” Toner said. “But the end result was that after years of this kind of thing, all four of us who fired him [OpenAI board members Toner, McCauley, Adam D’Angelo, and Ilya Sutskever] came to the conclusion that we just couldn’t believe the things Sam was telling us.
“And that’s a completely unworkable place to be in as a board, especially a board that is supposed to provide independent oversight of the company, not just help the CEO raise more money. When you can’t trust the CEO, who is your main channel into the company, your main source of information about the company, that’s just completely impossible.”
Toner stated that OpenAI’s board had made efforts to address these issues, putting new policies and processes in place. However, other executives then reportedly began telling the board about their own negative experiences with Altman and the “toxic atmosphere he created.” This included allegations of lying and manipulation, backed up by screenshots of conversations and other documentation.
“They used the term ‘psychological abuse,’ told us they didn’t think he was the right person to lead the company to [artificial general intelligence], and told us they didn’t believe he could or would change, that there was no point in giving him feedback, and that there was no point in trying to solve these problems,” Toner said.
OpenAI CEO accused of retaliating against critics
Toner further addressed the loud outcry from OpenAI employees against Altman’s firing. Many posted on social media in support of the ousted CEO, while more than 500 of the company’s 700 employees declared they would quit if he was not reinstated. According to Toner, staff were presented with the false dichotomy that unless Altman returned “immediately, without accountability, [and with a] totally new board of his choosing,” OpenAI would be destroyed.
“I understand why not wanting to see the company destroyed caused a lot of people to fall in line, whether because they were about to make a lot of money from an upcoming tender offer, or just because they loved their team, didn’t want to lose their jobs, and cared about the work they did,” Toner said. “And of course a lot of people didn’t want the company to fall apart, including us.”
She also claimed that fear of retaliation for opposing Altman may have contributed to the support he received from OpenAI staff.
“They had seen him take revenge on people, take revenge on them, for previous instances of criticism,” Toner said. “They were really afraid of what might happen to them. So when some employees started saying, ‘Wait, I don’t want the company to fall apart, let’s bring Sam back,’ it was very difficult for the people who had had terrible experiences to actually say so, out of fear that if Sam were to remain in power, as he ultimately did, it would make their lives miserable.”
Finally, Toner noted Altman’s turbulent work history, which first came to light after his failed ouster from OpenAI. Referring to reports that Altman was pushed out of his previous role at Y Combinator due to perceived self-interest, Toner claimed that OpenAI was far from the only company to have had these problems with him.
“And then the management team at his job before that — which was his only other job in Silicon Valley, his startup Loopt — apparently went to the board twice and asked the board to fire him for what they called ‘deceptive and chaotic behavior,’” Toner continued.
“If you look at his track record, he doesn’t exactly have a glowing trail of references. This wasn’t a problem specific to the personalities on the board, as much as he would like to portray it that way.”
Toner and McCauley are far from the only OpenAI alumni who have expressed doubts about Altman’s leadership. Safety lead Jan Leike resigned earlier this month, citing disagreements with management’s priorities and arguing that OpenAI should focus more on issues of security, safety, and societal impact. (Chief scientist and former board member Sutskever also resigned, though he cited a desire to work on a personal project.)
In response, Altman and President Greg Brockman defended OpenAI’s approach to safety. The company also announced this week that Altman would lead OpenAI’s new safety and security committee. Leike, meanwhile, has joined Anthropic.