In a series of shocking revelations, former OpenAI board member Helen Toner has provided the first detailed account of the events that led to the brief ousting of CEO Sam Altman in November 2023. Toner painted a picture of a manipulative executive who fostered a “toxic atmosphere” within the company.
Toner, who resigned from OpenAI’s board less than two weeks after Altman’s reinstatement, made these explosive claims during an interview on “The Ted AI Show” podcast, which aired on Tuesday.
Reports of “Psychological Abuse” Prompted the Board to Take Action
According to Toner, one of the catalysts that led to Altman’s removal as the head of OpenAI was that employees started to report incidents involving “psychological abuse” to the board.
“They were really serious, to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about,” Toner stressed.
She alleged that company executives told the board that they “didn’t think he [Altman] was the right person to lead the company to AGI” and that “they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues.”
Toner portrayed Altman as someone who had repeatedly lied to the board and withheld crucial information, making it “basically impossible” for the board to fulfill its oversight responsibilities effectively.
“Sam had made it really difficult for the board to actually do that job by, you know, withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she said.
One notable example Toner cited was Altman’s failure to inform the board that he owned the OpenAI Startup Fund, despite claiming to be an independent board member with no financial interest in the company. “That really damaged our ability to trust him,” Toner stated.
ChatGPT Was Released Without Informing the Board
Toner also revealed that the OpenAI board was unaware of ChatGPT’s existence until it became a trending topic on social media.
According to Toner, this pattern of behavior reflects the secrecy with which Altman operated, allegedly allowing him to shield activities from scrutiny by the board.
In November 2023, the internet was shocked to learn that Altman – widely regarded as one of the leading figures in artificial intelligence – had been ousted from his leadership position at the company.
The news triggered a wave of comments, criticism, and calls for his prompt reinstatement across social media platforms like X (formerly Twitter).
The board, according to Toner, decided it was time to “pull out all the stops” on Altman to prevent this toxic culture from spreading further across the company.
Toner described how the board proceeded with the ousting: “We were very careful, very deliberate about who we told, which was essentially almost no one in advance, other than obviously our legal team, and so that’s kind of what took us to November 17.”
However, in a surprising twist, the board’s decision to oust Altman was overturned, and significant changes were made to the company’s corporate governance structure shortly afterward.
Altman was soon reinstated after a large group of employees threatened to resign, pressuring the board to bring him back. Rumors abounded that he and most OpenAI employees would move to Microsoft if he wasn’t reinstated, helping to force the board’s hand.
Toner attributed Altman’s swift return to the fact that many employees were told that the company would collapse without him at the helm. Additionally, she alleged that once Altman’s potential reinstatement seemed likely, employees feared retaliation from him if they did not support his return.
Chairman of OpenAI’s Board Cites External Review to Discredit Toner’s Allegations
When reached for comment, OpenAI referred to a statement from the current Chairman of the Board, Bret Taylor, that addressed the allegations made by Toner.
Taylor stated that an independent review conducted by the law firm WilmerHale concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.
Taylor expressed disappointment that Toner “continues to revisit these issues” and emphasized that over 95% of employees, including senior leadership, had requested Altman’s reinstatement and the resignation of the previous board.
Implications of Toner’s Comments
❗EXCLUSIVE: "We learned about ChatGPT on Twitter."
What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. Hear the exclusive, untold story on The TED AI Show.
Here's just a sneak peek: pic.twitter.com/7hXHcZTP9e
— Bilawal Sidhu (@bilawalsidhu) May 28, 2024
Toner’s revelations shine a bright light on OpenAI’s unusual internal dynamics and raise questions among regulators and investors about the company’s commitment to transparency and accountability. If the company’s CEO couldn’t even tell the board about the launch of its flagship product, ChatGPT, what else might be kept hidden?
As a pioneer in the development of advanced artificial intelligence systems, OpenAI’s actions and leadership have far-reaching implications for the public’s trust in the responsible development of these powerful technologies.
Regulators and lawmakers may scrutinize OpenAI more closely in the wake of these revelations, potentially fueling increased calls for oversight and regulation of the AI industry.
Toner and Another Former OpenAI Board Member Seek to Spark a Debate Over AI Governance
Toner’s allegations may reignite the debate over the need for robust governance and oversight of AI development, particularly regarding the role of private companies in shaping the future of this transformative technology.
In a recent article for The Economist, Toner and another former OpenAI board member, Tasha McCauley, expressed doubts about the ability of corporate self-governance to adequately constrain AI development and prioritize the public interest over profit motives.
“Society must not let the roll-out of AI be controlled solely by private tech companies,” they wrote, advocating for increased government regulation to ensure the responsible development of artificial intelligence.
Their concerns echo those of industry leaders like Elon Musk, who has sued OpenAI for allegedly abandoning its original mission of developing AI for the benefit of humanity. Somewhat ironically, however, Musk recently raised $6 billion for his own AI startup, xAI.
The Need for Transparency and Accountability
As the AI revolution continues to accelerate, Toner’s revelations showcase the need for greater transparency and accountability within the companies that are actively spearheading this technological transformation.
While OpenAI has announced the creation of a new safety and security committee, reportedly tasked with evaluating the company’s existing safety procedures, the fact that it is led by Altman calls its independence and efficacy into question.
Ultimately, the public’s trust in the responsible development of AI will hinge on the industry’s willingness to embrace transparency, prioritize ethical principles, and submit itself to appropriate oversight and regulation.
As Toner’s explosive claims continue to send shockwaves across the internet, it remains to be seen whether OpenAI and other industry leaders will take the steps needed to address the legitimate concerns of those working to ensure that artificial intelligence remains a force for good that benefits humanity.