OAKLAND, CALIF. — Former OpenAI board member Helen Toner shed new light on just how close the company came to a shotgun merger with arch-rival Anthropic in the chaotic aftermath of CEO Sam Altman's firing in November 2023, calling such a deal an "extremely risky move."

In video testimony played Thursday in federal court in Oakland, Calif., for Elon Musk’s bombshell lawsuit against OpenAI, Toner recounted a late Sunday night call with Anthropic board members including the firm’s CEO Dario Amodei to discuss the merger proposal — under which Amodei would have ended up helming OpenAI, too.

“I thought it was an option worth considering among our set of difficult options,” Toner said, adding that OpenAI’s entire executive team was threatening at the time to resign over Altman’s ouster. That made her view the prospect of a merger, which never came to pass, as a perilous one.

OpenAI’s head of technology Mira Murati had been expected to step in to stabilize the firm but soon became “unwilling” to serve as interim CEO, according to Toner. That forced the board to consider more extreme options to “salvage the company,” she added.

In previously shared testimony, Toner described Murati as two-faced about Altman’s firing.

“She was waiting to see which way the wind would blow,” Toner said in comments played in court Wednesday, adding that Murati was “not willing to stick her neck out” and implied she was worried about “blowback for her career.” 

The Post has sought comment from Murati.

Tasha McCauley, another former OpenAI board member who testified via video Thursday, said Murati got cold feet about running the company after employees revolted over the board’s decision to fire Altman. McCauley said Altman and fellow co-founder Ilya Sutskever had gotten the rank-and-file on their side by saying an “evil coup” was occurring, sparking fear throughout the company.

Some 700 of the company’s 800 employees at the time signed a letter denouncing Altman’s ouster, according to McCauley. OpenAI’s President Greg Brockman resigned in protest over the move, though both execs later came back.

The details came in the second week of the bombshell trial in which Musk has accused Altman and Brockman of betraying the company’s founding contract by putting commercial gain over creating AI for the benefit of humanity. The ChatGPT maker has called the allegations baseless, claiming Musk actually supported its transformation into a for-profit business.

The world’s richest man is seeking up to $180 billion in damages and a court order for OpenAI to unwind its for-profit status. Musk also wants Altman booted from the board.

In her video testimony, McCauley said that Murati had been communicating with Microsoft CEO Satya Nadella, whose preference was for “things to go back to the way they were.” 

McCauley was the latest witness to slam Altman’s character, saying he habitually lied and engaged in “dishonest or problematic behavior.” Every few months, there were crisis events “mostly stemming from Sam’s behavior,” McCauley testified. 

She discussed screenshots of messages showing that Altman communicated to another employee that OpenAI’s legal department had said a version of ChatGPT under development didn’t need to go through a safety review — though the legal department had not in fact said that, according to McCauley.  

The incident caused further concern that the board “would not be able to oversee the for-profit and that could impair safety,” McCauley said. 

In cross-examination of McCauley, OpenAI lawyer William Savitt said the version of ChatGPT mentioned in the exchange was just an extension of a previous ChatGPT product that had gone through a safety review.

McCauley’s criticism echoed testimony from ex-OpenAI employees like Murati, who testified earlier this week that Altman was an untrustworthy leader who fomented strife among the company’s top ranks.

McCauley said managers began mimicking Altman’s dishonest behavior, creating a “toxic culture.”  

Also testifying Thursday was Rosie Campbell, a former policy researcher at OpenAI, who took aim at safety issues. She said the company disbanded top AI safety teams as it drifted away from its non-profit mission. 
