Every morning during the Elon Musk–Sam Altman trial, before the jury files in, lawyers for both sides hold their daily squabble session with Judge Yvonne Gonzalez Rogers. These are some of the most sharp-elbowed and revealing portions of the proceedings, as the attorneys spar with the judge over what is or isn’t fair game to discuss in front of the jury.
One morning the topic of Armageddon came up.
Musk’s attorney Steven Molo had been leaning into the existential dangers of artificial intelligence in his questioning of his client and planned to keep going. Developing the technology safely and for the benefit of humanity was, after all, at the core of OpenAI’s nonprofit mission and the very thing Musk’s lawsuit claims the company has abandoned.
OpenAI’s lawyers weren’t having it, and Judge Gonzalez Rogers was increasingly irritated by all the sci-fi being thrown around. Molo responded by loudly pleading his case: AI risk, he argued, was the most important topic at hand.
“This is a real risk. We all could die as a result of artificial intelligence,” the lanky Chicagoan implored, leaning into the microphone.
William Savitt, OpenAI’s wry and understated attorney, pushed back on the relevance, and the sides began talking over each other until the judge finally cut them off.
“Stop! Both of you!” Gonzalez Rogers said, a silence settling over the teams of lawyers. She then turned to Molo.
“It is ironic that your client, despite these risks, is creating a company that is in the exact space,” Gonzalez Rogers said. She was referring to Musk’s OpenAI competitor, xAI. “I suspect there are plenty of people who wouldn’t like to put the future of humanity in Mr. Musk’s hands.”
She went on, now addressing both sides: “This is not a trial on the safety risks of AI; this is not a trial on whether AI has damaged humanity. It could be that one day in a federal court in this country we may have that trial.”