October 11, 2025

    The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission



    Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that’s ever claimed to be different.

    I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

    But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

    The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu, Mario, and Cartman of “South Park”; and of dead celebrities like Tupac Shakur.

    Asked what drove OpenAI’s decision to launch this newest version of Sora with these characters, Lehane gave me the standard pitch: Sora is a “general purpose technology” like electricity or the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

    What he danced around is that OpenAI initially “let” rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it “evolved” toward an opt-in model. That’s not really iterating. That’s testing how much you can get away with. (And by the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

    Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, that American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.


    Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about that interview instead of reading my piece on TechCrunch. “It’s ‘iterative’,” I said, “but it’s also a replacement.”

    For the first time, Lehane dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, in short.)

    Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to the advent of electricity – saying those who got it last are still playing catch-up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically challenged places as sites for facilities with massive appetites for water and electricity.

    Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane pivoted to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year, plus 33 nuclear facilities. If democracies want democratic AI, they have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a re-industrialized America with transformed power grids.

    It was inspiring. But it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is the most energy-intensive AI out there.)

    Which brought me to my most uncomfortable example. Zelda Williams spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

    When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?”

    Lehane showed vulnerability in some moments, saying that he wakes up at 3 a.m. every night, worried about democratization, geopolitics, and infrastructure. “There’s enormous responsibilities that come with this.”

    Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then Friday happened.

    Nathan Calvin, a lawyer who works on AI policy at the nonprofit advocacy organization Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his house in Washington, D.C., during dinner to serve him a subpoena. The company wanted his private messages with California legislators, college students, and former OpenAI employees.

    Calvin is accusing OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53, an AI safety bill. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. Calvin says he fought OpenAI’s opposition to the bill, and that when he saw the company claim it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane specifically the “master of the political dark arts.”

    In Washington, that might be a compliment. At a company like OpenAI whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

    What matters much more is that even OpenAI’s own people are conflicted about what they’re becoming.

    As my colleague Max reported last week, a number of current and former employees took to social media to express their misgivings after Sora 2 was released. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

    On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

    That’s . . . something. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

    It’s a crystallizing moment. You can be the best political operative in tech, a master at navigating impossible situations, and still end up working for a company whose actions increasingly conflict with its stated values – contradictions that may only intensify as OpenAI races toward artificial general intelligence.

    It has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.


