If someone ever makes an HBO Max series about the AI industry, the events of this week will make quite the episode.

On Wednesday, OpenAI’s CEO of applications, Fidji Simo, announced the company had rehired Barret Zoph and Luke Metz, cofounders of Mira Murati’s AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI in late 2024.

We reported last night on two narratives forming around what led to the departures, and have since learned new information.

A source with direct knowledge says that Thinking Machines leadership believed Zoph had engaged in serious misconduct while at the company last year. That incident broke Murati’s trust, the source says, and disrupted the pair’s working relationship. The source also alleged that Murati fired Zoph on Wednesday, before knowing he was going to OpenAI, over what the company claimed were issues that arose after the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph has not responded to several requests for comment from WIRED.)

Meanwhile, in a Wednesday memo to employees, Simo claimed the hires had been in the works for weeks and that Zoph told Murati on Monday, before he was fired, that he was considering leaving Thinking Machines. Simo also told employees that OpenAI doesn’t share Thinking Machines’ concerns about Zoph’s ethics.

Alongside Zoph and Metz, another former OpenAI researcher who was working at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT maker, per Simo’s announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was first to report the additional hires.

A separate source familiar with the matter pushed back on the perception that the recent personnel changes were wholly related to Zoph. “This has been part of a long discussion at Thinking Machines. There were discussions and misalignment on what the company wanted to build—it was about the product, the technology, and the future.”

Thinking Machines Lab and OpenAI declined to comment.

In the aftermath of these events, we’ve been hearing from several researchers at leading AI labs who say they are exhausted by the constant drama in their industry. This week’s incident is reminiscent of OpenAI’s brief ouster of Sam Altman in 2023, known inside OpenAI as “the blip.” Murati played a key role in that event as the company’s then chief technology officer, according to reporting from The Wall Street Journal.

In the years since Altman’s ouster, the drama in the AI industry has continued, with departures of cofounders at several major AI labs, including xAI’s Igor Babuschkin, Safe Superintelligence’s Daniel Gross, and Meta’s Yann LeCun (he did found Facebook’s longstanding AI lab, FAIR, after all).

Some might argue the attention is justified for a nascent industry whose expenditures are contributing to America’s GDP growth. Also, if you buy into the idea that one of these researchers might achieve a breakthrough or two on the path to AGI, it’s probably worth tracking where they’re going.

That said, many researchers entered the field before ChatGPT’s breakout success and appear surprised that their industry is now the subject of nearly constant scrutiny.

As long as researchers can keep raising billion-dollar seed rounds on a whim, we’re guessing the AI industry’s power shake-ups will continue apace. HBO Max writers, lock in.

How AI Labs Are Training Agents to Do Your Job

People in Silicon Valley have been musing about AI displacing jobs for decades. In the past few months, however, the efforts to actually get AI to do economically valuable work have become far more sophisticated.

AI labs are getting smarter about the data they use to create AI agents. Last week, WIRED reported that OpenAI has been asking third-party contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI’s agents. The companies ask contractors to scrub these documents of any confidential data and personally identifying information. While it’s possible some corporate secrets or names slip through, that’s likely not what OpenAI is after (though the company could get in serious trouble if that happens, experts say).


