AI · 2 min read · Updated May 8, 2026 · Fact-check: reviewed

Barry Diller Says Trust in AI Leaders Is 'Irrelevant' Compared to Unknown Risks of AGI

Speaking at the WSJ Future of Everything conference, the billionaire media executive called for urgent guardrails as artificial general intelligence approaches.

By Editorial Desk
Source context

Primary source: TechCrunch AI. Full source links and update notes are below.

Fast summary


  • Barry Diller vouched for Sam Altman’s sincerity but warned that personal character cannot mitigate AI's unknown consequences.
  • Diller said that even AI's creators are surprised by their own developments, describing a "sense of wonder" and uncertainty among engineers.
  • The media mogul emphasized that if humans do not implement guardrails soon, AGI may eventually set its own limits.
[Image: Barry Diller speaking at a conference about AI and Sam Altman]

What happened

Billionaire media executive Barry Diller stated that personal trust in AI leadership is "irrelevant" in the face of artificial general intelligence (AGI). Speaking at The Wall Street Journal’s "Future of Everything" conference, the chairman of IAC and Expedia Group argued that the potential surprises resulting from AI development exceed the control of any individual leader, regardless of their intentions.

What's new in this update

Diller specifically addressed the character of OpenAI CEO Sam Altman following recent reports questioning Altman's transparency. While Diller defended Altman as a "decent person with good values" and a sincere steward, he cautioned that the speed of progress toward AGI makes individual integrity a secondary concern to the "great unknown" of the technology itself. He noted that the consequences of AI are currently unpredictable even to those building the systems.

Key details

Diller observed a "sense of wonder" among AI creators that stems from a lack of certainty about how these systems evolve. He emphasized that the industry is approaching AGI "quicker and quicker," making it difficult to rely on past management models. Diller, a co-founder of Fox Broadcasting, noted he is not personally invested in these technologies, which he says allows him to focus on the inevitability of progress rather than financial returns.

Background and context

The debate over AI safety has recently centered on the leadership of major firms like OpenAI. Sam Altman has faced criticism from former board members and colleagues regarding his management style, with some alleging manipulative behavior. Diller's comments come at a time when the tech industry is divided between those advocating for rapid acceleration and those demanding more rigorous safety protocols and external oversight.

What to watch next

The focus remains on the implementation of formal guardrails for AI development. Diller warned that if human-led regulations are not established quickly, an "AGI force" might eventually create its own boundaries. He suggested that once the technology is unleashed without proper constraints, there will be "no going back," placing the burden on current leaders to establish limits before AGI reaches its full potential.

Why it matters

Diller's perspective shifts the debate from the personal integrity of AI leaders to the systemic and unpredictable risks inherent in achieving artificial general intelligence.




Sources and methodology

Topics: Barry Diller · Sam Altman · Artificial General Intelligence · Guardrails · WSJ Future of Everything