Campbell Brown’s Forum AI Targets ‘Murky and Nuanced’ Inaccuracies in Foundation Models
The former Meta news chief is building a system of expert-led AI judges to solve the accuracy problem in high-stakes information funnels.
Primary source: TechCrunch AI.
Fast summary
- Forum AI recruits global experts, including Tony Blinken and Niall Ferguson, to design benchmarks for complex, non-binary topics.
- Internal evaluations found leading models frequently suffer from political bias and lack of context in geopolitical reporting.
- The startup is betting that enterprise demand for liability protection will drive the market for more rigorous AI audits.

What happened
Campbell Brown, the former head of news at Meta, detailed her new venture, Forum AI, at a recent StrictlyVC event in San Francisco. The startup evaluates how leading foundation models perform on "high-stakes topics" including geopolitics, mental health, and finance—areas where Brown argues that objective "yes-or-no" answers are often unavailable and nuance is required.
What's new in this update
Brown revealed a high-profile roster of experts assisting Forum AI in designing its benchmarks. The list includes former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, Niall Ferguson, and Fareed Zakaria. These experts help build the frameworks that Forum AI then uses to train automated AI judges, with a target of 90% consensus between the machine evaluations and the human experts' assessments.
Key details
Initial evaluations by the startup found that major models, including Google’s Gemini, often exhibit systematic failures such as sourcing information from irrelevant foreign government sites or maintaining a consistent left-leaning political bias. Brown noted that while foundation model companies focus heavily on math and coding performance, they frequently struggle with the complexities of news and information, often missing critical context or straw-manning opposing viewpoints.
Background and context
Founded 17 months ago in New York, Forum AI was born out of Brown’s observations at Meta during the release of ChatGPT. She expressed concern that AI is becoming the primary funnel for information while remaining prone to inaccuracies. Drawing from her experience building the now-defunct fact-checking program at Facebook, she warned against repeating the mistakes of social media platforms that optimized for engagement rather than truth.
What to watch next
The startup is positioning itself to serve enterprises that require high accuracy for lending, insurance, and hiring decisions to avoid legal liability. However, Brown cautioned that the current regulatory landscape is inadequate, describing existing AI bias laws and standardized benchmarks as a "joke" that many companies treat as simple checkbox exercises rather than rigorous audits.
Why it matters
As AI models increasingly replace traditional search and news feeds, the lack of rigorous standards for accuracy in nuanced subjects threatens both public discourse and corporate compliance.