
OpenAI Limits GPT-5.5 Cyber Access Despite Previous Criticism of Anthropic’s Gating

OpenAI CEO Sam Altman confirmed that the company’s new cybersecurity-specific model will only be available to verified defenders through a vetted application process.

By Editorial Desk · Updated May 5, 2026
Source context

Primary source: TechCrunch AI. Full source links and update notes are below.

Fast summary

  • OpenAI is restricting GPT-5.5 Cyber to verified security professionals to prevent the tool's misuse for offensive operations.
  • Sam Altman previously criticized rival Anthropic for 'fear-based marketing' when it implemented similar restrictions for its Mythos tool.
  • The access program, Trusted Access for Cyber (TAC), involves a tiered verification process for individuals and organizations protecting critical software.

What happened

OpenAI is adopting a gated release strategy for its specialized cybersecurity model, GPT-5.5 Cyber, limiting its use to 'critical cyber defenders.' The decision is a departure from previous rhetoric by CEO Sam Altman, who had criticized competitor Anthropic for employing similar restrictions. Altman confirmed on Thursday that the tool would begin rolling out to a vetted group of users in the coming days.

What's new in this update

OpenAI provided fresh details on its Trusted Access for Cyber (TAC) program, which manages these permissions. A company spokesperson confirmed the program has already scaled to 'thousands of verified defenders' and hundreds of teams. The program is tiered, offering access to 'cyber-permissive' models such as GPT-5.4 Cyber and the upcoming GPT-5.5 Cyber to users who demonstrate legitimate defensive use cases.

Key details

The Cyber model is specifically designed for high-stakes tasks including penetration testing, vulnerability identification and exploitation, and malware reverse engineering. To gain access, users must submit credentials and a planned use case via an application on OpenAI’s website. By vetting users, OpenAI aims to provide a version of the model that operates with less 'friction' from standard safety filters that might otherwise block legitimate security research.

Background and context

The move is notable because Altman had previously described Anthropic's decision to gate its Mythos security tool as 'fear-based marketing.' Despite that initial criticism, both companies have now arrived at a restricted-access model to prevent their tools from being co-opted by bad actors. Critics of gating had argued the risk rhetoric was overblown, though an unauthorized group reportedly gained access to Anthropic's Mythos despite the safeguards.

What to watch next

OpenAI states it is currently consulting with the U.S. government to identify more users with legitimate credentials as it attempts to make Cyber more widely available. The industry will be watching to see if these restricted models remain secure from unauthorized access and whether the TAC program can effectively scale without enabling offensive use of the technology.

Why it matters

This move reflects the ongoing tension between releasing powerful AI tools for defensive research and the risk that these same capabilities could be used to automate cyberattacks.




Sources and methodology

Tags: GPT-5.5 Cyber · Sam Altman · Mythos AI · Vulnerability Research · AI Safety