Is AGI Upon Us?
According to most definitions, Artificial General Intelligence, or AGI, is the point at which AI becomes a generalist: able to reason, learn, and act across a wide range of tasks at or above human level. If you've used Claude Code or Claude Cowork, you might think AGI is already here, and many very smart people think just that.
But what makes Mythos so striking is that its cybersecurity capabilities were apparently not specifically trained. They emerged from general improvements in reasoning. The model wasn't built to be a hacker. It just got smart enough to figure out hacking on its own. And for this reason, Anthropic has decided for now not to release it publicly—instead rolling it out to a hand-picked group of about 40 companies, including Amazon, Apple, Google, and Microsoft, through an initiative called Project Glasswing.
Days later, OpenAI made a similar move: restricting its own advanced cybersecurity AI to a small group of vetted partners. OpenAI has also renamed its product organization to "AGI Deployment." That's not subtle.
These are not gradual improvements. They are huge leaps. And multiple independent observers, from tech analysts to AI researchers, have described these forthcoming models in terms that put them squarely in the vicinity of what the industry has been calling AGI.
Imagine thousands of data centers full of Einsteins working day and night. That's AGI.