Saturday, April 18, 2026 — A busy morning in AI: Anthropic's most powerful model yet hit the market, Stanford dropped its annual reality-check report, and the industry's biggest rivals quietly joined hands to fight a common enemy.

Anthropic Ships Claude Opus 4.7 — and It's Already Leading the Pack

Anthropic released Claude Opus 4.7 early this morning, and the benchmark numbers are hard to ignore. The model tops every major coding leaderboard and, along with Google's Gemini 3.1 Pro, is among the first generation to crack the 50% threshold on Humanity's Last Exam, a benchmark built specifically to stump current frontier models. Anthropic says the number of customers paying it $1 million or more annually has more than doubled since February, when the count stood at roughly 500. For context, the company's still-unreleased Claude Mythos Preview reportedly scores 93.9% on SWE-bench Verified and has identified thousands of zero-day vulnerabilities across every major operating system and browser, a hint of what comes next. Source: LLM Stats

Stanford's AI Index 2026: Power Hungry, Profit Driven, and Pulling Away

The Stanford Institute for Human-Centered AI published its annual AI Index Report today, and the headline numbers tell a story of breakneck growth with real-world costs. Global AI data center capacity now draws 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. AI companies are generating revenue faster than any previous technology wave, but they're spending hundreds of billions of dollars to do it. Perhaps most striking: three-quarters of AI's economic gains are being captured by just 20% of companies, according to a concurrent PwC study, with top performers using AI to drive growth rather than mere productivity gains. A separate Nature study released this week added a dose of humility: human scientists still decisively outperform the best AI agents on genuinely complex, open-ended research tasks. Source: IEEE Spectrum | Source: MIT Technology Review

OpenAI, Anthropic, and Google Form an Unlikely Alliance Against Model Piracy

In a development that would have seemed unthinkable two years ago, OpenAI, Anthropic, and Google have begun sharing intelligence through the Frontier Model Forum to combat so-called adversarial distillation: the practice of systematically querying frontier models to extract their capabilities and train cheaper knockoffs, a technique increasingly linked to Chinese AI competitors. The collaboration is a tacit acknowledgment that competitive moats are eroding faster than anyone expected, and that terms-of-service enforcement alone isn't enough. Meanwhile, Meta is pouring fuel on the fire from the other direction: the company's new Muse Spark model (internally code-named Avocado), its first major release since the roughly $14 billion Scale AI deal that brought Alexandr Wang aboard, arrives as Meta projects AI-related capital expenditures of $115–135 billion for 2026, nearly double last year's spend. Source: Bloomberg | Source: CNBC
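For readers unfamiliar with the mechanics, adversarial distillation follows a simple pipeline: query the expensive "teacher" model at scale, harvest its outputs as labels, and fit a cheaper "student" to imitate them. The toy sketch below illustrates only that three-step shape; the teacher here is a stand-in function (no real model or API), and the "student" is a lookup table rather than an actual fine-tuned model.

```python
# Toy sketch of the distillation pipeline described above. Everything here is
# a stand-in: teacher_model is not a real API, and the student is a dict.

def teacher_model(prompt: str) -> str:
    # Stand-in for a frontier model endpoint; a real attack would pay per query.
    # Its toy "capability" is uppercasing text.
    return prompt.upper()

def harvest(prompts):
    # Step 1: systematically query the teacher to build a labeled dataset
    # of (prompt, response) pairs.
    return [(p, teacher_model(p)) for p in prompts]

def train_student(dataset):
    # Step 2: fit a cheap student on the harvested pairs. Here "training"
    # is just memorization; a real attack would fine-tune a smaller LLM.
    return dict(dataset)

def student_model(student, prompt: str) -> str:
    # Step 3: the knockoff reproduces the teacher's behavior on inputs
    # it was distilled on, at no further cost to the attacker.
    return student.get(prompt, "")

if __name__ == "__main__":
    data = harvest(["hello", "world"])
    student = train_student(data)
    print(student_model(student, "hello"))
```

The defensive problem the Frontier Model Forum members face is visible even in this caricature: the harvesting step looks like ordinary API traffic, which is why detection requires sharing query-pattern intelligence rather than relying on terms-of-service enforcement alone.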

What to Watch

Keep an eye on Claude Mythos — if Anthropic's unreleased preview model performs as reported, its public launch could reset expectations for what AI can do in security and software engineering. On the policy front, the OpenAI-Anthropic-Google coalition signals that regulatory pressure isn't the only force shaping the frontier; competitive self-interest is now driving cooperation too. And as Stanford's data makes clear, the gap between the companies capturing AI's upside and everyone else is widening fast — expect that inequality narrative to dominate the next wave of policy debates.