What if the biggest weapon in America's AI competition toolkit has been quietly training its main adversary all along? That question stopped being theoretical today.
DeepSeek's V4 release — exactly one year after its original "Sputnik moment" rattled Silicon Valley — isn't just another benchmark-beating model from a scrappy Chinese lab. The headline numbers are genuinely striking: V4-Pro packs 1.6 trillion parameters, a one-million-token context window, and benchmark scores that match or beat GPT-5 and Claude 4 Opus, all trained for $5.6 million — a figure that doubles V3's already-shocking efficiency. But the detail buried beneath those numbers is the one that should alarm Washington: DeepSeek did not give Nvidia or AMD early access to V4's model weights. It gave them to Huawei.
The real issue: by cutting Chinese labs off from the world's best chips, US export controls forced them to become the world's best at building AI that doesn't need those chips.
Jensen Huang understood the stakes perfectly. Speaking on the Dwarkesh Podcast earlier this month, he said: "The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation." That day arrived this morning. DeepSeek didn't merely optimise V4 to run on Huawei's Ascend 950PR processor — it locked Nvidia out of the early-access window that Western chipmakers have historically relied on to tune performance. The message is not subtle. The co-development feedback loop that US policymakers most feared — China's best AI software engineers and its domestic semiconductor industry reinforcing each other's capabilities — is now running in production.
Defenders of the export control strategy will argue it is still working: DeepSeek trained on stockpiled Hopper-era GPUs, the Ascend 950PR is not yet truly competitive with Blackwell silicon at scale, and friction and cost matter even when they are not insurmountable. That is not wrong as far as it goes. But it fundamentally misreads the trajectory. Each successive DeepSeek release has reached a higher capability level on less compute: V4-Pro cost $5.6 million; V3 cost more; V2 more still. That trend line does not converge on a floor. It converges on independence. And markets are reading it that way: Alibaba, ByteDance, and Tencent have already placed bulk orders for Huawei's Ascend 950PR chips, pushing prices up 20% within weeks. That is not desperation hedging. That is conviction. When China's largest AI consumers make that kind of hardware commitment, Huawei has every incentive to invest aggressively in the next generation, and the generation after that.
What is unfolding is a self-reinforcing loop: better AI software optimised for domestic chips makes those chips more capable, which attracts more AI workloads, which funds more chip development. This is precisely the dynamic US policy was designed to prevent. The CUDA software ecosystem — Nvidia's true moat, the de facto language of AI computation — now faces a rival built not despite American pressure but because of it.
None of this means the US is losing the AI race outright. American labs still hold the frontier on most measures, open-source models continue to flow freely, and whether Huawei's stack can match Nvidia's at full production scale remains genuinely unresolved. But the original theory behind chip export controls was never just to stay ahead. It was to prevent China from developing an independent, self-sufficient AI capability stack. On today's evidence, that goal has failed. The question now is whether Washington can honestly reckon with that failure and rethink the strategy, or whether it will double down on controls that, one year after DeepSeek's first shock, have delivered precisely what they were designed to prevent. I wouldn't hold my breath.