
When the CEO of the world’s most valuable public company speaks for 90 minutes, Wall Street listens closely. Nvidia (NVDA) CEO Jensen Huang’s CES keynote and follow-up Q&A landed as a confidence check on timelines, scale, and where the next leg of AI growth is coming from.
Vera Rubin confidence check
The biggest takeaway was confirmation that Nvidia’s next-generation Vera Rubin GPUs are already in full production, easing lingering worries about delays after the Blackwell rollout. Management emphasized major manufacturing improvements at the system level, including drastically faster board assembly times and early deployment of full racks.
The message to investors was simple: the Rubin timeline remains intact for the second half of 2026, and revenue should become meaningful not long after launch. With the stock still below the highs it reached on earlier capex commentary, the tone suggested room for enthusiasm to rebuild as those numbers start to flow through later this year.
Physical AI becomes the next growth engine
Beyond data center GPUs, Nvidia leaned hard into what it’s calling “physical AI.” That includes autonomous vehicles, robotics, simulation, and edge computing. The company framed itself as uniquely positioned across the full stack, from training in data centers to simulation via Omniverse to deployment through edge platforms like Jetson and automotive systems.
Autonomous vehicles were repeatedly highlighted as a long-term opportunity, with management pointing to a revenue potential well north of $10 billion by the end of the decade. The broader implication is that AI growth is no longer just about training large models, but about deploying intelligence into real-world systems at scale.
Inference, scale, and the chip wars
Another clear theme was inference. Nvidia introduced new infrastructure aimed at handling massive context windows more efficiently, effectively opening a new addressable market tied to inference workloads. The pitch was that tight integration with the Rubin platform should drive fast customer adoption.
At the same time, Nvidia continued to defend the GPU-first approach against custom AI chips. While acknowledging that alternative chips can offer cost advantages at scale, management argued that performance, ecosystem depth, and supply-chain leverage still tilt the balance in Nvidia’s favor. The company also underscored its scale advantage in procuring memory and supporting the broader AI hardware ecosystem.
The bottom line
There were no shock announcements, but that was the point. The CES appearance reinforced Nvidia’s roadmap, validated production timelines, and framed physical AI as the next major growth vector. For Wall Street, it read less like hype and more like a steady signal that the AI buildout still has plenty of runway left.