
OpenAI may be rethinking its reliance on Nvidia ($NVDA). According to Reuters, which cited multiple sources familiar with the situation, the ChatGPT maker has been “unsatisfied” with the inference performance of Nvidia’s latest AI chips and has been quietly exploring alternatives since last year.
This comes just after reports that Nvidia’s massive planned investment in OpenAI had stalled, adding a little extra drama to what was already one of the most important partnerships in AI.
It’s Not Training. It’s Thinking.
The issue reportedly centers on inference, the stage where AI models generate answers and “think” in real time. While Nvidia’s chips are widely seen as best-in-class for training giant models, inference is a different beast. It demands efficiency, speed, and cost control at massive scale.
If OpenAI feels Nvidia’s latest hardware is not hitting the mark here, that is a big deal. Inference is where long-term profits in AI will likely be made, since it is tied directly to user activity and recurring workloads.
Nvidia CEO Jensen Huang has publicly maintained a positive tone, saying Nvidia would still participate in OpenAI’s upcoming funding round. But behind the scenes, OpenAI appears to be hedging.
The Backup Plan Is Already in Motion
OpenAI is not starting from scratch. Last year, it struck major deals with Advanced Micro Devices ($AMD) that could be worth tens of billions of dollars in revenue for the chipmaker. It also partnered with Broadcom ($AVGO) to develop and deploy 10 gigawatts’ worth of custom AI accelerators.
That paints a picture of diversification, not a clean breakup. OpenAI still needs Nvidia, but it also wants leverage and optionality as AI infrastructure spending explodes.
From the outside, it has the vibe of a classic tech industry relationship shift. Public smiles, private shopping around.