AMD, Marvell, Intel: Which Is The Next Multi-Trillion Chip Stock


On Monday, AMD (NASDAQ:AMD) made news by entering into a significant agreement with OpenAI, the developer of ChatGPT, to provide tens of thousands of its GPU chips for 6 gigawatts of computing power over the next five years. The first gigawatt, built on AMD's next-generation Instinct MI450 chips, will be delivered in the latter half of 2026. This contract is one of the largest individual chip acquisitions ever made in the AI industry and highlights OpenAI's efforts to diversify its hardware supply chain beyond the industry leader Nvidia (NASDAQ:NVDA), which commands over 75% of the AI computing market. This strategic shift could represent a pivotal moment, not only for AMD but also for the ongoing expansion of the AI computing sector, opening avenues for emerging leaders in the AI semiconductor arena.

The next phase of the AI computing race will no longer be about just training large language models – it's increasingly about inference, or putting AI to work in the real world. Training is akin to teaching an AI model everything it knows, while inference is when that knowledge is applied – answering questions, generating text, or running chatbots millions of times a day. As AI applications scale to hundreds of millions of users, the demand for inference capacity is set to explode. Running these workloads efficiently has become the new bottleneck, forcing companies to rethink how and where they deploy compute. Nvidia has dominated training with its H100 and A100 GPUs, but inference could shift the equation. Once models are trained, they must run continuously across billions of devices, making energy efficiency, latency, and hardware availability crucial factors.

That being said, if you seek upside with less volatility than holding an individual stock like NVDA, consider the High Quality Portfolio. It has comfortably outperformed its benchmark – a combination of the S&P 500, Russell, and S&P MidCap indexes – and has achieved returns exceeding 91% since its inception. Why is that? As a group, HQ Portfolio stocks provided better returns with less risk versus the benchmark index; less of a roller-coaster ride, as evident in HQ Portfolio performance metrics.

A Multi-Trillion Opportunity

As AI matures, more of the value creation could shift from training toward inference. Morgan Stanley projects roughly $3 trillion will be invested in the AI build-out over the next three years – and it is likely that a significant portion will flow toward inference. To put this in perspective: Nvidia's market cap is currently around $4.5 trillion, fueled largely by its dominance in training. But inference could actually be the bigger market. In the coming years, inference could easily surpass training in both total revenue and total GPU units shipped, opening a new frontier in the AI chip race. This could favor companies that offer cheaper, more energy-efficient, and readily available chips, not just the most powerful ones. This shift opens the field to a broader range of players. So who are the potential winners here?

What The Inferencing Landscape Looks Like

Nvidia is likely to remain a leading player even as the AI market shifts. The company's deeply entrenched software ecosystem – including CUDA, TensorRT, and cuDNN – is likely to have locked in many early customers. It is also adapting its product line, focusing on powerful new chip architectures for inference and optimized inference software to meet growing workloads, from video generation to large-scale real-time AI applications. Combined with its strong partnerships across hyperscalers, AI startups, and enterprise customers, the company is likely to remain an AI leader, although its share of the AI compute market will very likely trend lower.


AMD has considerably lagged Nvidia in the early stages of the AI build-out, but it could emerge as a key challenger to Nvidia in inference. The OpenAI partnership further validates AMD's position as a serious inference player and signals that the company is steadily carving out a place as a preferred provider of scalable, efficient AI compute. AMD's chips are increasingly competitive on performance while offering cost and memory advantages. Not all organizations need, or can afford, Nvidia's top-tier GPUs. Many are likely to opt for older Nvidia chips or cost-effective alternatives like AMD's MI series, which offer solid performance for inference and fine-tuning of models.

Intel might seem like a wildcard in the AI chip race, but it has a credible shot at the inference market. Its broad portfolio – including CPUs, Habana Gaudi accelerators, and an extensive datacenter ecosystem – positions Intel to deliver energy-efficient, high-volume inference solutions. Intel's fabs could also benefit as AI inference drives demand for high volumes of lower- to mid-tier chips, which Intel may be well-positioned to produce for other companies. While Intel's process technology lags the likes of TSMC at the cutting edge for top-tier GPUs, it is probably more than capable of fabricating efficient, cost-effective inference chips.

ASICs, or application-specific integrated circuits, are increasingly gaining attention in the AI space. Unlike versatile, general-purpose GPUs, ASICs are engineered for a single task, making them far more cost- and energy-efficient for large-scale inference workloads. The crypto industry offers a clear precedent: Bitcoin mining initially relied on GPUs, but rapidly shifted to ASICs once scale and efficiency became paramount. A similar evolution could play out in AI inference. Two companies positioned to capitalize on this trend are Marvell and Broadcom, both with proven expertise in designing custom silicon for hyperscalers. Broadcom, in particular, has emerged as a standout, securing a $10 billion deal with OpenAI.

U.S. Big Tech firms such as Amazon (NASDAQ:AMZN), Alphabet (NASDAQ:GOOG), and Meta (NASDAQ:META) are all designing AI chips. Amazon has leaned into training-focused chips, Meta started with inference and is expanding to training, while Google supports both through its TPU (tensor processing unit) lineup. For these hyperscalers, the motivation is not necessarily to beat Nvidia in the market but to lower costs, gain supply control, and improve bargaining power within their massive cloud ecosystems. Over time, that means less incremental demand for Nvidia's GPUs. In its most recent quarter (Q2), Nvidia said that just two of its customers made up roughly 39% of total revenue – likely these big U.S. tech firms – making it vulnerable if they shift workloads to in-house silicon or lower-cost suppliers.

Chinese Internet Giants

In China, companies like Alibaba (NYSE:BABA), Baidu (NASDAQ:BIDU), and Huawei are stepping up their AI chip efforts. Recent reports suggested that Alibaba plans to launch a new inference chip for its cloud division. The strategy is twofold: to power inference at scale using its own tech stack, and to secure a steady supply of semiconductors amid U.S. export restrictions. For now, Nvidia's GPUs are still expected to form the backbone of Alibaba's AI training workloads, but inference is likely to become the longer-term growth driver for the company.

Beyond semiconductors, growing AI inference workloads will drive demand for supporting infrastructure. Fast, reliable, and intelligent networking will be critical to handle billions of requests daily. Companies like Arista Networks and Cisco are well-positioned to benefit, as ultra-low-latency, high-bandwidth interconnects become essential for moving massive datasets between servers, GPUs, and cloud nodes in real time.
