Gunning for Nvidia
Is Nvidia's dominance about to end?
Nvidia has been one of the hottest stocks YTD, single-handedly carrying the Nasdaq 100 and S&P 500. It has been the biggest beneficiary of the AI (Artificial Intelligence) boom. After all, its GPUs (Graphics Processing Units) are at the core of enabling that boom. There is plenty of talk going around about the stock being insanely expensive. Let’s look at some of the commonly followed valuation metrics for Nvidia over the trailing twelve months (TTM):
EV/Sales = 40.37
P/FCF = 82.4
Gross Margin = 78.4%
Operating Profit Margin = 64.9%
P/E = 73.93
Optically, these numbers would make most investors do a double take and start questioning whether Nvidia deserves such a valuation. In reality, these multiples are pedestrian once one considers Y-o-Y (Year-over-Year) EPS growth of 500%. Y-o-Y revenue growth for the last two quarters has also been above 260%.
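To put the growth-adjusted argument in numbers, here is a quick sketch of the PEG ratio (P/E divided by the EPS growth rate in percent) using the TTM figures quoted above. The 500% figure is the trailing Y-o-Y growth rate mentioned earlier, not a forward estimate:

```python
# Growth-adjusted valuation sketch using the TTM figures above.
# PEG = P/E divided by EPS growth rate (in percent); a PEG below 1.0
# is conventionally read as "cheap relative to growth".

pe_ratio = 73.93        # trailing P/E from the table above
eps_growth_pct = 500    # trailing Y-o-Y EPS growth, in percent

peg = pe_ratio / eps_growth_pct
print(f"PEG ratio: {peg:.2f}")  # ~0.15, well below the classic 1.0 threshold
```

By this crude yardstick, the headline multiples look far less extreme, which is exactly the point: the question is whether the growth is durable and real.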
Are there any other red flags? Operating Cash Flow (OCF) was below Net Income (NI) in 2021 and 2023, and only slightly above NI in 2022. One would want to decipher what is happening with Stock-Based Compensation, Other Operating Activities, and Accounts Receivable.
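A simple screen for this kind of red flag is the accrual gap, NI minus OCF: persistently positive gaps mean reported profits are not converting into cash, which is when stock-based comp and receivables deserve a closer look. A minimal sketch, with illustrative placeholder numbers rather than Nvidia's actual filings:

```python
# Accrual-gap screen: NI minus OCF per fiscal year.
# A positive gap means net income outran operating cash flow.
# The figures below are illustrative placeholders, NOT Nvidia's numbers.

ni_ocf = {
    2021: (10.0, 9.1),  # (net income, operating cash flow), $B, hypothetical
    2022: (9.7, 9.8),
    2023: (4.4, 4.0),
}

for year, (ni, ocf) in sorted(ni_ocf.items()):
    gap = ni - ocf
    flag = "check accruals" if gap > 0 else "ok"
    print(f"{year}: NI - OCF = {gap:+.1f}B -> {flag}")
```

Running this against the real cash flow statements is left to the reader; the point is that two out of three years flagging positive is the pattern worth investigating.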
There are lots of theories going around on X, some of which boil down to accusing Nvidia of fraud. Personally, I haven’t been able to catch anything in the SEC filings. However, I can’t help but recall two tricks that were used to dupe investors during the dot-com bubble:
Funds flowed back and forth between companies and their customers: essentially, round-tripping to boost revenues.
Some companies were playing calendar games of a different kind, stretching out the end date of a reporting period by a few days to record higher revenues.
Is this going on now? Back then, some ill-timed press releases about secured orders, published around reporting dates, raised eyebrows. Such schemes, however, are very hard to prove without access to internal documents and the cross-examination of witnesses.
This article, however, is not about finding fraud. Rather, it is about the fundamental reasons why time is running out not only for Nvidia but also for AMD. And what will be the primary cause? Open Source. Wait, what? Bear with me; this will be a fun exercise.
Current AI GPU Market Dynamics
Nvidia has been through a lot of ups and downs throughout its existence. Facing an existential crisis almost a decade ago, Nvidia made a huge bet on building out GPUs for High-Performance Computing and AI. Nvidia not only designed chips to meet this need when nobody was even talking about it, but it also built a whole ecosystem that enables developers to build on top of Nvidia’s GPUs. Many of you might have heard of CUDA:
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.
Nvidia
CUDA is what keeps competitors at bay. Not only is CUDA a world-class offering, but it is also built to tie customers to Nvidia chips for the long run, as it doesn’t run on competitors’ chips. Once a customer starts using Nvidia’s GPUs for AI, it becomes extremely cost-prohibitive to move to a competitor’s offering.
Nvidia (NVDA) spent a decade building out CUDA and has built it so well that developers and companies prefer Nvidia chips even when cheaper or similar-performing alternatives are available. A case in point is AMD.
AMD has built its own version of CUDA (ROCm, with its HIP compatibility layer) that works with AMD’s GPUs and puts a wrapper around existing CUDA code so it can run on AMD chips. However, there is a big problem with AMD’s software: it is buggy and crashes. One of the most confounding things about AMD’s offering is that most of the crashes are caused by issues with its own drivers. It seems AMD simply doesn’t have the requisite software expertise.
Hence, customers are stuck with Nvidia, with no viable alternative available, and pay whatever Nvidia charges. This explains Nvidia’s extremely high margins. However, there are two issues:
People, especially enterprises, are loath to tie themselves to a single vendor or proprietary software.
With those gross margins and operating profit margins, competitors would want a piece of the pie.
Enter Open Source. If somebody can provide a CUDA-like solution based on industry-agreed standards that supports GPUs from all vendors, Nvidia is essentially done for. Think of what Linux did to the Unix and Windows hegemony at the enterprise level.
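The lock-in-breaking mechanism can be illustrated with a toy sketch (an analogy, not real oneAPI or CUDA code): once application code targets a common interface instead of one vendor's API, swapping hardware vendors becomes a one-line change rather than a rewrite.

```python
# Toy analogy for a standards-based abstraction layer.
# The backend classes and method names here are invented for illustration;
# they do not correspond to any real oneAPI or CUDA API.

class NvidiaBackend:
    def run(self, kernel: str) -> str:
        return f"nvidia executed {kernel}"

class IntelBackend:
    def run(self, kernel: str) -> str:
        return f"intel executed {kernel}"

def run_workload(backend) -> str:
    # Application code is written once against the common interface
    # and never mentions a specific vendor.
    return backend.run("matmul")

print(run_workload(NvidiaBackend()))  # runs on vendor A
print(run_workload(IntelBackend()))   # swap vendors without touching app code
```

This is the same dynamic that commoditized enterprise server operating systems: the standard layer, not the vendor underneath it, becomes the thing developers depend on.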
UXL Foundation
The UXL Foundation was founded by, drumroll, Intel under the Linux Foundation umbrella. The current “Steering Members” of the foundation are:
Intel
ARM
Fujitsu
Google Cloud
Imagination Technologies
Qualcomm
Samsung
VMware
In very simplistic terms, the UXL Foundation is building an open-source alternative to CUDA based on the oneAPI programming model that would support GPUs from Nvidia, AMD, Intel, and others.
oneAPI is an open, cross-industry, standards-based, unified, multiarchitecture, multi-vendor programming model that delivers a common developer experience across accelerator architectures – for faster application performance, more productivity, and greater innovation. The oneAPI initiative encourages collaboration on the oneAPI specification and compatible oneAPI implementations across the ecosystem.
UXL Foundation
Intel has made multiple mistakes over the last two decades, starting with its refusal to design and build processors for the iPhone. Ouch! That must sting to this day. However, it seems to be getting its act together with its GPU game. Two acquisitions that went largely unnoticed by Wall Street have enabled Intel to up its GPU/AI game:
Habana Labs (2019) - Machine Learning Technology
Codeplay Software (2022) - Leader in cross-architecture, open-standards-based developer technologies
Even though Intel’s GPUs are a generation behind Nvidia’s, the performance difference, as claimed by Intel, is minimal, and Intel plans to catch up by the end of 2025. Meanwhile, Intel’s GPUs and the oneAPI ecosystem offer two very important benefits:
One-third the total cost of ownership compared to Nvidia’s offerings.
Ability to seamlessly and quickly port CUDA-based solutions to the oneAPI standards.
The second point is huge. It nullifies Nvidia’s stranglehold on its customers, allowing them to choose whichever GPUs they desire. This changes the whole dynamics of the AI GPU space.
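If Intel's one-third-TCO claim holds, the switching math for a customer is straightforward: annual savings against a one-time porting cost. A back-of-the-envelope sketch, where the dollar figures are hypothetical placeholders rather than vendor quotes:

```python
# Break-even sketch: when does a cheaper platform plus a one-time
# porting cost beat the incumbent? All dollar figures are hypothetical.

nvidia_tco_per_year = 30.0                    # $M/year, placeholder
intel_tco_per_year = nvidia_tco_per_year / 3  # Intel's claimed one-third TCO
porting_cost = 5.0                            # $M one-time CUDA-to-oneAPI port, placeholder

annual_savings = nvidia_tco_per_year - intel_tco_per_year
breakeven_years = porting_cost / annual_savings
print(f"Break-even after {breakeven_years:.2f} years")  # 5 / 20 = 0.25 years
```

The smaller the porting cost (point 2 above), the shorter the break-even, which is precisely why easy portability, not raw price, is the real threat to Nvidia's pricing power.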
As for adoption:
Amazon Bedrock, Amazon’s AI offering under the AWS banner, is already offering Intel GPUs along with Nvidia’s and AMD’s.
Meanwhile, Google is part of the UXL Foundation, so one can assume it is on board; however, there has been no official communication from the company.
As for Microsoft, it has thrown its hat behind the much-maligned OpenAI, and not much is known about what it intends to do with these new Intel capabilities.
Also, these capabilities will help Intel showcase its new fabrication capabilities and enable it to take away market share from TSMC.
One can expect to see the benefits of these initiatives and efforts in Intel’s earnings in two to three quarters and a full-blown acceleration in 15 to 18 months.
Market Outperform - INTC, AMZN
Market Perform - AVGO, GOOGL, ARM
Market Underperform - MSFT, TSMC, NVDA, AMD
Disclosures: Here are our internal governance rules to ensure no conflict of interest. For companies we write about:
No existing position.
If we have an existing position, we exit and wait seven days before publishing.
Wait seven days after publishing before initiating any new position.