Nvidia Faces Challenges: Upstarts Seek to Capitalize on AI Chip Gap

Currently, Nvidia (NASDAQ: NVDA) holds a significant moat in the industry. However, it appears to have a weakness that researchers and well-funded upstarts are beginning to exploit. Nvidia initially designed its GPUs for graphics and other non-AI workloads. The compute power they delivered happened to translate well to deep learning and other AI tasks, placing Nvidia in a particularly strong position as demand for AI began to grow exponentially. Still, it's important to note that much of Nvidia's infrastructure was developed for compute tasks that were not designed specifically for AI.
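The overlap is easy to see at the level of primitives: both 3D graphics transforms and neural-network layers reduce to dense matrix multiplication, which is why graphics-oriented GPUs translated so well to deep learning. A toy NumPy sketch (purely illustrative; no Nvidia code or API involved):

```python
import numpy as np

# Graphics workload: rotate a batch of 2D points by 90 degrees.
rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])
points = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
rotated = points @ rotation.T  # (1,0) -> (0,1); (0,1) -> (-1,0)

# AI workload: a dense neural-network layer is the same primitive --
# a matrix multiply, plus a bias and a nonlinearity (ReLU here).
weights = np.array([[0.5, -0.2],
                    [0.1,  0.3]])
bias = np.array([0.1, 0.0])
activations = np.maximum(points @ weights.T + bias, 0.0)

# Both workloads are dominated by the same matmul kernel,
# which is exactly the operation GPU hardware accelerates.
```

Hardware built to run the first computation quickly runs the second one quickly too, which is the accident of history the article's argument turns on.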

That doesn't mean Nvidia is unaware of this; it absolutely is, and it will do everything in its power to build AI-first systems and infrastructure into future products. For example, Nvidia continues to develop its Tensor Cores, specialized processing units within its GPUs for handling AI workloads. Google (GOOG, GOOGL) also offers Tensor Processing Units: application-specific integrated circuits ('ASICs') designed around Google's own machine-learning framework and made available to third parties through cloud-based services rather than as hardware to deploy in their own data centers.

However, there is still a window here for newer companies to capture a portion of the market if they challenge Nvidia, Google, and other leading technology firms by developing chips designed for AI workloads from the ground up. The key competitive advantage of such AI-specific chips would be efficiency on AI workloads, a massive selling point as consumers begin to expect faster inference from AI systems. A company much smaller than Nvidia could plausibly execute this, but in my opinion it takes the right teams, with the right funding and ingenuity, to get these designs right and then have them adopted at scale.

The likes of Groq and other lower-profile upstarts are already doing this. To my mind, and from my research, Groq has the biggest leg up here and should offer a compelling rival to Nvidia in the AI market in the years to come. Its Tensor Streaming Processor architecture, unlike traditional GPUs and CPUs, takes a deterministic, software-defined approach, which reduces latency and increases efficiency. Additionally, its single-core design allows for high-speed data processing, making it attractive for AI computation. Groq's chips are built to outperform traditional GPUs in certain AI tasks, performing trillions of operations per second with predictable timing.

Other notable competitors to Nvidia include Cerebras and SambaNova. Cerebras offers a very powerful single-chip processor, while SambaNova offers an integrated hardware and software system powered by AI-specific chips. Cerebras's massive AI chip, the Wafer Scale Engine, is far larger than traditional chips, occupying almost an entire silicon wafer. It therefore has an unprecedented amount of processing power and stands out as a significant competitor. The latest version, the Wafer Scale Engine 2, contains 2.6 trillion transistors and 850,000 cores, making it the largest chip ever built. Keeping so much compute on one piece of silicon minimizes data movement, which allows AI tasks to run quickly and efficiently.

SambaNova's integrated hardware and software solution, the DataScale system, is powered by chips built on its Reconfigurable Dataflow Architecture. This allows for adaptable, scalable AI processing and is attractive because it offers flexibility to enterprises whose compute needs vary with the demands of their machine-learning workloads at any given time.

We should remember that Nvidia isn't going to be knocked off the top position in AI infrastructure, given its growing moat, but companies focused on this momentary gap in Nvidia's strategic focus, which Nvidia is now having to close, could do very well. I think the challenge will be whether those competitors can remain viable once Nvidia adapts to the technological shift and responds at scale. I believe the only option for smaller competitors is a stringent focus on quality: ingenuity and strength in design could outcompete Nvidia, possibly for a long time. While Nvidia might be the biggest, it may turn out that it isn't the best.

If Nvidia comes to be viewed as the company with the largest compute infrastructure but not the most efficient AI-specific chips, at least for a time, the stock's valuation may be too high at the moment. What Nvidia offers, and what I believe is its most significant competitive advantage, is a full-stack ecosystem for high-compute tasks, including AI, which it continues to develop. This is arguably what the market wants and is pricing in through the very high valuation. However, if Nvidia suddenly comes to be seen as the main provider of that ecosystem but second best for AI-specific workloads, I think its valuation could be in for a moderate correction.

In light of this risk, I think Nvidia shareholders would be wise to exercise some caution in the short to medium term. My own view is that Nvidia's valuation is not too high given the long-term dependency the world will have on its ecosystem, but in the short to medium term the stock could be seen as too optimistically priced, given the notable competitors emerging that should moderately, but quite successfully, disrupt the idea of Nvidia as the dominant provider of quality AI compute. However, Nvidia's ongoing innovation and AI integrations could mitigate these risks, especially given its funding power compared with smaller, newer companies.

I think Jensen Huang is an exceptional entrepreneur and executive. He has inspired many of my own thoughts and works, and he is well documented as saying that he actively looks for and assesses Nvidia's competition daily, because he recognizes that other companies are trying to take Nvidia's market-leading position. To me, this powerfully illustrates what the upstarts are up against, and I believe his ethos is the foundation of the large and growing moat Nvidia shareholders have become accustomed to.

Nvidia, by no exaggeration, is an exceptional company. As I touched on in my operations analysis above, I believe Nvidia's core strength is its full-stack ecosystem. In this area, I believe it will be essentially impossible for competitors to take meaningful market share from Nvidia. This is why I think Mr. Huang has done such a good job of consolidating and solidifying Nvidia's position as the most advanced (and best-financed) technology company developing AI tools, and why I believe Nvidia remains a fantastic long-term Buy.

I believe Nvidia's CUDA deserves a special mention here, as it is the platform that lets a full range of customers use Nvidia's GPUs for general-purpose processing. In my opinion, this was incredibly clever of Nvidia, as it harnesses the power of its hardware for multiple purposes through software integration, effectively democratizing Nvidia's hardware infrastructure. What Groq and other upstarts might be able to do is focus on the niche of AI-specific workloads and design hardware aimed exclusively at those tasks. That will almost certainly be faster than general-purpose hardware being shared across varying workloads, AI among them. However, the versatility of general-purpose GPUs offers broader applicability, which matters in environments that need multiple types of workloads handled concurrently.

Nvidia also faces competition from AMD (AMD), which has developed its own GPU-compute ecosystem, ROCm, though AMD's platform is notably less comprehensive and doesn't currently match the AI-specific feature set Nvidia offers. Instead, AMD's core competitive focus is providing high-performance computing from its chip designs at a competitive cost. AMD is still aggressively improving its GPU capabilities and the ROCm ecosystem to support AI and machine-learning workloads. If AMD focused specifically on AI chip development, as Groq is doing, it could build a much more significant competitive position against Nvidia. That said, AMD is deliberately balancing its AI focus within its total product portfolio, and, like Nvidia, AMD was not initially an AI company.

While Nvidia leads the field in AI compute infrastructure, a momentary opening has appeared for innovators to exploit the latency limitations of Nvidia's chip designs. In the near future, I can see Nvidia being viewed as the largest and best AI development ecosystem while finding itself, at least for a period, out-competed in chip design by smaller firms focused squarely on the quality of chips built specifically for AI. At the end of the day, Nvidia wasn't an AI company at first; it might take a company devoted to AI from the beginning to deliver the quality the consumer AI market is about to demand.