Elon Musk Dislikes Term ‘GPU,’ Says Tesla AI Infrastructure No Longer Training-Constrained

Elon Musk, the CEO of Tesla Inc. (TSLA), has expressed his dissatisfaction with the term “GPU” while announcing that the company’s core AI infrastructure is no longer training-constrained.

During the first-quarter earnings call on Tuesday, Musk said Tesla has been actively expanding its core AI infrastructure. "We are, at this point, no longer training-constrained, and so we're making rapid progress," he stated. Musk also disclosed that Tesla has installed and commissioned 35,000 Nvidia H100 GPUs, and he anticipates that figure could reach roughly 85,000 by the end of the year, primarily for training purposes.

“We are making sure that we’re being as efficient as possible in our training,” Musk said, adding that it is not just about the number of H100s but “how efficiently they’re used.”

During the conversation, Musk also expressed his discomfort with the term GPU. “I always feel like a wince when I say GPU because it’s not. GPU stand — G stands for graphics, and it doesn’t do graphics,” the tech mogul stated. “GPU is [the] wrong word,” he said, adding, “They need a new word.”

Musk's comments came after Tesla reported first-quarter revenue of $21.0 billion, down 9% year over year and missing the Street consensus estimate of $22.15 billion. The company attributed the decline to reduced average selling prices and lower vehicle deliveries during the quarter.

Meanwhile, Nvidia Corporation (NVDA) made a significant impact on the AI and computing sectors last year with its H100 data center chip, which added more than $1 trillion to the company's market value. In February this year, it was reported that demand for the H100, which is four times faster than its predecessor, the A100, at training large language models (LLMs) and 30 times faster at responding to user prompts, has been so substantial that customers face wait times of up to six months.

Earlier this month, Piper Sandler analyst Harsh V. Kumar, after engaging directly with Nvidia's management team, reported that demand for Nvidia's Hopper GPUs remains strong and continues to outstrip supply despite the architecture being on the market for almost two years. Customers are reportedly hesitant to shift their orders from Hopper to the newer Blackwell chips, fearing extended wait times due to anticipated supply limitations.