The world’s second-fastest supercomputer, Frontier, has achieved a groundbreaking feat: creating the most intricate computer simulation of the universe to date. This simulation, utilizing the Hardware/Hybrid Accelerated Cosmology Code (HACC), pushes the boundaries of cosmological hydrodynamics research, offering unprecedented insights into the universe’s evolution and the nature of dark matter and dark energy.
Dell Technologies has shipped the first-ever liquid-cooled PowerEdge XE9712 systems to CoreWeave. These supercomputers, built in partnership with NVIDIA, offer substantial AI processing power and are expected to significantly accelerate advancements in artificial intelligence.
Nvidia and SoftBank have announced a partnership to create Japan’s most powerful AI supercomputer, built on Nvidia’s advanced Blackwell platform. The project marks a major step in Japan’s ambition to become a global leader in AI, with potential benefits for telecommunications, transportation, robotics, and healthcare.
Elon Musk’s new AI company, xAI, has unveiled its groundbreaking supercomputer, Colossus, which boasts a mind-blowing 100,000 NVIDIA H100 AI GPUs and was built in a record-breaking 19 days. NVIDIA CEO Jensen Huang, impressed by the feat, dubbed Musk ‘superhuman’ for achieving what typically takes years.
NVIDIA CEO Jensen Huang discusses the company’s dominance in the AI market and predicts that sophisticated AI personal assistants will soon be commonplace. He reveals insights into Elon Musk’s xAI company and its rapid supercomputer build, showcasing NVIDIA’s key role in the AI revolution.
Atomic Canyon, a leader in generative AI for the nuclear industry, and Oak Ridge National Laboratory (ORNL) have collaborated to develop an advanced AI model trained on the Frontier supercomputer. This open-source model sets new standards for accuracy and speed in nuclear data search, making it a valuable tool for research, engineering, and deployment of nuclear energy.
Meta Platforms is reportedly building a massive AI supercomputer, powered by over 100,000 NVIDIA H100 AI GPUs, to train the next version of its Llama language model. This supercomputer, located in the US, is expected to be fully operational by October or November, representing a significant investment in artificial intelligence and cutting-edge technology.
Japan is set to build the world’s most powerful supercomputer, targeting one sextillion (10^21) calculations per second, roughly a thousand times the performance of today’s fastest exascale machines. The ambitious project, dubbed ‘Fugaku Next,’ aims to keep Japan at the forefront of artificial intelligence research and development, and is expected to be operational by 2030.
Elon Musk, CEO of Tesla and SpaceX, shared an update on the timeline for the company’s Dojo supercomputer during the All-In podcast. He revealed that Dojo 2 is expected to reach volume production by the end of 2025 and that the third iteration, slated for late 2026, will be the true test of its capabilities. This powerful supercomputer plays a vital role in Tesla’s Full Self-Driving (FSD) technology, training the neural networks that power autonomous vehicles.
AI startup SingularityNET is building a powerful supercomputer designed to host and train artificial general intelligence (AGI) models, aiming to create AI capable of matching or exceeding human cognition. This ambitious project leverages cutting-edge hardware and aims to usher in a new era of AI development.