Cerebras Systems and U.S. Department of Energy Sign MOU to Accelerate the Genesis Mission and U.S. National AI Initiative
SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, makers of the fastest AI infrastructure, today announced that it has signed a Memorandum of Understanding (MOU) with the U.S. Department of Energy (DOE) to explore further collaboration on next-generation AI and high-performance computing (HPC) technologies to accelerate AI+HPC for science and national security.
The MOU expresses Cerebras’ intent to support The White House’s Genesis Mission, a new national effort to use AI to transform how scientific research is conducted and accelerate the speed of scientific discovery.
“For years, Cerebras has worked with the Department of Energy and its national laboratories on forward-leaning topics,” said Andy Hock, Chief Strategy Officer, Cerebras. “We are proud to support the Genesis Mission and seek to work together with DOE and the National Laboratories to advance the next generation of AI-driven scientific discovery, accelerate R&D productivity, and strengthen America’s leadership in advanced computing.”
Advancing U.S. Leadership in AI and Data Center Infrastructure
The MOU establishes a framework for Cerebras and DOE to share information, explore joint research and development, and pursue future agreements that will accelerate the development and deployment of secure, scalable, and energy-efficient AI infrastructure.
Under the MOU, Cerebras and DOE will explore strategic collaborations to advance:
- Development and use of large-scale data sets for science and engineering
- Advanced computing technology and hardware R&D for AI and converged AI+HPC, e.g., new computing architectures; new power, packaging, and cooling technologies; and new memory and I/O technologies
- Development of AI and AI+HPC software and programming models; joint developer community engagement
- Cooperative public engagement in research, education, science, policy, AI, and other areas of mutual interest
Cerebras is also exploring additional opportunities to work with DOE and the National Laboratories to develop world-leading Cerebras wafer-scale AI supercomputers, to deploy these systems as AI and HPC accelerators for science and security, and to develop advanced AI models and converged AI+HPC workflows on Cerebras systems, including novel AI “co-scientist” capabilities.
The agreement with DOE may also encompass pilot programs, joint R&D efforts, technical exchanges, and additional future agreements to strengthen domestic AI capabilities and accelerate U.S. R&D productivity.
A Longstanding Partnership Driving Scientific Breakthroughs
Over the past decade, Cerebras has worked with DOE to build world-leading capabilities across AI, HPC, and converged AI+HPC research. DOE laboratories were among the first adopters of the Cerebras Wafer-Scale Engine—the world’s largest and most powerful commercially available AI processor—and have since deployed Cerebras AI supercomputers to accelerate mission-critical science.
Cerebras’s capabilities have enabled major technological advances, including:
- Breakthrough AI models for genomics, clean energy, and other scientific domains
- World-class work accelerating scientific discovery, including three consecutive years of Gordon Bell Prize Finalist-recognized collaborations and the 2022 Gordon Bell Special Prize, won with Argonne National Laboratory for using AI models to better understand COVID-19 genomic dynamics
- Faster-than-exascale performance on key DOE mission workloads, such as molecular dynamics with Sandia, Lawrence Livermore, and Los Alamos National Laboratories and computational fluid dynamics with NETL
- State-of-the-art AI inference and converged AI+HPC workflows, enabling scientific modeling at speeds unattainable on traditional general-purpose processors
- Revolutionary hardware and memory co-design, through DOE’s Advanced Memory Technology (AMT) program with Sandia National Laboratories and the NNSA Tri-Labs, including new memory systems that could increase wafer-scale system capacity for scientific simulations and large-scale AI workloads by 100x
For more information about Cerebras and its DOE partnership, please visit https://www.cerebras.ai/customer-spotlights/national-laboratories.
About Cerebras Systems
Cerebras Systems builds the fastest AI infrastructure in the world. We are a team of pioneering computer architects, computer scientists, AI researchers, and engineers of all types. We have come together to make AI blisteringly fast through innovation and invention because we believe that when AI is fast, it will change the world. Our flagship technology, the Wafer Scale Engine 3 (WSE-3), is the world’s largest and fastest AI processor. 56 times larger than the largest GPU, the WSE uses a fraction of the power per unit of compute while delivering inference and training more than 20 times faster than the competition. Leading corporations, research institutes, and governments on four continents choose Cerebras to run their AI workloads. Cerebras solutions are available on-premises and in the cloud. For further information, visit cerebras.ai or follow us on LinkedIn, X, and/or Threads.
Contacts
Media Contact
PR@zmcommunications.com
