NCSA Deploys Cerebras CS-2 in New HOLL-I Supercomputer for Large-Scale Artificial Intelligence

SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, today announced that the National Center for Supercomputing Applications (NCSA) has deployed the Cerebras CS-2 system in its HOLL-I supercomputer.

“We’re thrilled to have the Cerebras CS-2 system up and running in our Center,” said Dr. Volodymyr Kindratenko, Director of the Center for Artificial Intelligence Innovation at NCSA. “This system is unique in the AI computing space in that we will have multiple clusters at NCSA that address the various levels of AI and machine learning needs -- Delta and HAL, our NVIDIA DGX, and now HOLL-I, consisting of the CS-2, as the crown jewel of our capabilities. Each system is at the correct scale for the various types of usage, and all have access to our shared center-wide TAIGA filesystem, eliminating delays and slowdowns caused by data migration as users move up the ladder of more intense machine learning computation.”

The Cerebras CS-2 is the world’s fastest AI system. It is powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2). The Cerebras WSE-2 delivers more AI-optimized compute cores, more fast memory, and more fabric bandwidth than any other deep learning processor in existence. Purpose-built for AI work, the CS-2 lets machine learning practitioners write their models in the open-source TensorFlow or PyTorch frameworks and run them without modification. With the CS-2 and the Cerebras Software Platform (CSoft), practitioners can seamlessly scale up from small models like BERT to the largest models in existence, such as GPT-3.
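As an illustration of that workflow, the sketch below defines a small BERT-style encoder in plain, unmodified PyTorch. The model name, layer sizes, and the CPU shape check are illustrative assumptions only; the Cerebras-specific compilation and launch steps handled by CSoft are not shown here.

```python
# Minimal sketch: a BERT-style encoder written in standard, unmodified PyTorch.
# Nothing Cerebras-specific appears in the model code; the Cerebras toolchain
# (not shown) would compile and train the same module on the CS-2.
import torch
import torch.nn as nn

class TinyBertEncoder(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, layers=4, heads=4, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.pos_emb = nn.Embedding(max_len, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, dim_feedforward=4 * hidden, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.mlm_head = nn.Linear(hidden, vocab_size)  # masked-LM prediction head

    def forward(self, input_ids):
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        x = self.tok_emb(input_ids) + self.pos_emb(positions)
        return self.mlm_head(self.encoder(x))

# Quick shape check on CPU; the identical model definition is what a
# practitioner would hand to the Cerebras software stack.
model = TinyBertEncoder()
logits = model(torch.randint(0, 30522, (2, 128)))
print(logits.shape)  # torch.Size([2, 128, 30522])
```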

“We founded Cerebras Systems with the audacious goal to forever change the AI compute landscape,” said Andrew Feldman, CEO and Co-Founder, Cerebras Systems. “Not only are we seeking to accelerate AI workloads by orders of magnitude over what is possible on legacy hardware, but we also want to put this extraordinary capability in the hands of academics and researchers. Partnering with NCSA ensures that academics and researchers will have access to the world’s fastest solution for AI and HPC.”

Large models have demonstrated state-of-the-art accuracy on many language processing and understanding tasks. Training these large models on GPUs is challenging and time-consuming. Training from scratch on new datasets often takes weeks and tens of megawatts of power on large clusters of legacy equipment. Moreover, as the size of the cluster grows, power, cost, and complexity grow exponentially. Programming clusters of graphics processing units requires rare skills, different machine learning frameworks, and specialized tools that require weeks of engineering time for each iteration.

The CS-2 was built to directly address these challenges: setting up even the largest model takes only a few minutes, and the CS-2 is faster than clusters of hundreds of graphics processing units. With less time spent on setup, configuration, and training, the CS-2 enables users to explore more ideas in less time.

With customers in North America, Asia, Europe and the Middle East, Cerebras is delivering industry-leading AI solutions to a growing roster of customers in the enterprise, government, and high performance computing segments including GlaxoSmithKline, AstraZeneca, TotalEnergies, nference, Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, Edinburgh Parallel Computing Centre (EPCC), and Tokyo Electron Devices.

For more information about Cerebras for scientific computing, please visit https://cerebras.net/industries/scientific-computing/.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer system, designed for the singular purpose of accelerating AI and changing the future of AI work forever. Our flagship product, the CS-2 system, is powered by the world’s largest processor – the 850,000-core Cerebras WSE-2 – and enables customers to accelerate their deep learning work by orders of magnitude over graphics processing units.

Contacts

Media Contact:
Kim Ziesemer
pr@zmcommunications.com

Cerebras Systems

