Hugging Face Partners with Cerebras to Give Developers Access to Industry’s Fastest AI Inference for Open-Source Models

SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras and Hugging Face today announced a new partnership to bring Cerebras Inference to the Hugging Face platform. Hugging Face has integrated Cerebras into the Hugging Face Hub, bringing the world’s fastest inference to over five million developers on Hugging Face. Cerebras Inference runs the industry’s most popular models at more than 2,000 tokens/s – 70x faster than leading GPU solutions. Cerebras Inference models, including Llama 3.3 70B, will be available to Hugging Face developers, enabling seamless API access to AI models powered by the Cerebras CS-3.

Cerebras recently announced industry-leading speeds for Llama 3.3 70B, achieving over 2,200 tokens per second – 70 times faster than GPU-based solutions. Leading industry models like OpenAI o3-mini take minutes to generate reasoning answers – Cerebras Inference completes the same tasks at comparable accuracy in mere seconds.

“We’re excited to partner with Hugging Face to bring our industry-leading inference speeds to the global developer community,” said Andrew Feldman, CEO, Cerebras. “By making Cerebras Inference available through Hugging Face, we’re empowering developers to work faster and more efficiently with open-source AI models, unleashing the potential for even greater innovation across industries.”

For the 5 million Hugging Face developers already using the Inference API, this new integration makes it easier than ever to switch to a faster provider for these popular open-source models. Developers can simply select “Cerebras” as their Inference Provider of choice in the Hugging Face platform.

Why Fast and Accurate Open-Source AI Inference Matters

Fast and accurate AI inference is essential for a growing range of applications, particularly as test-time compute and agentic AI drive demand for higher token counts during inference. Because these models are open source, Cerebras can optimize them for the CS-3, delivering developers inference that is 10 to 70 times faster than GPU-based solutions.

“Cerebras has been a leader in inference speed and performance, and we’re thrilled to partner to bring this industry-leading inference on open-source models to our developer community,” said Julien Chaumond, CTO of Hugging Face.

Get Started Today

To try it out, visit any of the Hugging Face model cards already supported by Cerebras Cloud. For instance, you can explore Llama 3.3 70B, select Cerebras as your provider, and experience blazing-fast inference directly via Hugging Face.
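For developers who prefer to call the integration programmatically, the sketch below shows the general shape of a request. It is a minimal, hedged example: the router URL (`router.huggingface.co`), the `:cerebras` provider suffix on the model id, and the `HF_TOKEN` environment variable are assumptions based on Hugging Face’s Inference Providers conventions and may differ from the current API; consult the Hugging Face documentation for the authoritative client code.

```python
import json
import os
import urllib.request

# Assumed endpoint for Hugging Face's OpenAI-compatible Inference Providers
# router; verify against the official Hugging Face docs before use.
API_URL = "https://router.huggingface.co/v1/chat/completions"


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload for Llama 3.3 70B.

    The ":cerebras" suffix (an assumption here) is one documented way to
    route a request to a specific inference provider.
    """
    return {
        "model": "meta-llama/Llama-3.3-70B-Instruct:cerebras",
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str, token: str) -> str:
    """POST the chat request and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires a Hugging Face access token; skipped if none is set.
    token = os.environ.get("HF_TOKEN")
    if token:
        print(ask("In one sentence, what is wafer-scale computing?", token))
```

The payload follows the OpenAI chat-completions format that Hugging Face’s Inference Providers expose, so switching providers amounts to changing the provider selection rather than rewriting application code.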

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premise. For further information, visit cerebras.ai or follow us on LinkedIn or X.

Contacts

Cerebras


