Cerebras Partners with Hugging Face, DataRobot, and Docker to Bring World’s Fastest Inference to AI Developers and Agents

New Integrations Power the Next Generation of Intelligent, Real-Time Agentic AI Applications

PARIS--(BUSINESS WIRE)--Today at the RAISE Summit in Paris, France, Cerebras Systems announced new partnerships and integrations with Hugging Face, DataRobot, and Docker. These collaborations dramatically increase the accessibility and impact of Cerebras’ ultra-fast AI inference, enabling a new generation of performant, interactive, and intelligent agentic AI applications.

Hugging Face + Cerebras: Fast, Interactive Agentic Apps on Hugging Face

Hugging Face’s popular SmolAgents library allows developers to create intelligent agents that can reason, use tools, and run code — all in just a few lines of Python. Now powered by Cerebras inference and deployed with Gradio on Hugging Face Spaces, SmolAgents can deliver near-instant responses with dramatically improved interactivity.
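As a rough illustration of what this integration might look like in practice, the sketch below wires a SmolAgents CodeAgent to Cerebras inference through an OpenAI-compatible endpoint. The endpoint URL, model name, and use of the library’s OpenAIServerModel wrapper are assumptions for illustration, not details from this announcement.

```python
# Minimal sketch: a SmolAgents agent backed by Cerebras inference.
# Assumptions (not from this announcement): the OpenAI-compatible
# endpoint URL, the example model name, and the OpenAIServerModel
# wrapper from the smolagents library.
import os

from smolagents import CodeAgent, OpenAIServerModel

# Point an OpenAI-compatible model client at Cerebras inference.
model = OpenAIServerModel(
    model_id="llama-3.3-70b",                # example model name (assumption)
    api_base="https://api.cerebras.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],
)

# A CodeAgent reasons, writes, and runs Python to answer the query.
agent = CodeAgent(tools=[], model=model)

print(agent.run("Summarize the allocation of a 60/40 stock/bond portfolio."))
```

A Gradio front end hosted on Hugging Face Spaces could then expose such an agent interactively, which is the pattern the demo described below follows.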

To showcase this integration, Hugging Face and Cerebras debuted a demo financial analysis agent that evaluates portfolios and generates insights — all in real time.

“With Cerebras, SmolAgents become not just smart, but lightning fast,” said Julien Chaumond, CTO and Co-founder of Hugging Face. “We’re excited to see developers build real-world agents that can actually keep up with their ideas.”

DataRobot + Cerebras: Introducing syftr Framework for Pareto-Optimal Agents

DataRobot, a leading enterprise-grade AI platform, recently launched syftr, its open-source AI/ML framework that automates agentic workflows. The framework is now integrated with Cerebras inference, pairing its automated workflow optimization with Cerebras’ inference performance.

“Syftr was built to optimize agentic AI for real-world scenarios and data, and now -- integrated with Cerebras’ industry-leading AI inference performance -- delivers an unmatched toolchain for production-grade agentic apps,” said Venky Veeraraghavan, Chief Product Officer at DataRobot. “We’re thrilled to give our customers the fastest, easiest way to deliver business value from agentic AI, including the ability to build optimal RAG applications with minimal manual effort.”

Docker + Cerebras: Deploying Agentic Apps Made Dead Simple for 20M+ Developers

Cerebras and Docker are bringing blazing-fast inference to the most developer-friendly container ecosystem on the planet.

Now, with Docker Compose and Cerebras, developers can spin up powerful, multi-agent AI stacks in seconds. Just define your models, agents, and Cerebras APIs—then launch your full agentic environment with a single command. No rewrites. No config gymnastics. Just fast, clean deployment.
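As a rough sketch of what such a setup could look like, the Compose file below defines a single agent service that reads its Cerebras credentials from the environment. The image name, environment variable names, and service layout are illustrative assumptions, not details from this announcement.

```yaml
# Hypothetical compose.yaml sketch: one agent service wired to Cerebras
# inference via environment variables. Image name, variable names, and
# port mapping are illustrative assumptions.
services:
  finance-agent:
    image: example/finance-agent:latest      # hypothetical agent image
    environment:
      CEREBRAS_API_KEY: ${CEREBRAS_API_KEY}  # injected from the host shell
      CEREBRAS_BASE_URL: https://api.cerebras.ai/v1
    ports:
      - "7860:7860"                          # e.g. a Gradio UI
```

With a file like this in place, `docker compose up` brings the whole stack online with a single command.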

“Developers want to move fast without getting tangled in infrastructure,” said Nikhil Kaul, VP of Product Marketing at Docker. “With Docker Compose now supporting agentic apps, they can build sophisticated AI systems locally and scale them into production with the exact same workflow. It’s about giving every developer the superpower to experiment and deploy, without the headaches of translating dev setups into production reality.”

For more information about these new partnerships, please visit www.cerebras.ai.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building a new class of AI supercomputer from the ground up. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered to build the largest AI supercomputers in the world, and they make placing models on those supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai or follow us on LinkedIn, X, and Threads.
