
Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium

World’s Leading AI Inference Selected by Innovation Zone Attendees at TSMC’s North America Technology Symposium

SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, makers of the fastest AI infrastructure, today announced that Cerebras AI Inference has been named Demo of the Year at the 2025 TSMC North America Technology Symposium. Voted on by attendees from TSMC’s customers and partners, the award recognizes the most compelling and impactful innovation demonstrated in the Innovation Zone at TSMC’s annual Technology Symposium.


“Wafer-scale computing was considered impossible for fifty years, and together with TSMC we proved it could be done,” said Dhiraj Mallick, COO, Cerebras Systems. “Since that initial milestone, we’ve built an entire technology platform to run today’s most important AI workloads more than 20x faster than GPUs, transforming a semiconductor breakthrough into a product breakthrough used around the world.”

“At TSMC, we support customers of all sizes—from pioneering startups to established industry leaders—with industry-leading semiconductor manufacturing technologies and capacities, helping turn their transformative ideas into realities,” said Lucas Tsai, Vice President of Business Management, TSMC North America. “We are glad to work with industry innovators like Cerebras to enable their semiconductor success and drive advancements in AI.”

In 2019, Cerebras introduced the industry’s first functional wafer-scale processor—a single-die chip 50 times larger than conventional processors—breaking a half-century of semiconductor assumptions through its partnership with TSMC. The Cerebras CS-3 extends this lineage and continues a scaling law unique to Cerebras.

A Showcase of Innovation and Partnership

Cerebras demonstrated CS-3 inference in the TSMC North America Technology Symposium’s Innovation Zone, a curated exhibition area highlighting breakthrough technologies from across TSMC’s emerging customers. Cerebras AI Inference received the highest number of votes at the North America event, reflecting both the technical achievement and the excitement it generated among attendees.

Cerebras AI Inference Leading the Industry

Cerebras AI Inference is now used across the world’s most demanding environments. It is available through AWS, IBM, Hugging Face, and other cloud platforms. It supports cutting-edge national scientific research at U.S. Department of Energy laboratories and the Department of Defense, and global enterprises across healthcare, biotech, finance, and design have adopted Cerebras to accelerate their most complex AI workloads with real-time performance that GPUs cannot deliver.

Cerebras is also the fastest platform for AI coding—one of the fastest-growing and most strategic AI verticals. It generates code more than 20 times faster than competing solutions.

Cerebras has been a pioneer in supporting open-source models from OpenAI, Meta, G42 and others, consistently achieving the fastest inference speeds as verified by independent benchmarking firm Artificial Analysis.

Cerebras now serves trillions of tokens per month across the Cerebras Cloud, on-premises deployments, and leading partner platforms.

For more information on Cerebras AI Inference, please visit www.cerebras.ai.

About Cerebras Systems

Cerebras Systems builds the fastest AI infrastructure in the world. We are a team of pioneering computer architects, computer scientists, AI researchers, and engineers of all types. We have come together to make AI blisteringly fast through innovation and invention, because we believe that when AI is fast it will change the world. Our flagship technology, the Wafer Scale Engine 3 (WSE-3), is the world’s largest and fastest AI processor. At 56 times the size of the largest GPU, the WSE uses a fraction of the power per unit of compute while delivering inference and training more than 20 times faster than the competition. Leading corporations, research institutes, and governments on four continents have chosen Cerebras to run their AI workloads. Cerebras solutions are available on premises and in the cloud. For further information, visit cerebras.ai or follow us on LinkedIn, X, and Threads.

Contacts

Cerebras Systems

