
Cerebras Launches Qwen3-235B: World's Fastest Frontier AI Model with Full 131K Context Support

World's fastest frontier AI reasoning model now available on Cerebras Inference Cloud

Delivers production-grade code generation at 30x the speed and 1/10th the cost of closed-source alternatives

PARIS--(BUSINESS WIRE)--Cerebras Systems today announced the launch of Qwen3-235B with full 131K context support on its inference cloud platform. This milestone represents a breakthrough in AI model performance, combining frontier-level intelligence with unprecedented speed at one-tenth the cost of closed-source models, fundamentally transforming enterprise AI deployment.


Frontier Intelligence on Cerebras

Alibaba’s Qwen3-235B delivers model intelligence that rivals frontier models such as Claude 4 Sonnet, Gemini 2.5 Flash, and DeepSeek R1 across a range of science, coding, and general-knowledge benchmarks, according to independent tests by Artificial Analysis.

Qwen3-235B uses an efficient mixture-of-experts architecture that delivers exceptional compute efficiency, enabling Cerebras to offer the model at $0.60 per million input tokens and $1.20 per million output tokens—less than one-tenth the cost of comparable closed-source models.
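The quoted rates make per-request costs easy to estimate. The sketch below uses the $0.60 and $1.20 per-million-token prices from this release; the workload sizes in the example are illustrative assumptions, not measurements.

```python
# Cost estimate at the quoted Cerebras rates for Qwen3-235B:
# $0.60 per million input tokens, $1.20 per million output tokens.

INPUT_PRICE_PER_M = 0.60   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example (assumed sizes): a full-context coding request,
# 131K tokens of prompt/codebase in, 8K tokens of code out.
cost = request_cost(131_000, 8_000)
print(f"${cost:.4f}")  # 0.0786 + 0.0096 = $0.0882
```

At these rates, even a request that fills the entire 131K context costs under a dime.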

Cut Reasoning Time from Minutes to Seconds

Reasoning models are notoriously slow, often taking minutes to answer a simple question. By leveraging the Wafer Scale Engine, Cerebras accelerates Qwen3-235B to an unprecedented 1,500 tokens per second, cutting response times from 1–2 minutes to 0.6 seconds and making coding, reasoning, and deep-RAG workflows nearly instantaneous.
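The latency figures above follow directly from decode rate. In this back-of-the-envelope check, the 1,500 tokens/s rate is the one quoted in this release; the answer length and the slower baseline rate are illustrative assumptions chosen to match the "0.6 seconds" and "1–2 minutes" figures.

```python
# Latency as a function of decode throughput.

def generation_time(tokens: int, tokens_per_second: float) -> float:
    """Seconds to generate `tokens` at a given decode rate."""
    return tokens / tokens_per_second

answer_tokens = 900  # assumed length of a typical reasoning answer

print(generation_time(answer_tokens, 1500))  # 0.6 s at the quoted Cerebras rate
print(generation_time(answer_tokens, 10))    # 90 s at an assumed ~10 tok/s baseline
```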

Based on Artificial Analysis measurements, Cerebras is the only company globally offering a frontier AI model capable of generating output at over 1,000 tokens per second, setting a new standard for real-time AI performance.

131K Context Enables Production-grade Code Generation

Concurrent with this launch, Cerebras has quadrupled its context length support from 32K to 131K tokens—the maximum supported by Qwen3-235B. This expansion directly impacts the model's ability to reason over large codebases and complex documents. While 32K context is sufficient for simple code generation use cases, 131K context allows the model to process dozens of files and tens of thousands of lines of code simultaneously, enabling production-grade application development.
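The difference between 32K and 131K context can be sketched as a token budget. The ~3 tokens-per-line heuristic for source code and the reply reserve below are assumptions, not figures from this release.

```python
# Rough token-budget sketch for the 32K vs. 131K context figures.

TOKENS_PER_LINE = 3  # assumed average for source code

def max_lines(context_tokens: int, reserve: int = 8_000) -> int:
    """Lines of code that fit in context, reserving `reserve` tokens
    for the model's generated reply."""
    return (context_tokens - reserve) // TOKENS_PER_LINE

print(max_lines(32_000))   # 8000 lines: a handful of files
print(max_lines(131_000))  # 41000 lines: tens of thousands of lines of code
```

Under these assumptions, 131K context holds roughly five times the code of 32K, which is what moves the model from single-file snippets to whole-codebase reasoning.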

This enhanced context length means Cerebras now directly addresses the enterprise code generation market, which is one of the largest and fastest-growing segments for generative AI.

Strategic Partnership with Cline

To showcase these new capabilities, Cerebras has partnered with Cline, the leading agentic coding assistant for Microsoft VS Code, with over 1.8 million installations. Cline users can now access Cerebras Qwen models directly within the editor—starting with Qwen3-32B at 64K context on the free tier. This rollout will expand to include Qwen3-235B with 131K context, delivering 10–20x faster code generation speeds compared to alternatives like DeepSeek R1.

"With Cerebras' inference, developers using Cline are getting a glimpse of the future, as Cline reasons through problems, reads codebases, and writes code in near real-time. Everything happens so fast that developers stay in flow, iterating at the speed of thought. This kind of fast inference isn't just nice to have -- it shows us what's possible when AI truly keeps pace with developers,” said Saoud Rizwan, CEO of Cline.

Frontier Intelligence at 30x the Speed and 1/10th the Cost

With today's launch, Cerebras has significantly expanded its inference offering, giving developers who want an open alternative to OpenAI and Anthropic comparable model intelligence and code generation capabilities. Moreover, Cerebras delivers something that no other AI provider in the world, closed or open, can: instant reasoning at over 1,500 tokens per second, increasing developer productivity by an order of magnitude versus GPU-based solutions. All of this comes at one-tenth the token cost of leading closed-source models.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai or follow us on LinkedIn, X, and Threads.
