
SambaNova Announces That Fugaku-LLM Is Now a Part of Samba-1

HAMBURG, Germany--(BUSINESS WIRE)--ISC24--SambaNova Systems, makers of the only purpose-built, full-stack AI platform, today announced that “Fugaku-LLM”, a Japanese Large Language Model trained on Japan's fastest supercomputer, “Fugaku”, and published on Hugging Face on May 10, has been introduced into SambaNova's industry-leading Samba-1 Composition of Experts (CoE) technology.

Matsuoka Satoshi, Director of the RIKEN Center for Computational Science, said, “We are very pleased that Fugaku-LLM, the Japanese Large Language Model trained at large scale from scratch on the supercomputer ‘Fugaku’, has been introduced into SambaNova's Samba-1 CoE, making the achievements of Fugaku available to many people. The flexibility and scalability of SambaNova's CoE are highly promising as a platform for hosting the results of Large Language Models trained by the world's supercomputers.”

“Samba-1 employs a best-of-breed strategy from open source, which ensures that we always have access to the world's best and fastest AI models,” said Rodrigo Liang, Co-Founder and CEO of SambaNova Systems. “The addition of Fugaku-LLM, a Japanese LLM trained on Japan's renowned supercomputer, ‘Fugaku’, fits into this strategy. We are delighted to incorporate Fugaku's capabilities into this world-leading model.”

SambaNova's unique CoE architecture aggregates multiple expert models and improves performance and accuracy by selecting the best expert for each application. Fugaku-LLM is implemented on the CoE architecture and runs optimally on SambaNova's SN40L chip with its three-tier memory and Dataflow architecture.
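The routing idea behind a Composition of Experts can be sketched in a few lines. This is a minimal, hypothetical illustration only: the class names, the domain-detection heuristic, and the fallback expert are assumptions for the example, not SambaNova's actual API or routing logic.

```python
# Minimal sketch of Composition-of-Experts routing: each prompt is sent to
# the single expert model best suited for it. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Expert:
    name: str
    domains: set                      # domains this expert model covers
    generate: Callable[[str], str]    # stand-in for model inference

def route(prompt: str, experts: Dict[str, Expert],
          detect_domain: Callable[[str], str]) -> Expert:
    """Pick the first expert whose declared domains cover the prompt."""
    domain = detect_domain(prompt)
    for expert in experts.values():
        if domain in expert.domains:
            return expert
    return experts["general"]         # fall back to a general-purpose model

# Usage: a Japanese-language prompt is routed to the Japanese expert.
experts = {
    "fugaku-llm": Expert("fugaku-llm", {"japanese"}, lambda p: "<Japanese answer>"),
    "general":    Expert("general", {"english"}, lambda p: "<English answer>"),
}
detect = lambda p: "japanese" if any("\u3040" <= ch <= "\u30ff" for ch in p) else "english"
chosen = route("こんにちは、富岳について教えてください", experts, detect)
print(chosen.name)
```

In this toy version only one expert runs per prompt, which is what lets a CoE system improve accuracy per domain without paying the cost of invoking every model.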

Fugaku-LLM on Samba-1 is being demonstrated at the SambaNova booth #A11, Hall H at ISC24.

About SambaNova Systems

Customers turn to SambaNova to quickly deploy state-of-the-art generative AI capabilities within the enterprise. Our purpose-built enterprise-scale AI platform is the technology backbone for the next generation of AI computing.

Headquartered in Palo Alto, California, SambaNova Systems was founded in 2017 by industry luminaries, and hardware and software design experts from Sun/Oracle and Stanford University. Investors include SoftBank Vision Fund 2, funds and accounts managed by BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, Celesta, and several others. Visit us at sambanova.ai or contact us at info@sambanova.ai. Follow SambaNova Systems on LinkedIn and on X.

Contacts

Virginia Jamieson
650-279-8619
virginia.jamieson@sambanova.ai

SambaNova Systems

