d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI

New Jayhawk platform capitalizes on innovative, energy-efficient chiplet interconnects to improve performance and reduce data center energy consumption

SANTA CLARA, Calif.--(BUSINESS WIRE)--Today, d-Matrix, a leader in high-efficiency AI compute and inference processors, announced Jayhawk, the industry’s first Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon extends d-Matrix’s scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use these inference compute platforms to run Generative AI and Large Language Model transformer applications with a 10-20X improvement in performance.

Large transformer models are creating new demands for AI inference at the same time that memory and energy requirements are hitting physical limits. d-Matrix provides one of the first Digital In-Memory Compute (DIMC) based inference compute platforms to come to market, transforming the economics of complex transformers and Generative AI with a scalable platform built to handle the immense data and power requirements of AI inference. Improving performance can make energy-hungry data centers more efficient while reducing latency for end users of AI applications.

“With the announcement of our second-generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress toward building the world’s first in-memory computing platform with a chiplet-based architecture targeted at the power-hungry and latency-sensitive demands of generative AI.”

d-Matrix’s novel compute platform uses an ingenious combination of an in-memory compute-based IC architecture, sophisticated tools that integrate with leading artificial neural network (ANN) models, and chiplets in a block grid formation to support scalability and efficiency for demanding ML workloads. By using a modular chiplet-based approach, data center customers can refresh compute platforms on a much faster cadence using a pre-validated chiplet architecture. To enable this, d-Matrix plans to build chiplets with both BoW- and UCIe-based interconnects, enabling a truly heterogeneous computing platform that can accommodate third-party chiplets.

"d-Matrix has moved quickly to seize the chiplet opportunity, which should give them a first-mover advantage,” said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. “Anyone looking to add an AI accelerator to their SoC design would do well to investigate this new approach for efficient AI.”

The Jayhawk chiplet platform features:

  • 3 mm, 15 mm, and 25 mm trace lengths on organic substrate
  • 16 Gbps/wire high bandwidth throughput
  • 6-nm TSMC process technology
  • <0.5 pJ/bit energy efficiency
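
As a rough, back-of-the-envelope illustration of what these figures imply for link power, a minimal Python sketch follows. It uses only the published 16 Gbps/wire and 0.5 pJ/bit numbers; the 64-wire link width is a hypothetical assumption for illustration, not a d-Matrix specification.

  # Link-power estimate from the Jayhawk headline specs.
  GBPS_PER_WIRE = 16    # published per-wire throughput (Gbps)
  PJ_PER_BIT = 0.5      # published energy-efficiency upper bound (pJ/bit)
  WIRES_PER_LINK = 64   # hypothetical BoW link width, for illustration only

  watts_per_wire = (GBPS_PER_WIRE * 1e9) * (PJ_PER_BIT * 1e-12)  # bit/s * J/bit = W
  print(f"Per wire: {watts_per_wire * 1e3:.1f} mW")              # -> 8.0 mW
  print(f"Per {WIRES_PER_LINK}-wire link: "
        f"{watts_per_wire * WIRES_PER_LINK * 1e3:.0f} mW at "
        f"{GBPS_PER_WIRE * WIRES_PER_LINK / 8:.0f} GB/s")        # -> 512 mW at 128 GB/s

At these rates, a hypothetical 64-wire link would move 128 GB/s while dissipating roughly half a watt, the kind of budget that makes dense die-to-die fabrics practical on organic substrates.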

Jayhawk is currently available for demos and evaluation. d-Matrix will showcase the Jayhawk platform at the Chiplet Summit, Jan. 24-26, in San Jose, Calif.

About d-Matrix

d-Matrix is building a new way of doing datacenter AI inferencing at scale using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration, the final frontier in AI compute efficiency, using innovative circuit techniques, ML tools, software, and algorithms. Learn more at dmatrix.ai.

Contacts

Media Contact
Kristen Caron
kristen.caron@aircoverpr.com
978-407-9283
