
Untether AI Increases Developer Velocity and Adds High-Performance Compute Flow to the imAIgine Software Development Kit

Open, flexible kernel library enables quick iterations of neural network functions;
High-performance compute flow allows development of non-neural network applications such as linear algebra, signal processing, and simulation acceleration

TORONTO--(BUSINESS WIRE)--Untether AI®, the leader in at-memory computation for artificial intelligence (AI) workloads, today announced the availability of the imAIgine® Software Development Kit (SDK) version 22.12. The imAIgine SDK provides an automated path to running neural networks on Untether AI’s runAI™ devices and tsunAImi® accelerator cards, with push-button quantization, optimization, physical allocation, and multi-chip partitioning. This release dramatically improves the speed at which developers can create and deploy neural networks or high-performance compute workloads, saving months of development time.

Increasing Developer Velocity for Custom Neural Networks

“There has been an explosion of neural networks over the last several years,” said Arun Iyengar, CEO of Untether AI. “Keeping up with support for these new, innovative networks requires an open, flexible tool flow, and with the 22.12 release of the imAIgine SDK we’ve made the necessary improvements to allow customers to quickly and easily add support without requiring Untether AI assistance.”

A key innovation in this release is the introduction of flexible kernels, which automatically adapt to different input and output shapes of neural network layers. Additionally, Untether AI is providing customers with the source code for these kernels as examples of code optimized for at-memory compute. Developers can modify these kernels and register them with the imAIgine compiler so that they can be selected during the automatic lowering process. In this manner, customers are free to self-support their neural network development. The imAIgine SDK includes a low-level kernel compiler, code profiler, and cycle-accurate simulator that give developers instant feedback on the performance of their custom kernels.
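To make the kernel-registration workflow concrete, the short Python sketch below illustrates the general pattern described above: kernels that adapt to any input shape are registered in a lookup table so that a lowering step can select them by operation name. The registry, decorator, and function names are illustrative stand-ins only, not the imAIgine SDK’s actual API.

    # Illustrative sketch only: the registration pattern described above,
    # not the imAIgine SDK API.
    from typing import Callable, Dict
    import numpy as np

    KERNEL_REGISTRY: Dict[str, Callable] = {}

    def register_kernel(op_name: str):
        """Register a kernel so a compiler-like lowering step can select it by op name."""
        def wrapper(fn: Callable) -> Callable:
            KERNEL_REGISTRY[op_name] = fn
            return fn
        return wrapper

    @register_kernel("relu")
    def relu_kernel(x: np.ndarray) -> np.ndarray:
        # A "flexible" kernel: it handles any input/output shape rather than a fixed layer size.
        return np.maximum(x, 0.0)

    def lower(op_name: str, x: np.ndarray) -> np.ndarray:
        # Stand-in for automatic lowering: look up the registered kernel and run it.
        return KERNEL_REGISTRY[op_name](x)

    print(lower("relu", np.array([[-1.0, 2.0], [3.0, -4.0]])))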

Introducing the High-Performance Compute Flow

“Customers are seeing the energy-centric benefits of Untether AI’s at-memory compute architecture in other, non-AI applications,” said Mr. Iyengar. “High-performance simulation, signal processing and linear algebra acceleration are a few of the applications that our customers are requesting.”

In response, the 22.12 release introduces a high-performance compute (HPC) design flow in the imAIgine SDK for runAI200 devices. The runAI200 devices have 511 memory banks, each with its own RISC processor and a two-dimensional array of 512 at-memory processing elements arranged as a single-instruction, multiple-data (SIMD) architecture. With the HPC flow, customers can directly develop “bare metal” kernels for the RISC processors and processing elements in the runAI200 devices. Users can then manually place the kernels in any topology on the memory banks and use pre-defined code for bank-to-bank data transmission. The code profiler within the imAIgine SDK shows exactly how the code is running, identifying compute bottlenecks and data transmission congestion, which can then be rectified by duplicating kernels or re-placing them within the runAI200 spatial architecture.
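As a rough mental model of this flow (and only that; the names and numbers below are simplified assumptions, not the runAI200 toolchain), the Python sketch splits data across a handful of toy “banks,” runs the same SIMD-style kernel on each, and tallies a crude per-bank cycle estimate in the spirit of the profiler output. An uneven tally is the kind of signal that would prompt duplicating or re-placing a kernel.

    # Illustrative sketch only: a toy model of manual kernel placement and
    # per-bank profiling, not the runAI200 toolchain.
    import numpy as np

    NUM_BANKS = 4      # toy value; the text above describes 511 banks per device
    SIMD_WIDTH = 8     # toy value standing in for the per-bank SIMD array

    def simd_scale(chunk: np.ndarray, factor: float) -> np.ndarray:
        # One "bare metal" kernel: an elementwise op applied across a SIMD-wide chunk.
        return chunk * factor

    def place_and_run(data: np.ndarray, placement: dict):
        # placement maps bank index -> kernel, mimicking a user-chosen spatial layout.
        chunks = np.array_split(data, NUM_BANKS)
        results, cycles = [], {}
        for bank, chunk in enumerate(chunks):
            results.append(placement[bank](chunk, 2.0))
            # Crude profiler stand-in: cycles ~ elements / SIMD lanes.
            cycles[bank] = int(np.ceil(chunk.size / SIMD_WIDTH))
        return np.concatenate(results), cycles

    out, profile = place_and_run(np.arange(64, dtype=np.float32),
                                 {b: simd_scale for b in range(NUM_BANKS)})
    print(profile)  # uneven per-bank cycle counts would flag a kernel to duplicate or re-place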

Reducing the Learning Curve

Whether using the neural network flow or the HPC flow, Untether AI provides online and downloadable documentation for all of the imAIgine SDK’s tools and procedures to create, quantize, compile, and run neural networks or low-level kernel code on runAI200 devices. Untether AI also offers a live, instructor-led training program that includes many tutorials and coding examples.

Availability

The latest version of the imAIgine SDK, version 22.12, is available today and can be downloaded from the Untether AI customer portal. To gain access, please visit www.untether.ai and request download privileges.

About Untether AI

Untether AI provides ultra-efficient, high-performance AI chips to enable new frontiers in AI applications. By combining the power efficiency of at-memory computation with the robustness of digital processing, Untether AI has developed a groundbreaking new chip architecture for neural net inference that eliminates the data movement bottleneck that costs energy and performance in traditional architectures. Founded in Toronto in 2018, Untether AI is funded by CPPIB, GM Ventures, Intel Capital, Radical Ventures, and Tracker Capital. www.untether.ai.

All references to Untether AI trademarks are the property of Untether AI. All other trademarks mentioned herein are the property of their respective owners.

Contacts

Media Contact for Untether AI:
Michelle Clancy, Cayenne Global, +1.503.702.4732
michelle.clancy@cayennecom.com

Company Contact:
Robert Beachler, Untether AI, +1.650.793.8219
beach@untether.ai
