
Untether AI Ships speedAI 240 Slim: World’s Fastest, Most Energy Efficient AI Inference Accelerator for Cloud to Edge Applications

Delivers AI Inference Silicon and Software Ideally Suited for Markets Including Automotive, Agriculture, and Machine Vision

TORONTO--(BUSINESS WIRE)--Untether AI®, the leader in energy-centric AI inference acceleration, today announced broad availability of its highly anticipated speedAI 240 Slim AI inference accelerator cards. Having recently received top marks in the MLPerf benchmark for AI inference, speedAI 240 Slim cards provide customers with the performance, energy efficiency, AI model support, and scalability they need for a broad range of applications, from regional clouds to the edge. J-squared and Ola-Krutrim are among the customers who have already deployed speedAI®.

“The true potential for AI does not end with datacenters; it extends to the cars we drive and the fields that produce our food. Bringing AI to these environments is essential, and it requires a vastly different approach at both the hardware and software level,” said Chris Walker, CEO of Untether AI. “With our At-Memory Compute architecture, we are bringing proven datacenter-class AI acceleration to edge applications at a price point, footprint, and energy efficiency unrivaled in the industry.”

AI at the Edge Demands Energy Efficiency and Cost Efficiency

AI inference acceleration is anticipated to account for 80% of the AI chip market by 2027¹, dominated by edge and on-prem datacenter applications. As the focus of AI shifts from training to inference, the importance and unique needs of edge acceleration are clear. Edge AI applications cannot tolerate the high latency, large capital costs, and non-determinism of cloud-based AI services – they require solutions that meet very different size, power, and operating cost requirements.

Available in a low-profile, 75-watt TDP PCIe design that delivers high performance at reduced power consumption, Untether AI’s speedAI 240 Slim accelerator cards were recently recognized as achieving the world’s lowest latency and highest throughput on the MLPerf inference benchmark. Customer applications for speedAI 240 Slim cards are broad, including automotive vision systems, object detection in aerospace and defense, defect identification in machine-vision manufacturing, and deployment in agricultural settings. For example, Untether AI recently announced an agreement with J-squared that includes development of edge AI compute machines for agricultural technology applications.

Each of these applications has its own unique AI models that require optimal performance, highlighting the flexibility and maturity of Untether AI’s imAIgine software development kit (SDK). The imAIgine SDK provides a push-button flow, streamlining the conversion of trained neural network models into optimized, inference-ready models that run on speedAI acceleration solutions.
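For illustration, a minimal sketch of such a "train once, compile for the accelerator" flow appears below, assuming a PyTorch-trained model exported to ONNX before being handed to the vendor toolchain. The imaigine module and its compile_model and InferenceSession names are hypothetical placeholders, since this release does not document the SDK's actual API; only the PyTorch/ONNX export step uses real, documented calls.

# Minimal sketch of a "train once, compile for the accelerator" flow.
# NOTE: the imaigine module, compile_model, and InferenceSession below are
# hypothetical placeholders; only the PyTorch/ONNX export uses real APIs.

import torch
import torchvision.models as models

# 1. Start from a trained model (a stock ResNet-50 as a stand-in).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# 2. Export to a framework-neutral format such as ONNX.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx", opset_version=13)

# 3. Hand the exported graph to the vendor toolchain, which would quantize,
#    schedule, and package it for the accelerator (hypothetical calls).
# import imaigine
# artifact = imaigine.compile_model("resnet50.onnx", target="speedai-240-slim")
# session = imaigine.InferenceSession(artifact)
# outputs = session.run(dummy_input.numpy())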

Scalability for On-Prem and Regional Datacenters

The throughput and energy efficiency of Untether AI acceleration solutions, combined with their scalability, make them appealing for low-latency on-prem and regional datacenters. Ola-Krutrim has already deployed speedAI 240 Slim cards at its locations in India and the United States.

“Running the speedAI cards is the first step of our ongoing partnership with Untether AI,” said Sambit Sahu, SVP of Engineering at Ola-Krutrim. “We have them running with an Arm-based CPU system and are seeing that the cards are hitting their AI inference performance and power efficiency targets out of the box.”

speedAI 240 Slim accelerator cards are now available for purchase. For more information, please visit Untether AI's website. To place an order, please submit a request at https://www.untether.ai/about/contact/

About Untether AI

Untether AI® provides energy-centric AI inference acceleration from the edge to the cloud, supporting any type of neural network model. With its at-memory compute architecture, Untether AI has solved the data movement bottleneck that costs energy and performance in traditional CPUs and GPUs, resulting in high-performance, low-latency neural network inference acceleration without sacrificing accuracy. Untether AI embodies its technology in runAI® and speedAI® devices, tsunAImi® acceleration cards, and its imAIgine® Software Development Kit. More information can be found at www.untether.ai.

All references to Untether AI trademarks are the property of Untether AI. All other trademarks mentioned herein are the property of their respective owners.

Contacts

Media Contact for Untether AI:
Michelle Clancy Fuller, Cayenne Global, LLC
Michelle.clancy@cayennecom.com
1-503-702-4732

Company Contact:
Robert Beachler, Untether AI
beach@untether.ai
+1.650.793.8219
