
I/ONX Shatters the Host Tax: New Symphony SixtyFour Architecture Delivers 50% TCO Savings Across AI Inference and Fine-Tuning Lifecycle

Eliminating the infrastructure overhead of legacy designs, I/ONX debuts a scaled AI inference and fine-tuning stack that cuts power by up to 30 kW per rack and reduces the cost of rack-scale deployments by up to 70%

LAS VEGAS--(BUSINESS WIRE)--I/ONX High Performance Compute (HPC), a leading provider of heterogeneous AI systems, today announced the global launch of the Symphony SixtyFour, a high-density platform designed to collapse the physical and economic footprint of AI inference and fine-tuning infrastructure. By supporting up to 64 accelerators on a single node, I/ONX eliminates the redundant Host Tax—the massive overhead in power, hardware, and licensing that negatively impacts ROI in enterprise AI.


While inference now accounts for 90% of enterprise AI workloads, enterprises remain largely limited to deploying it on hardware platforms designed for training. Symphony SixtyFour delivers significant CapEx and OpEx reductions for inference and fine-tuning workloads. Compared with training-class clusters, the I/ONX system recovers the roughly 30 kW Host Tax typically wasted on redundant CPUs, memory, and support hardware in multi-node deployments, while simplifying ongoing support tasks. For production-scale inference on alternative accelerators, the platform is even more transformative, drawing one-fourth the power of a traditional 64-device cluster and eliminating the need for liquid cooling in inference-only deployments.

“Enterprise AI infrastructure is entering a new phase of maturity,” said I/ONX CEO Justyn Hornor. “The training-centric designs of the past served us well during the experimental phase, but they weren't optimized for the power-constrained, production-heavy world we live in today. With Symphony SixtyFour, we’ve reimagined the stack to be more fluid and fit for purpose, allowing organizations to master massive-scale inference while finally eliminating the unnecessary infrastructure waste that has hindered ROI.”

The Symphony SixtyFour Advantage: Fit-for-Purpose Silicon. The platform is engineered to maximize every watt and dollar for enterprise AI.

  • Eliminating the Training Host Tax: For large-scale inference and fine-tuning, Symphony SixtyFour collapses the infrastructure stack from eight nodes into one. This consolidation removes up to 30kW of wasted support power, allowing for higher compute density within existing power envelopes.
  • Zero-Hop, Near-Deterministic Performance: By housing 64 accelerators within a single OS instance, Symphony SixtyFour eliminates East-West network latency between devices.
  • Heterogeneous Flexibility: Symphony SixtyFour is fully vendor-neutral and built for mixed-mode operations. Enterprises can seamlessly pair high-end GPUs (including AMD and NVIDIA) with purpose-built, low-power co-processors and layer in specialized inference silicon (Axelera, FuriosaAI, Tenstorrent), future-proofing infrastructure against shifting market dynamics.
  • Collapsing OpEx by Eliminating the Software Tax: Beyond hardware and power, Symphony SixtyFour provides massive operational relief. By presenting a 64-device fleet through a single management environment, I/ONX collapses the Software Tax, saving enterprises up to $500,000 annually in enterprise operating system and orchestration licensing per cluster.
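The power arithmetic behind the figures above can be sketched in a few lines. This is a minimal illustration, not vendor math: the eight-to-one consolidation and the up-to-30 kW ceiling come from this release, while the even per-node overhead split is a hypothetical assumption.

```python
# Back-of-envelope model of the "Host Tax" consolidation described above.
# Assumption (not from the release): the 30 kW of support overhead is
# split evenly across the eight legacy hosts, i.e. 3.75 kW per node for
# CPUs, DRAM, NICs, fans, and other non-accelerator hardware.

legacy_nodes = 8
host_overhead_kw_per_node = 3.75  # assumed even split of the 30 kW figure

# Support power reclaimed by collapsing eight hosts into a single node
reclaimed_kw = legacy_nodes * host_overhead_kw_per_node
print(f"Reclaimed support power: {reclaimed_kw} kW")  # 30.0 kW
```

Under this assumed split, the consolidation accounts for the full "up to 30 kW" of recovered rack power; a heavier or lighter per-host overhead would scale the result accordingly.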

I/ONX accelerates the enterprise shift toward systems designed specifically for inference and fine-tuning at scale. Symphony SixtyFour is available now, enabling organizations to reclaim critical power capacity and reduce costs. I/ONX is committed to delivering high-density infrastructure required to unlock the maximum economic and operational potential of production AI.

About I/ONX

I/ONX High Performance Compute (HPC) is the pioneer of heterogeneous AI infrastructure, and is redefining the AI lifecycle by eliminating the Host Tax of legacy architectures. The I/ONX flagship Symphony SixtyFour consolidates up to 64 accelerators into a single node, reducing rack-scale TCO by 50% or more. By dramatically lowering power consumption and maximizing hardware utilization, I/ONX enables enterprises to achieve production-scale AI with unprecedented efficiency and faster ROI.

Contacts

I/ONX High Performance Compute
media@i-onx.com
