Exostellar Enables AI Infrastructure Efficiency On AMD Instinct GPUs

SANTA CLARA, Calif.--(BUSINESS WIRE)--Exostellar, a leader in self-managed AI infrastructure orchestration, today announced support for AMD solutions, bringing together open, high-performance AMD Instinct™ GPUs and Exostellar’s GPU-agnostic orchestration platform to meet enterprise demands for transparency, choice, and performance.

Why This Matters

As enterprises and OEMs seek more transparent and flexible compute ecosystems, AMD's commitment to open standards and heterogeneous integration aligns well with Exostellar’s architectural approach. Exostellar’s heterogeneous xPU orchestration platform is designed to be fully GPU-agnostic, decoupling applications from the underlying hardware to enable flexible scheduling across mixed infrastructure. This directly addresses a critical industry need: freedom of choice without vendor lock-in.

“Open ecosystems are key to building next-generation AI infrastructure,” said Anush Elangovan, Vice President, AI Software at AMD. “Together with Exostellar, we’re enabling advanced capabilities like topology-aware scheduling and resource bin-packing on AMD Instinct™ GPUs, helping enterprises maximize GPU efficiency and shorten time to value for AI workloads.”
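As a simplified illustration of the bin-packing idea mentioned above (this is not Exostellar’s scheduler; the node names, capacities, and workloads are hypothetical), a first-fit-decreasing pass consolidates GPU requests onto as few nodes as possible so fewer accelerators sit idle:

```python
# Illustrative only: a first-fit-decreasing sketch of GPU bin-packing,
# not Exostellar's actual scheduler. Node capacities and workload names
# are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gpus_free: int                          # free AMD Instinct GPUs on this node
    placed: list = field(default_factory=list)

def bin_pack(workloads: dict[str, int], nodes: list[Node]) -> dict[str, str]:
    """Place each workload (name -> GPUs requested) on the first node that
    still has room, packing larger requests first to reduce fragmentation."""
    placement = {}
    for name, request in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for node in nodes:
            if node.gpus_free >= request:
                node.gpus_free -= request
                node.placed.append(name)
                placement[name] = node.name
                break
        else:
            placement[name] = "pending"     # no node fits; queue the workload
    return placement

if __name__ == "__main__":
    nodes = [Node("mi300x-node-a", gpus_free=8), Node("mi300x-node-b", gpus_free=8)]
    jobs = {"llm-finetune": 6, "embedding-batch": 2, "eval-run": 4, "notebook": 1}
    print(bin_pack(jobs, nodes))
```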

Benefits at a Glance

With Exostellar’s platform now enabled on AMD Instinct™ GPUs, enterprises stand to realize a range of benefits.

  1. For infrastructure teams, it delivers centralized visibility across heterogeneous environments, dynamic GPU sizing, and optimized compute utilization—enabled by Exostellar’s fine-grained GPU slicing and the high-bandwidth AMD Instinct GPU architecture.
  2. AI developers will experience reduced queuing times, smarter workload placement, and faster experimentation cycles, thanks to Exostellar’s advanced orchestration and intuitive UI/UX.
  3. For business leaders, these improvements translate into lower total cost of ownership: fewer required nodes, better use of powerful AMD Instinct GPUs, and accelerated model deployment—all supported by Exostellar’s platform automation and hardware efficiency from AMD.

Exostellar’s Technical Differentiation

Unlike black-box Kubernetes solutions, Exostellar offers:

  1. A superior UI/UX that simplifies cluster management and monitoring.
  2. Workload-aware slicing: Exostellar’s GPU Optimizer on AMD Instinct MI300X GPUs enables precise resource right-sizing with enforced isolation, unlike KAI’s fractional mode, while remaining vendor-agnostic alongside NVIDIA’s MIG option (see the sketch after this list).
  3. Unique features, some unavailable in other open-source alternatives: workload-driven orchestration, resource-aware placement, and dynamic scheduling tailored for AMD Instinct GPUs.
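The sketch below is a conceptual model of workload-aware slicing, assuming hard memory isolation on a single MI300X with 192 GB of HBM3; it is not the GPU Optimizer API, and the workload names and slice sizes are hypothetical:

```python
# Illustrative sketch of workload-aware GPU slicing: carve one MI300X's
# 192 GB of HBM3 into right-sized, isolated slices. Conceptual model only,
# not Exostellar's GPU Optimizer API; slice names and sizes are hypothetical.
TOTAL_HBM_GB = 192   # AMD Instinct MI300X per-GPU memory

def slice_gpu(requests_gb: dict[str, int], total_gb: int = TOTAL_HBM_GB) -> dict[str, int]:
    """Grant each workload exactly the HBM it asked for, rejecting requests
    that would oversubscribe the device (hard isolation, not best-effort sharing)."""
    granted, remaining = {}, total_gb
    for workload, need in requests_gb.items():
        if need <= remaining:
            granted[workload] = need
            remaining -= need
        else:
            raise MemoryError(f"{workload} needs {need} GB, only {remaining} GB left")
    return granted

print(slice_gpu({"inference-7b": 24, "inference-13b": 40, "rag-embeddings": 16}))
# -> {'inference-7b': 24, 'inference-13b': 40, 'rag-embeddings': 16}  (112 GB still free)
```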

These capabilities position Exostellar as a next-generation orchestrator that aligns with the AMD vision and elevates the value of the two companies’ joint work in the compute ecosystem.

AMD Instinct GPUs: Memory Advantage Driving ROI

AMD Instinct GPUs leverage cutting-edge HBM3 and HBM3E technology. For example, AMD Instinct MI300X GPUs deliver up to 192 GB HBM3 with 5.3 TB/s bandwidth, the MI325X raises the bar to up to 256 GB HBM3E and 6 TB/s, and the current MI355X GPUs deliver up to 288 GB HBM3E with 8 TB/s bandwidth. This massive memory footprint enables larger model deployment, fewer nodes, and more efficient KV caching—directly benefiting from Exostellar’s fine-grained compute sizing and orchestration capabilities, leading to reduced infrastructure costs and faster time-to-value.
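As a rough illustration of how the larger memory footprint reduces node count, the back-of-envelope sketch below estimates the GPUs required just to hold model weights in FP16 (2 bytes per parameter); the model sizes are hypothetical examples, and KV cache, activations, and runtime overhead are deliberately ignored, so real deployments need additional headroom:

```python
# Back-of-envelope sizing: GPUs needed just to hold model weights in FP16.
# Per-GPU memory capacities are taken from the figures above; model sizes
# are hypothetical examples. KV cache and runtime overhead are ignored.
import math

HBM_GB = {"MI300X": 192, "MI325X": 256, "MI355X": 288}

def gpus_for_weights(params_billion: float, hbm_gb: int, bytes_per_param: int = 2) -> int:
    weights_gb = params_billion * bytes_per_param     # e.g. 70B params ~ 140 GB in FP16
    return math.ceil(weights_gb / hbm_gb)

for gpu, cap in HBM_GB.items():
    for size_b in (70, 405):
        print(f"{size_b}B-parameter model on {gpu} ({cap} GB): "
              f"{gpus_for_weights(size_b, cap)} GPU(s) for weights alone")
# 70B fits on a single device in every case; a 405B model drops from
# 5 GPUs (MI300X) to 4 (MI325X) to 3 (MI355X) as per-GPU memory grows.
```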

“Our goal has always been to help customers get the most out of their AMD investments. With this collaboration, Exostellar extends that mission—because it’s not just about raw compute, but about next‑level orchestration, utilization, and ROI,” said Tony Shakib, Chairman and CEO of Exostellar.

Contacts

Media Contact
Enterprises ready to unlock transparent, high-performance AI infrastructure via AMD + Exostellar orchestration are invited to contact:
Nayan Lad
Sr. Manager, Product Marketing, Exostellar
nayan@exostellar.ai | www.exostellar.ai

