
Mirantis Brings Enterprise-Grade Controls to AI Infrastructure

k0rdent adds capabilities enabling enterprises and GPU cloud operators to govern, scale, and monetize sovereign AI services

CAMPBELL, Calif.--(BUSINESS WIRE)--Mirantis, delivering Kubernetes-native infrastructure for AI, today announced additional capabilities for k0rdent AI, further expanding the platform beyond infrastructure management to help enterprises, neoclouds, and GPU cloud operators monetize AI infrastructure investments.

The new k0rdent AI Model Registry and k0rdent AI Inference Mesh enable organizations to securely host, govern, route, and meter AI models and inference services across federated computing resources. Together, the two new products help organizations transform raw GPU infrastructure into governed, revenue-generating AI platforms.

Mirantis also introduced the k0rdent AI Inference Runtime, designed to maximize tokens per GPU-second for improved infrastructure efficiency and utilization.

“As organizations move AI projects from experimentation into production, infrastructure teams are increasingly confronting operational and governance challenges around model distribution, inference visibility, compliance enforcement, and GPU economics,” said Kevin Kamel, vice president of product development at Mirantis. “Enterprises and GPU operators have largely been forced to stitch together fragile workflows and disconnected tools to operationalize AI. Models cannot be treated the same as containers because they have their own governance, sovereignty, compliance, and lifecycle requirements. The capabilities we’re providing today are validated and benchmarked for users.”

k0rdent AI Model Registry

k0rdent AI Model Registry is optimized for AI model storage and distribution workflows. It provides a secure, OCI-native registry for managing large language models (LLMs), fine-tuned variants, quantized builds, and related AI artifacts across distributed infrastructure.

The registry reduces the operational complexity often associated with secure AI model distribution.

k0rdent AI Inference Mesh

k0rdent AI Inference Mesh routes, meters, audits, and enforces policy on every inference request across models, regions, clusters, and providers. It provides a full view of where AI requests are going, what they cost, and where compliance gaps exist.

The new products build on Mirantis’ k0rdent AI platform, which focuses on Kubernetes-native AI infrastructure spanning bare metal, virtual machines, managed Kubernetes, and sovereign clouds. Both are available today in preview; additional information can be found on the Mirantis website. Attendees at Dell Technologies World, May 18-21 in Las Vegas, can visit Mirantis at booth 414, schedule meetings with the company, and learn more on the Dell Technologies World resources page.

About Mirantis

Mirantis delivers the fastest path to profitable, scalable GPU cloud infrastructure for neoclouds and enterprise AI factories, with full-stack AI infrastructure technology that removes complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Through k0rdent AI and strategic partnerships, Mirantis enables organizations to transform GPU cloud economics with production-grade multi-tenancy, intelligent workload orchestration, and automated operations that maximize utilization and profitability. With more than 20 years delivering mission-critical open source cloud technologies, Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes and GPU orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.

Mirantis serves many of the world’s leading enterprises, including Adobe, Ericsson, Inmarsat, MetLife, PayPal, and Societe Generale. Learn more at www.mirantis.com.

Mirantis is a registered trademark of Mirantis, Inc. Metal-to-Model is a trademark of Mirantis, Inc. All other trademarks are the property of their respective owners.

Contacts

Joseph Eckert for Mirantis
jeckert@eckertcomms.com
