
Qdrant Cloud Ships Enterprise-Grade Features: GPU-Accelerated Indexing, Multi-AZ Clusters, and Audit Logging

Qdrant Cloud customers can now index faster, reach higher availability, and meet compliance requirements through auditability.

BERLIN & NEW YORK--(BUSINESS WIRE)--Qdrant, the leading provider of high-performance, composable vector search, today announced three enterprise capabilities for Qdrant Cloud: GPU-accelerated indexing, Multi-AZ clusters, and audit logging. Together, these address the performance, availability, and compliance requirements that enterprise teams need for production vector search — especially as AI workloads write continuously, require always-on retrieval, and demand accountability for every decision made on retrieved context.


As AI systems become more mission-critical, vector search faces new demands: indexing that keeps pace with heavy write loads, availability that holds through infrastructure failures, and audit trails for autonomous systems making decisions on retrieved context. Qdrant Cloud now addresses these challenges.

“GPUs aren't just for model inference. They're for indexing too. We've supported GPU-accelerated HNSW construction in open source since v1.13, and now it's available in Qdrant Cloud,” said Andre Zayarni, CEO and Co-Founder of Qdrant. “Pair that with multi-AZ replication and audit logging, and enterprise teams have everything they need to run Qdrant in production for their most critical workloads.”

GPU-accelerated indexing delivers up to 4x faster HNSW index builds on dedicated GPUs in Qdrant Cloud, based on Qdrant benchmarks. Customers can add GPUs to existing clusters for high-volume indexing bursts. Available today on AWS, with additional cloud providers and regions on the roadmap.

Multi-AZ clusters replicate data across three availability zones within a region. This is cross-AZ replication, not failover: if an availability zone goes down, reads and writes continue from the surviving zones with no failover delay and no customer action required. Multi-AZ clusters are available on the Premium Multi-AZ tier, with uptime SLAs of up to 99.95%.

Audit logging captures all operations performed through the Qdrant API: queries, upserts, deletes, collection management, and snapshot operations. Each entry is structured JSON with user and API key attribution, timestamp, target collection, and result of the action (allowed or denied). When an autonomous system acts on retrieved context, audit logging provides the trail showing which service queried which collection, when, and whether the request was authorized. Retention is configurable; for long-term needs, logs can be downloaded via the API and stored externally. Audit logging is available on all paid Qdrant Cloud clusters.
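As a rough illustration of the audit trail described above, the sketch below parses one audit entry and renders it as a single readable line. The field names (`actor`, `api_key_id`, and so on) are illustrative assumptions shaped after the description in this release, not Qdrant Cloud's documented log schema.

```python
import json

# Hypothetical audit log entry, shaped after the description above.
# Field names are illustrative assumptions, not a documented schema.
entry_json = """
{
  "timestamp": "2025-06-12T14:03:27Z",
  "actor": {"user": "svc-recommender", "api_key_id": "key_7f3a"},
  "action": "query",
  "collection": "product-catalog",
  "result": "allowed"
}
"""

def summarize(raw: str) -> str:
    """Render one structured audit entry as a human-readable line."""
    e = json.loads(raw)
    return (f'{e["timestamp"]} {e["actor"]["user"]} '
            f'({e["actor"]["api_key_id"]}) {e["action"]} '
            f'on "{e["collection"]}": {e["result"]}')

print(summarize(entry_json))
```

A downstream compliance pipeline could apply the same kind of transformation to logs exported via the API before shipping them to external long-term storage.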

Enterprise teams evaluating vector search typically ask three questions:

  • Can it keep up? High-write workloads (dynamic catalogs, agentic memory, real-time recommendations) need indexing that keeps pace.
  • Can it stay up? SRE teams and procurement require multi-AZ availability before signing off on mission-critical infrastructure.
  • Can we audit it? Compliance and security teams need a trail showing who accessed what and when, especially as AI agents make more autonomous decisions.

With today's release, Qdrant Cloud answers all three, and capabilities are available now for Qdrant Cloud customers. For more information, visit qdrant.tech/blog/qdrant-cloud-enterprise-launch.

About Qdrant

Qdrant is a high-performance, composable vector search engine built in Rust for production-grade semantic, hybrid, and agentic workloads. Engineers combine retrieval primitives with explicit control over ranking, indexing, latency, and relevance trade-offs. Qdrant delivers predictable, low-tail latency at billion-scale across cloud, hybrid, on-premises, and edge deployments. The open-source project has surpassed 250 million downloads and over 30,000 GitHub stars. Qdrant was recognized in The Forrester Wave: Vector Databases, Q3 2024, GigaOm's Radar for Vector Databases v3, and Sifted's 2025 B2B SaaS Rising 100. Qdrant powers production AI workloads at Canva, Tripadvisor, HubSpot, Bosch, and Deutsche Telekom.

Contacts

For more information, visit qdrant.tech or contact press@qdrant.com.

Qdrant

