
Qdrant Expands Enterprise Capabilities with New Cloud Features, Strengthening AI Workloads for Large-Scale Deployments

BERLIN & NEW YORK--(BUSINESS WIRE)--As AI adoption accelerates, enterprises are facing new challenges in scaling, securing, and managing high-performance AI applications. Today, Qdrant, the high-performance, open-source vector database, is introducing new enterprise capabilities in Qdrant Cloud that are designed to remove operational bottlenecks and empower large-scale AI deployments. The latest updates include single sign-on (SSO), cloud role-based access control (RBAC), granular database API keys for fine-grained access control, advanced monitoring and observability via Prometheus/OpenMetrics for connecting external monitoring systems, and a cloud API for seamless automation.

Qdrant already handles billions of vectors at scale, delivering high-performance search for AI applications. But true enterprise scalability requires more than just speed—it demands robust security, observability, and automation. With these new features, Qdrant Cloud enables enterprises to manage vector search infrastructure with greater efficiency, control, and compliance.

Enterprise-Ready Workflows for Scalable AI

The new enterprise feature suite is designed to integrate seamlessly into existing enterprise architectures, providing enhanced security, automation, and real-time observability to ensure AI applications run smoothly at any scale.

Key enterprise capabilities include:

  • Cloud API for Simplified Management – Enables users to programmatically manage clusters, authentication, and cloud configurations, automating deployments and scaling with minimal overhead. It also supports infrastructure-as-code workflows such as Terraform for repeatable, version-controlled deployments (see the first sketch after this list).
  • Secure Access & Authentication – Cloud RBAC (early access) enables fine-grained permissions for managing clusters, billing, and hybrid cloud deployments in Qdrant Cloud. Single Sign-On (SSO) allows users to log in through Okta, Google Workspace, Azure AD (Entra ID), SAML, PingFederate, and more. Additionally, granular Database API Keys restrict access per cluster and per collection, with read-only or read/write permissions for each key, enforced expirations, and instant revocation for secure data access (see the second sketch after this list).
  • Real-Time Monitoring & Observability – An integration with Prometheus/OpenMetrics enables tracking of query latency, request volumes, CPU usage, memory usage, and disk space, and is compatible with Datadog, Grafana, and other monitoring tools (see the final sketch after this list).
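
As a rough illustration of the kind of automation the Cloud API enables, the first sketch below provisions a cluster over HTTP. The base URL, endpoint path, payload fields, and environment variable names are assumptions made for illustration only; the authoritative request format is defined in the Qdrant Cloud API documentation.

```python
# Hypothetical sketch: provisioning a Qdrant Cloud cluster programmatically.
# The endpoint path, payload fields, and response shape below are illustrative
# assumptions, not the documented Cloud API contract.
import os
import requests

CLOUD_API_BASE = "https://cloud.qdrant.io/api"        # assumed base URL
ACCOUNT_ID = os.environ["QDRANT_ACCOUNT_ID"]          # placeholder identifier
API_TOKEN = os.environ["QDRANT_CLOUD_TOKEN"]          # placeholder management token

def create_cluster(name: str, cloud_provider: str, region: str) -> dict:
    """Request a new managed cluster (illustrative payload)."""
    resp = requests.post(
        f"{CLOUD_API_BASE}/accounts/{ACCOUNT_ID}/clusters",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"name": name, "cloud_provider": cloud_provider, "region": region},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_cluster("prod-search", "aws", "us-east-1"))
```

The same pattern can be wrapped in Terraform or another infrastructure-as-code tool so that cluster definitions live in version control alongside the rest of the deployment.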
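The second sketch shows how a scoped database API key might be used from an application, assuming the key was issued in Qdrant Cloud with read-only access to a single collection. The cluster URL, key value, collection name, and vector dimensionality are placeholders.

```python
# Minimal sketch: querying Qdrant with a scoped, read-only database API key.
# The URL, key, collection name, and vector size are placeholders; the key is
# assumed to have been created with read-only access limited to the
# "support_tickets" collection.
from qdrant_client import QdrantClient

client = QdrantClient(
    url="https://YOUR-CLUSTER-ID.cloud.qdrant.io:6333",  # placeholder cluster URL
    api_key="READ_ONLY_SCOPED_KEY",                      # placeholder scoped key
)

# Reads against the permitted collection succeed...
hits = client.query_points(
    collection_name="support_tickets",
    query=[0.1] * 384,   # placeholder query embedding
    limit=5,
)
print(hits.points)

# ...while writes, or reads against other collections, are rejected by the
# server because the key's permissions do not cover them.
```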
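The final sketch illustrates what the Prometheus/OpenMetrics integration exposes by reading the cluster's /metrics endpoint directly. In practice a Prometheus server or a Datadog/Grafana agent scrapes this endpoint; the cluster URL, API key, and the metric-name prefixes filtered here are placeholders, since the exact series available depend on the deployment.

```python
# Sketch: reading Qdrant's Prometheus/OpenMetrics exposition directly.
# The URL and API key are placeholders; the metric-name prefixes are examples.
import requests

METRICS_URL = "https://YOUR-CLUSTER-ID.cloud.qdrant.io:6333/metrics"
API_KEY = "MONITORING_KEY"  # placeholder

resp = requests.get(METRICS_URL, headers={"api-key": API_KEY}, timeout=10)
resp.raise_for_status()

# Print only the series of interest; the full exposition includes request,
# latency, and resource-usage metrics.
for line in resp.text.splitlines():
    if line.startswith(("rest_responses", "collections_total", "memory")):
        print(line)
```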

Built for Enterprise AI at Scale

“As enterprises scale AI applications, they need a vector database that is not only high-performing but also highly secure with granular access control, easily manageable, and built for real-world operations,” said André Zayarni, Qdrant CEO & Co-Founder. “With these enterprise-grade enhancements, Qdrant Cloud ensures organizations can deploy, monitor, and control their AI workloads effortlessly, reducing complexity while maintaining peak performance.”

Scale Enterprise Applications with Qdrant

Beyond these new enterprise features, Qdrant continues to lead in vector search for enterprise-scale AI applications, offering Qdrant Hybrid Cloud to give organizations unparalleled control and sovereignty over their data and vector search workloads. Qdrant also recently announced its platform-independent GPU-accelerated vector indexing capability, which delivers up to 10x faster index building and is particularly relevant for large-scale datasets.

By doubling down on enterprise-grade functionality, Qdrant addresses the growing demand for high-performance, secure, and scalable vector search solutions in the AI-driven enterprise landscape.

Learn more about the announcement here.

About Qdrant

Qdrant is the leading, high-performance, scalable, open-source vector database and search engine, essential for building the next generation of AI/ML applications. Qdrant is able to handle billions of vectors, supports the matching of semantically complex objects, and is implemented in Rust for performance, memory safety, and scale. Recently, Qdrant’s open-source project surpassed 10 million installs and earned a place in The Forrester Wave™: Vector Databases, Q3 2024. The company was also recognized as one of Europe’s top 10 startups in Sifted’s 2024 B2B SaaS Rising 100, an annual ranking of the most promising B2B SaaS companies valued under $1 billion.

Contacts

For more information, please visit qdrant.tech or contact:
press@qdrant.com

Qdrant

