
Elastic Announces First-of-its-kind Search AI Lake to Scale Low Latency Search

The pioneering architecture powers a new Elastic Cloud Serverless offering for rapid search, observability, and security workloads

SAN FRANCISCO--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the Search AI Company, today announced Search AI Lake, a first-of-its-kind, cloud-native architecture optimized for real-time, low-latency applications including search, retrieval augmented generation (RAG), observability and security. The Search AI Lake also powers the new Elastic Cloud Serverless offering, which removes operational overhead to automatically scale and manage workloads.

With the expansive storage capacity of a data lake and the powerful search and AI relevance capabilities of Elasticsearch, Search AI Lake delivers low-latency query performance without sacrificing scalability, relevance, or affordability.

Search AI Lake benefits include:

  • Boundless scale, decoupled compute and storage: Fully decoupling storage and compute enables effortless scalability and reliability using object storage, while dynamic caching supports high throughput, frequent updates, and interactive querying of large data volumes. This eliminates the need to replicate indexing operations across multiple servers, cutting indexing costs and reducing data duplication.
  • Real-time, low latency: Multiple enhancements maintain excellent query performance even when the data is safely persisted on object stores. This includes the introduction of smart caching and segment-level query parallelization to reduce latency by enabling faster data retrieval and allowing more requests to be processed quickly.
  • Independently scale indexing and querying: By separating indexing and search at a low level, the platform can independently and automatically scale to meet the needs of a wide range of workloads.
  • GenAI-optimized native inference and vector search: Users can leverage a native suite of powerful AI relevance, retrieval, and reranking capabilities, including a native vector database fully integrated into Lucene, open inference APIs, semantic search, and first- and third-party transformer models, all of which work seamlessly with the platform's array of search functionalities.
  • Powerful query and analytics: Elasticsearch’s powerful query language, ES|QL, is built in to transform, enrich, and simplify investigations with fast concurrent processing irrespective of data source and structure. Full support for precise, efficient full-text search, time series analytics, and pattern identification in geospatial analysis is also included.
  • Native machine learning: Users can build, deploy, and optimize machine learning directly on all data for superior predictions. For security analysts, prebuilt threat detection rules can easily run across historical information, even years back. Similarly, unsupervised models perform near-real-time anomaly detection retrospectively on data spanning much longer time periods than other SIEM platforms.
  • Truly distributed - cross-region, cloud, or hybrid: Query data in the region or data center where it was generated, all from one interface. Cross-cluster search (CCS) avoids the requirement to centralize or synchronize: within seconds of being ingested, any data format is normalized, indexed, and optimized for extremely fast querying and analytics, all while reducing data transfer and storage costs.
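To give a sense of the piped query style the ES|QL bullet above describes, here is a minimal illustrative query. The index name (`logs-web-traffic`) and field names (`status_code`, `host`) are hypothetical examples, not part of the announcement; the pipe-based syntax follows Elastic's published ES|QL documentation.

```esql
FROM logs-web-traffic
| WHERE status_code >= 500
| STATS error_count = COUNT(*) BY host
| SORT error_count DESC
| LIMIT 10
```

Each stage transforms the output of the previous one, so filtering, aggregation, and sorting compose in a single statement regardless of how the underlying data is structured.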

Search AI Lake powers a new Elastic Cloud Serverless offering that harnesses the innovative architecture’s speed and scale to remove operational overhead so users can quickly and seamlessly start and scale workloads. All operations, from monitoring and backup to configuration and sizing, are managed by Elastic – users just bring their data and choose Elasticsearch, Elastic Observability, or Elastic Security on Serverless.

“To meet the requirements of more AI and real-time workloads, it’s clear a new architecture is needed that can handle compute and storage at enterprise speed and scale – not one or the other,” said Ken Exner, chief product officer at Elastic. “Search AI Lake pours cold water on traditional data lakes that have tried to fill this need but are simply incapable of handling real-time applications. This new architecture and the serverless projects it powers are precisely what’s needed for the search, observability, and security workloads of tomorrow.”

Search AI Lake and Elastic Cloud Serverless are currently available in tech preview. For more information on how to get started, read the Elastic blog.

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, enables everyone to find the answers they need in real-time using all their data, at scale. Elastic’s solutions for search, observability and security are built on the Elastic Search AI Platform, the development platform used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.

Contacts

Elastic Global PR
PR-team@elastic.co
