
Elastic Introduces Native Inference Service in Elastic Cloud

New service to provide GPU-accelerated embedding and retrieval models

SAN FRANCISCO--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the Search AI Company, today announced the Elastic Inference Service (EIS), a GPU-accelerated inference-as-a-service for Elasticsearch semantic search, vector search, and generative AI workflows.

Every generative AI and vector search application relies on inference, and Elastic now delivers these capabilities natively as part of Elastic Cloud. As data and query volumes grow, managing infrastructure, testing models, and handling integrations create operational overhead that slows teams down. This has created a need for GPU acceleration and an integrated workflow that together provide speed, scalability, and cost efficiency.

“Inference at scale is incredibly important for vector search, semantic search and GenAI workflows,” said Steve Kearns, General Manager, Search at Elastic. “The Elastic Inference Service meets that challenge by providing our customers with an API-based inference service using NVIDIA GPUs with our best-in-class Elasticsearch vector database for low-latency, high-throughput inference.”

Elastic Learned Sparse EncodeR (ELSER), Elastic’s built-in sparse vector model for state-of-the-art search relevance, is the first text-embedding model available on EIS in technical preview. Support for additional models, including multilingual embedding models, rerankers, and models from the recently announced Jina acquisition, will be available soon.
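
For developers who want a feel for the workflow, here is a minimal sketch of creating and calling an EIS-backed ELSER endpoint through the Elasticsearch Inference API from the Python client. The endpoint ID, service name, and model ID below are illustrative assumptions, not confirmed values from this announcement:

    from elasticsearch import Elasticsearch

    # Placeholder connection details; substitute your deployment URL and API key.
    es = Elasticsearch("https://my-deployment.es.example.cloud", api_key="<api-key>")

    # Create a sparse-embedding inference endpoint served by EIS.
    # The service name "elastic" and the model_id are assumptions for
    # illustration; consult the EIS documentation for exact identifiers.
    es.perform_request(
        "PUT",
        "/_inference/sparse_embedding/my-elser-on-eis",
        headers={"content-type": "application/json", "accept": "application/json"},
        body={
            "service": "elastic",
            "service_settings": {"model_id": "elser-v2"},
        },
    )

    # Run ad-hoc inference against the new endpoint.
    resp = es.perform_request(
        "POST",
        "/_inference/sparse_embedding/my-elser-on-eis",
        headers={"content-type": "application/json", "accept": "application/json"},
        body={"input": ["GPU-accelerated inference in Elastic Cloud"]},
    )
    print(resp)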

Some key benefits for developers who use EIS include:

  • Streamlined developer experience: No model downloads, manual configuration, or resource provisioning. EIS integrates directly with semantic_text and the Inference API for a seamless developer experience (see the sketch after this list).
  • Improved end-to-end semantic search experience: EIS supports sparse vectors, dense vectors, and semantic reranking.
  • Simplified generative AI workflows: AI features for ingest, investigation, detection, and analysis work out of the box, reducing the friction of contracts, API keys, and external services.
  • Backward compatibility: The Open Inference API gives users full flexibility to connect any third-party service, while existing Elasticsearch ML Nodes remain supported during adoption.
  • Enhanced performance: GPU-accelerated inference provides consistent latency and up to 10x higher throughput for ingest compared to CPU-based alternatives.
  • Easy-to-understand pricing: EIS provides consumption-based pricing similar to other inference services, charged per model per million tokens. Getting started and reaching support are also straightforward.
  • Peace of mind: Elastic also provides an intellectual property indemnity for all models provided on EIS.
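
The semantic_text integration noted above requires no model management. A minimal sketch, assuming the Python client, a hypothetical index name, and the illustrative endpoint ID from the earlier example:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://my-deployment.es.example.cloud", api_key="<api-key>")

    # A semantic_text field delegates chunking and embedding to an inference
    # endpoint, so no model download or ML-node provisioning is needed.
    # The inference_id is a placeholder for an EIS-backed endpoint.
    es.indices.create(
        index="articles",
        mappings={
            "properties": {
                "body": {"type": "semantic_text", "inference_id": "my-elser-on-eis"}
            }
        },
    )

    # Documents are embedded automatically at ingest time.
    es.index(index="articles", document={"body": "Elastic Cloud now runs inference on GPUs."})

    # A semantic query embeds the query string with the same endpoint at search time.
    hits = es.search(
        index="articles",
        query={"semantic": {"field": "body", "query": "GPU-accelerated semantic search"}},
    )
    print(hits["hits"]["hits"])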

For additional information on the Elastic Inference Service, read the Elastic blog.

Availability

The Elastic Inference Service is available on Serverless and Elastic Cloud Hosted deployments. Inference endpoints on EIS are accessible from all cloud service providers (CSPs) and regions.

Additional models will be available soon to support a wider variety of search and inference needs.

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, integrates its deep expertise in search technology with artificial intelligence to help everyone transform all of their data into answers, actions, and outcomes. Elastic's Search AI Platform — the foundation for its search, observability, and security solutions — is used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elasticsearch B.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners. The release and timing of any features or functionality described in this post, including the additional models and region availability, remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

Contacts

Media Contact
Elastic PR
PR-team@elastic.co
