Elastic Introduces Native Inference Service in Elastic Cloud

New service to provide GPU-accelerated embedding and retrieval models

SAN FRANCISCO--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the Search AI Company, today announced the Elastic Inference Service (EIS), a GPU-accelerated inference-as-a-service for Elasticsearch semantic search, vector search, and generative AI workflows.

Every generative AI and vector search application relies on inference, and Elastic now delivers these capabilities natively as part of Elastic Cloud. As data volumes grow, managing infrastructure, testing models, and handling integrations create operational overhead that slows teams down, driving the need for GPU acceleration and an integrated workflow that delivers speed, scalability, and cost efficiency.

“Inference at scale is incredibly important for vector search, semantic search and GenAI workflows,” said Steve Kearns, General Manager, Search at Elastic. “The Elastic Inference Service meets that challenge by providing our customers with an API-based inference service using NVIDIA GPUs with our best-in-class Elasticsearch vector database for low-latency, high-throughput inference.”

Elastic Learned Sparse EncodeR (ELSER) — Elastic’s built-in sparse vector model for state-of-the-art search relevance — is the first text-embedding model available on EIS in technical preview. Support for additional models, including multilingual embeddings, reranking models, and models from the recently announced Jina acquisition, will be available soon.

Some key benefits for developers who use EIS include:

  • Streamlined developer experience: No model downloads, manual configuration, or resource provisioning. EIS integrates directly with the semantic_text field type and the Inference API; see the sketch following this list.
  • Improved end-to-end semantic search experience: EIS supports sparse vectors, dense vectors, and semantic reranking.
  • Simplified generative AI workflows: AI features for ingest, investigation, detection, and analysis work out of the box, reducing the friction of contracts, API keys, and external services.
  • Backward compatibility: The Open Inference API gives users full flexibility to connect any third-party service, while existing Elasticsearch ML Nodes remain supported during adoption.
  • Enhanced performance: GPU-accelerated inference provides consistent latency and up to 10x higher throughput for ingest compared to CPU-based alternatives.
  • Easy-to-understand pricing: EIS offers consumption-based pricing similar to other inference services, charged per model per million tokens, and makes it easy to get started and to access support.
  • Peace of mind: Elastic also provides an intellectual property indemnity for all models provided on EIS.
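For illustration only, the following minimal sketch (Python, using the official Elasticsearch client) shows how a semantic_text field and the semantic query work together; the index name, sample document, credentials, and the inference endpoint ID are assumptions for the example, not details from this announcement.

    # Minimal sketch: semantic search with a semantic_text field.
    # Assumes an Elastic Cloud deployment with an ELSER-backed inference
    # endpoint available; names and IDs below are illustrative only.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(
        cloud_id="YOUR_CLOUD_ID",   # placeholder
        api_key="YOUR_API_KEY",     # placeholder
    )

    # Map a field as semantic_text; embeddings are generated automatically
    # at ingest and query time by the configured inference endpoint.
    es.indices.create(
        index="articles",
        mappings={
            "properties": {
                "content": {
                    "type": "semantic_text",
                    # Optional: point at a specific inference endpoint.
                    # ".elser-2-elasticsearch" is the built-in default;
                    # an EIS-backed endpoint ID may differ (assumption).
                    "inference_id": ".elser-2-elasticsearch",
                }
            }
        },
    )

    # Ingest a document; no manual embedding step is required.
    es.index(index="articles", document={"content": "Elastic announces EIS."})
    es.indices.refresh(index="articles")

    # Search with the semantic query; the query text is embedded with the
    # same inference endpoint used at ingest.
    resp = es.search(
        index="articles",
        query={"semantic": {"field": "content", "query": "What is EIS?"}},
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_source"]["content"])

Under this sketch, moving from a self-managed model to an EIS-backed endpoint would amount to changing the inference_id, which is the kind of integration the bullets above describe.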

For additional information on the Elastic Inference Service, read the Elastic blog.

Availability

The Elastic Inference Service is available on Elastic Cloud Serverless and Elastic Cloud Hosted deployments. Inference endpoints on EIS are accessible from all cloud service providers (CSPs) and regions.

Additional models will be available soon to support a wider variety of search and inference needs.

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, integrates its deep expertise in search technology with artificial intelligence to help everyone transform all of their data into answers, actions, and outcomes. Elastic's Search AI Platform — the foundation for its search, observability, and security solutions — is used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elasticsearch B.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners. The release and timing of any features or functionality described in this post, such as the additional models and expanded region availability, remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

Contacts

Media Contact
Elastic PR
PR-team@elastic.co
