
Elastic Adds High-Precision Multilingual Reranking to Elastic Inference Service with Jina Models

Two new Jina reranker models deliver low-latency, production-ready relevance for hybrid search and RAG workloads

SAN FRANCISCO--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the Search AI Company, today made two Jina reranker models available on Elastic Inference Service (EIS), a GPU-accelerated inference-as-a-service offering that makes it easy to run fast, high-quality inference without complex setup or hosting. These rerankers bring low-latency, high-precision multilingual reranking to the Elastic ecosystem.

As generative AI prototypes move into production-ready search and RAG systems, users run into relevance and inference latency limits, particularly for multilingual use cases. Rerankers improve search quality by reordering results based on semantic relevance, helping surface the most accurate matches for a query. They improve relevance across aggregated, multi-query results without reindexing or pipeline changes, which makes them especially valuable for hybrid search, RAG, and context-engineering workflows where better context boosts downstream accuracy.

By delivering GPU-accelerated Jina rerankers as a managed service, Elastic enables teams to improve search and RAG accuracy without managing model infrastructure.

“Search relevance is foundational to AI-driven experiences,” said Steve Kearns, general manager, Search at Elastic. “By bringing these Jina reranker models to Elastic Inference Service, we are enabling teams to deliver fast and accurate multilingual search, RAG, and agentic AI experiences, available out of the box with minimal setup.”

The two new Jina reranker models are optimized for different production needs:

Jina Reranker v2 (jina-reranker-v2-base-multilingual)
Built for scalable, agentic workflows.

  • Low-latency inference at scale: Strong multilingual performance at low latency, capable of outperforming larger rerankers.
  • Support for agentic use cases: Ability to select relevant SQL tables and external functions that best match user queries, enabling more advanced agent-driven workflows.
  • Unbounded candidate support: Scores documents independently to handle arbitrarily large candidate sets. These scores remain consistent across batches, so developers can rerank results incrementally without relying on strict top-k limits.
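Because v2 scores each document independently and those scores are comparable across batches, candidates can be reranked incrementally and merged client-side. The sketch below illustrates that pattern; the `score_batch` stub and its scoring logic are illustrative placeholders, not the actual EIS API.

```python
# Sketch: incremental reranking with a pointwise reranker such as
# jina-reranker-v2-base-multilingual, whose per-document scores are
# comparable across batches. `score_batch` is a stand-in stub, not
# the real Elastic Inference Service call.

def score_batch(query, docs):
    """Stand-in for one rerank inference call; returns one score per doc."""
    # A real implementation would send {"query": ..., "input": docs} to a
    # rerank inference endpoint and read back the returned scores.
    return [float(len(set(query.split()) & set(d.split()))) for d in docs]

def rerank_incrementally(query, candidate_batches, top_k=5):
    """Score each batch independently, then merge by score.

    Because scores from separate batches are directly comparable,
    arbitrarily large candidate sets can be reranked without a single
    bounded call or a strict top-k limit on the model side.
    """
    scored = []
    for batch in candidate_batches:
        scored.extend(zip(score_batch(query, batch), batch))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```

The key design point is that merging happens purely by score, so new candidate batches can arrive at any time and be folded into the running ranking.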

Jina Reranker v3 (jina-reranker-v3)
Optimized for high-precision shortlist reranking.

  • Lightweight, production-friendly architecture: Optimized for low-latency inference and efficient deployment in production settings.
  • Strong multilingual performance: Benchmarks show that v3 delivers state-of-the-art multilingual performance, outperforming much larger alternatives, and maintains stable top-k rankings under permutation.
  • Cost-efficient, cross-document reranking: v3 reranks up to 64 documents together in a single inference call, reasoning across the full candidate set to improve ordering when results are similar or overlapping. By batching candidates instead of scoring them individually, v3 significantly reduces inference usage, making it a strong fit for RAG and agentic workflows with defined top-k results.
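As a rough illustration of the batching described above, the sketch below splits a candidate list into request bodies of at most 64 documents each. The 64-document limit comes from the release text; the request shape (`query`, `input`, `top_n` fields) is an assumption for illustration, not the actual EIS API.

```python
# Sketch: preparing candidates for a listwise reranker such as
# jina-reranker-v3, which reranks up to 64 documents in one call.
# The request body shape here is an illustrative assumption.

MAX_DOCS_PER_CALL = 64

def build_rerank_requests(query, candidates, top_n=10):
    """Split a candidate list into request bodies of at most 64 docs each.

    For shortlists that already fit in one call (the common RAG case),
    this yields a single batched request instead of one call per
    document, which is where the inference savings come from.
    """
    requests = []
    for start in range(0, len(candidates), MAX_DOCS_PER_CALL):
        batch = candidates[start:start + MAX_DOCS_PER_CALL]
        requests.append({"query": query, "input": batch, "top_n": top_n})
    return requests
```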

These models extend Elastic’s growing catalogue of ready-to-use models on EIS, which includes the open source multilingual and multimodal embeddings, rerankers, and small language models built by Jina, which Elastic acquired last year. Additional models are expected to be added to the catalogue over time, all running on managed GPUs.

Availability

All Elastic Cloud trials have access to the Elastic Inference Service. Try it now on Elastic Cloud Serverless and Elastic Cloud Hosted.

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, integrates its deep expertise in search technology with artificial intelligence to help everyone transform all of their data into answers, actions, and outcomes. Elastic's Search AI Platform — the foundation for its search, observability, and security solutions — is used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elasticsearch B.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.

Contacts

Media Contact
Elastic PR
PR-team@elastic.co

Elastic N.V.

NYSE:ESTC
