
Elasticsearch Open Inference API Now Supports Mistral AI Embeddings

Mistral AI embeddings on Elasticsearch benefit from native chunking via a single API call

SAN FRANCISCO--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the Search AI Company, today announced that the Elasticsearch vector database now stores and automatically chunks embeddings from mistral-embed, with native integrations to the Open Inference API and the semantic_text field. This reduces time to market for RAG applications and simplifies development by eliminating the need to architect bespoke chunking strategies and by combining chunking with vector storage in a single workflow.
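For illustration, a minimal sketch of that workflow against the Elasticsearch REST API, shown here in Python with the requests library. The cluster address, credentials, Mistral API key, inference endpoint id (mistral-embeddings), and index name (support-articles) are placeholders, not part of the announcement; the _inference endpoint and semantic_text field type are the APIs named above.

```python
import requests

ES_URL = "https://localhost:9200"            # placeholder cluster address
ES_AUTH = ("elastic", "<password>")          # placeholder credentials
MISTRAL_API_KEY = "<your-mistral-api-key>"   # placeholder Mistral AI key

# One API call creates an inference endpoint backed by mistral-embed
# through the Open Inference API's "mistral" service.
requests.put(
    f"{ES_URL}/_inference/text_embedding/mistral-embeddings",
    auth=ES_AUTH,
    json={
        "service": "mistral",
        "service_settings": {
            "api_key": MISTRAL_API_KEY,
            "model": "mistral-embed",
        },
    },
)

# A semantic_text field that references the endpoint: Elasticsearch chunks
# incoming text and stores the resulting vectors automatically, so no
# bespoke chunking pipeline is required.
requests.put(
    f"{ES_URL}/support-articles",
    auth=ES_AUTH,
    json={
        "mappings": {
            "properties": {
                "content": {
                    "type": "semantic_text",
                    "inference_id": "mistral-embeddings",
                }
            }
        }
    },
)
```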

“We are invested in delivering open-first, enterprise-grade GenAI tools to help developers build next generation search applications,” said Shay Banon, founder and chief technology officer at Elastic. “Through our collaboration with the Mistral AI team, we’re simplifying the process of storing and chunking embeddings in Elasticsearch to a single API call.”

“Mistral AI has always been committed to open-weights and making AI accessible to all,” said Arthur Mensch, co-founder and CEO of Mistral AI. “Working with Elastic allows us to bring Mistral’s tools to more developers through the Elastic open inference API, and gives us the opportunity to work with a company that shares our value of accessibility. We’re excited to see what developers will create.”

Support for Mistral AI's embedding model is available today. Read the Elastic blog to get started.
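As a sketch of getting started once the endpoint and mapping above exist: documents indexed into the semantic_text field are chunked and embedded with mistral-embed at ingest time and, on Elasticsearch versions that support the semantic query, can be searched through the same inference endpoint. The index name and query text below are illustrative.

```python
import requests

ES_URL = "https://localhost:9200"    # placeholder cluster address
ES_AUTH = ("elastic", "<password>")  # placeholder credentials

# Index a document; the semantic_text field is chunked and embedded
# with mistral-embed automatically at ingest time.
requests.post(
    f"{ES_URL}/support-articles/_doc",
    auth=ES_AUTH,
    json={"content": "Full article text that Elasticsearch will chunk and embed..."},
)

# Search the field; the query text is embedded with the same inference
# endpoint and matched against the stored chunk vectors.
resp = requests.post(
    f"{ES_URL}/support-articles/_search",
    auth=ES_AUTH,
    json={"query": {"semantic": {"field": "content", "query": "how is chunking handled?"}}},
)
print(resp.json())
```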

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, enables everyone to find the answers they need in real-time using all their data, at scale. Elastic’s solutions for search, observability and security are built on the Elastic Search AI Platform, the development platform used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.

Contacts

Elastic PR
PR-team@elastic.co
