
Vespa.ai Announces Significant Performance Gains Over Elasticsearch in New Benchmark

Vespa Delivers 5X Infrastructure Cost Savings with Unmatched Query Efficiency Across Hybrid, Vector, and Lexical Search Types

TRONDHEIM, Norway--(BUSINESS WIRE)--Vespa.ai, developer of the leading platform for building and deploying large-scale, real-time AI applications powered by big data, has released a new benchmark report showcasing its superior performance, scalability, and efficiency in comparison to Elasticsearch. The comprehensive, reproducible study tested both systems on an e-commerce search application using a dataset of 1 million products, evaluating write operations (document ingestion and updates) and multiple query strategies: lexical matching, vector similarity, and hybrid approaches.

These findings echo the experience of Vinted.com, a leading marketplace for second-hand items. Facing growing operational costs and hardware demands with Elasticsearch, Vinted Engineering conducted a separate evaluation. Seeking an all-in-one solution for both vector and traditional search, Vinted’s engineering team migrated to Vespa in 2023. For a deeper look at their evaluation and migration, read the Vinted Engineering blog post, “Search Scaling Chapter 8: Goodbye Elasticsearch. Hello Vespa Search Engine.”

Key Findings of the Vespa Benchmark

  • Performance Across Query Types
    • Hybrid Queries: Vespa achieved 8.5X higher throughput per CPU core than Elasticsearch.
    • Vector Searches: Vespa demonstrated up to 12.9X higher throughput per CPU core.
    • Lexical Searches: Vespa delivered 6.5X better throughput per CPU core.
  • Updates
    • Steady-State Efficiency: Vespa is 4X more efficient for in-place updates, handling queries and updates more effectively after the initial bootstrapping phase.
    • Bootstrap: While Elasticsearch showed high efficiency in the initial ingestion phase (from 0 to 1M documents), Vespa stood out in long-term, steady-state operations.
  • Infrastructure Cost Savings
    • Due to higher query throughput and more efficient CPU usage, Vespa can reduce infrastructure costs by up to 5X, as detailed in section 10 of the report.

“As companies demand ever-faster search results and the ability to handle continuous updates, it is vital to choose a solution that performs robustly at scale and remains cost-effective,” said Jon Bratseth, CEO and Founder of Vespa.ai. “Our benchmark shows that Vespa excels not just in pure query speed but in how efficiently it utilizes resources, which translates directly into measurable infrastructure cost savings.”

About the Benchmark

All query types in the study were configured to return equivalent results, ensuring a fair, apples-to-apples performance comparison. The dataset size, system versions (Vespa 8.427.7 and Elasticsearch 8.15.2), and measurement framework were meticulously documented to enable full reproducibility.

Download the full report here.

About Vespa

Vespa.ai is a powerful platform for developing real-time, search-based AI applications. Once built, these applications are deployed on Vespa’s large-scale, distributed architecture, which efficiently manages data, inference, and logic for applications handling large datasets and high concurrent query rates. Vespa delivers all the building blocks of an AI application, including a vector database, hybrid search, retrieval-augmented generation (RAG), natural language processing (NLP), machine learning, and support for large language models (LLMs) and vision language models (VLMs). It is available as a managed service and as open source.

Contacts

Media Contact
Tim Young
timyoung@vespa.ai
