Published on: Thursday, 16 January 2025
TRONDHEIM, Norway--(BUSINESS WIRE)--Vespa.ai, developer of the leading platform for building and deploying large-scale, real-time AI applications powered by big data, has released a new benchmark report showcasing its superior performance, scalability, and efficiency in comparison to Elasticsearch. The comprehensive, reproducible study tested both systems on an e-commerce search application using a dataset of 1 million products, evaluating write operations (document ingestion and updates) and multiple query strategies: lexical matching, vector similarity, and hybrid approaches.
A similar experience was reported by Vinted.com, a leading marketplace for second-hand items. Facing growing operational costs and hardware demands with Elasticsearch, Vinted's engineering team conducted its own evaluation. Seeking an all-in-one solution for both vector and traditional search, the team migrated to Vespa in 2023. For a deeper look at the evaluation and migration, read the Vinted Engineering blog post, "Search Scaling Chapter 8: Goodbye Elasticsearch. Hello Vespa Search Engine."
Key Findings of the Vespa Benchmark
"As companies demand ever-faster search results and the ability to handle continuous updates, it is vital to choose a solution that performs robustly at scale while remaining cost-effective," said Jon Bratseth, CEO and Founder of Vespa. "Our benchmark shows that Vespa excels not just in raw query speed but in how efficiently it uses resources, which translates directly into measurable infrastructure cost savings."
About the Benchmark
All query types in the study were configured to return equivalent results, ensuring a fair, apples-to-apples performance comparison. The dataset size, system versions (Vespa 8.427.7 and Elasticsearch 8.15.2), and measurement framework were meticulously documented to enable full reproducibility.
Download the full report here.
About Vespa
Vespa.ai is a powerful platform for developing real-time, search-based AI applications. Once built, these applications are deployed on Vespa's large-scale, distributed architecture, which efficiently manages data, inference, and logic for applications handling large datasets and high concurrent query rates. Vespa delivers all the building blocks of an AI application, including a vector database, hybrid search, retrieval-augmented generation (RAG), natural language processing (NLP), machine learning, and support for large language models (LLMs) and vision language models (VLMs). It is available both as a managed service and as open source.