Qdrant lands $50M to push vector search into production AI
Qdrant, an open-source vector search engine built for production environments, has raised $50 million in a Series B funding round led by AVP. The financing also included participation from Bosch Ventures, Unusual Ventures, Spark Capital, and 42CAP.
The company said the new capital will be used to expand its engineering and product teams, accelerate development of its search infrastructure, and strengthen enterprise offerings. Funds will also support global scaling efforts and improvements in performance, deployment flexibility, and reliability for high-volume production workloads.
Why search infrastructure is being rebuilt
Vector search began as a way to find nearest-neighbor matches over dense embeddings, but production AI systems increasingly demand more. Retrieval now often runs inside automated workflows that can trigger thousands of queries per task, across changing datasets and multiple data types. Use cases such as retrieval-augmented generation (RAG), semantic search, and reasoning-driven workflows require systems that maintain speed and accuracy under sustained load.
Composable retrieval as a core design choice
Founded in 2021 by André Zayarni and Andrey Vasnetsov, Qdrant is written in Rust and designed with modular building blocks for indexing, scoring, filtering, and ranking. The company says this lets engineers combine dense and sparse vectors, metadata filters, multi-vector representations, and custom scoring rules within a single query—allowing teams to tune relevance, latency, and compute cost to their workloads.
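To make the "composable" idea concrete, here is a minimal, library-free sketch of what combining dense similarity, sparse (keyword-style) scoring, and a metadata filter in one query can look like. All names, data, and the blending weight `alpha` are illustrative assumptions for this sketch; this is not Qdrant's actual API or scoring logic.

```python
import math

# Toy corpus: each point carries a dense vector, a sparse term-weight map,
# and metadata. (Illustrative data, not Qdrant's data model.)
points = [
    {"id": 1, "dense": [0.9, 0.1], "sparse": {"rust": 1.0, "search": 0.5}, "meta": {"lang": "en"}},
    {"id": 2, "dense": [0.2, 0.8], "sparse": {"vector": 1.0}, "meta": {"lang": "de"}},
    {"id": 3, "dense": [0.7, 0.3], "sparse": {"search": 1.0}, "meta": {"lang": "en"}},
]

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sparse_score(query_terms, doc_terms):
    # Dot product over shared terms, as in sparse keyword-style retrieval.
    return sum(w * doc_terms.get(t, 0.0) for t, w in query_terms.items())

def query(points, dense_q, sparse_q, meta_filter, alpha=0.7, limit=2):
    # Apply the metadata filter first, then blend dense and sparse scores
    # with a tunable weight -- the relevance/latency/cost knob the article
    # describes, reduced to a single parameter here.
    candidates = [
        p for p in points
        if all(p["meta"].get(k) == v for k, v in meta_filter.items())
    ]
    scored = [
        (alpha * cosine(dense_q, p["dense"])
         + (1 - alpha) * sparse_score(sparse_q, p["sparse"]), p["id"])
        for p in candidates
    ]
    scored.sort(reverse=True)
    return [pid for _, pid in scored[:limit]]

# English-only results, ranked by the blended score.
print(query(points, dense_q=[1.0, 0.0], sparse_q={"search": 1.0},
            meta_filter={"lang": "en"}))  # -> [3, 1]
```

In a production engine these stages run against persistent indexes (e.g. HNSW for dense vectors, inverted indexes for sparse terms and payload filters) rather than a linear scan, but the compositional shape (filter, score along multiple signals, rank) is the same.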
Adoption and enterprise deployment
Qdrant is built to run across cloud, hybrid, on-premises, and edge deployments, aiming to meet operational and regulatory requirements. The company said enterprises including Tripadvisor, HubSpot, OpenTable, Bazaarvoice, and Bosch use the platform in production. The open-source project has surpassed 250 million downloads and 29,000 GitHub stars.
André Zayarni said the goal is to make retrieval “a composable decision” across indexing, scoring, filtering, and latency-versus-precision tradeoffs, while AVP described the company as part of an emerging “retrieval layer” for advanced AI applications.