As AI systems scale into critical operations, enterprises are standardizing on retrieval infrastructure built for control, performance, and flexibility.
Qdrant, the open-source vector search engine built in Rust for production workloads, announced $50 million in Series B funding led by AVP, with participation from Bosch Ventures, Unusual Ventures, Spark Capital, and 42CAP.
Vector search began as a solution to a narrow problem: retrieving nearest neighbors from dense embeddings over relatively static datasets. Modern AI systems look nothing like that. Retrieval now runs inside agent loops, executing thousands of queries per workflow across hybrid modalities, against data that changes continuously. RAG pipelines, semantic search, and agentic reasoning all depend on retrieval that holds up under sustained, production-scale pressure. Tools limited to single-vector dense similarity, and architectures that layer vector search onto legacy indexing models, are breaking under these conditions.
Qdrant was engineered at the lowest levels to serve as foundational infrastructure for the AI era. Built from the ground up in Rust, Qdrant rethinks every layer of retrieval (indexing, scoring, filtering, ranking) as composable primitives that engineers control directly. Composable vector search means teams choose and combine retrieval capabilities at query time: dense vectors, sparse vectors, metadata filters, multi-vector representations, and custom scoring functions, with explicit control over how each affects relevance, latency, and cost. Rather than accepting opaque defaults, engineers make deliberate decisions tuned to their specific workload.
The result is a search engine that adapts to the problem rather than forcing the problem to fit the tool. Whether a team optimizes for maximum accuracy, lowest latency, or cost efficiency at scale, Qdrant exposes the controls to get there, without re-architecting as requirements evolve.
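To make query-time composition concrete, here is a minimal, self-contained sketch in plain Python. It is illustrative only, not Qdrant's client API: the function names, the fixed dense/sparse weights, and the toy documents are all assumptions chosen to show the pattern of combining a dense similarity score, a sparse (keyword-style) score, and a metadata filter under explicit, tunable weights.

```python
# Toy sketch of composable retrieval: a dense cosine score and a sparse
# overlap score are blended with explicit weights, after a metadata
# filter prunes candidates. Illustrative only; not Qdrant's actual API.
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def sparse_dot(q, d):
    # q and d are {token_id: weight} maps, as a BM25-style encoder might emit
    return sum(w * d[t] for t, w in q.items() if t in d)

def search(docs, dense_q, sparse_q, must_match, w_dense=0.7, w_sparse=0.3, top_k=2):
    scored = []
    for doc in docs:
        # Metadata filter applied before scoring, as in filtered vector search
        if any(doc["payload"].get(k) != v for k, v in must_match.items()):
            continue
        # Explicit weights make the relevance/latency/cost trade-off visible
        score = (w_dense * cosine(dense_q, doc["dense"])
                 + w_sparse * sparse_dot(sparse_q, doc["sparse"]))
        scored.append((score, doc["id"]))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)[:top_k]]

docs = [
    {"id": "a", "dense": [1.0, 0.0], "sparse": {1: 0.9}, "payload": {"lang": "en"}},
    {"id": "b", "dense": [0.9, 0.1], "sparse": {2: 0.8}, "payload": {"lang": "en"}},
    {"id": "c", "dense": [1.0, 0.0], "sparse": {1: 1.0}, "payload": {"lang": "de"}},
]

print(search(docs, dense_q=[1.0, 0.0], sparse_q={1: 1.0}, must_match={"lang": "en"}))
# → ['a', 'b']
```

Shifting `w_dense` and `w_sparse` (or swapping in a different fusion rule) changes the ranking without touching the index, which is the kind of query-time control the composable model describes.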
“Many vector databases were built to only store dense embeddings and return nearest neighbors. That’s table stakes,” said André Zayarni, CEO and Co-Founder of Qdrant. “Production AI systems need a search engine where every aspect of retrieval — how you index, how you score, how you filter, how you balance latency against precision — is a composable decision. That’s what we’ve built, that’s what developers and the most sophisticated enterprises are looking for as they scale internal and external AI workloads, and this funding accelerates our ability to make it the standard.”
As AI systems move from experimentation into critical operations, where search runs matters as much as how it runs. Composable vector search is designed to operate wherever decisions are made: in the cloud, in hybrid and private (on-prem) environments, or at the edge. This deployment flexibility isn't an add-on; it follows naturally from an engine built as modular, composable infrastructure rather than a monolithic managed service.
“With every infrastructure shift, we’ve seen purpose-built systems emerge and rapidly scale in fast-growing new markets, and we’re seeing this pattern again with Qdrant. As an AI-native vector search engine designed for the latency, throughput, and reliability demands of production AI workloads, they’re at the forefront of building the retrieval layer of the future that all advanced AI applications will depend on,” said Warda Shaheen of AVP.
“In production AI applications, retrieving context-relevant information in real-time has become business-critical infrastructure,” said Ingo Ramesohl, Managing Director of Bosch Ventures. “Qdrant’s Rust-based architecture is exemplary of the deep tech innovations that will shape the next generation of powerful and trustworthy AI systems.”
Production-Proven at Scale; Deploy Anywhere, Without Compromise
With AI adoption accelerating and fundamentally redefining B2B and B2C workflows across on-premises, cloud, and edge environments, Qdrant has emerged as a flexible, low-latency-at-scale vector search provider. Addressing a broad range of AI application use cases, from simple implementations to highly sophisticated workloads, Qdrant stands out as a leading "picks-and-shovels" provider of the ongoing AI revolution, with strong traction among both developers and enterprises.
Enterprises including Tripadvisor, HubSpot, OpenTable, Bazaarvoice, and Bosch rely on Qdrant where vector search runs continuously under real-world load. The open-source project has surpassed 250 million downloads and 29,000 GitHub stars, with a global community driving improvements based on production requirements. Qdrant was recognized in The Forrester Wave™: Vector Databases, Q3 2024, GigaOm’s Radar for Vector Databases v3 in 2025, and Sifted’s 2025 B2B SaaS Rising 100.