qdrant-landing/content/blog/hybrid-cloud-ovhcloud.md
--- draft: false title: "Qdrant and OVHcloud Bring Vector Search to All Enterprises" short_description: "Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy." description: "Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy." preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud.png date: 2024-04-10T00:05:00Z author: Qdrant featured: false weight: 1004 tags: - Qdrant - Vector Database --- With the official release of [Qdrant Hybrid Cloud](/hybrid-cloud/), businesses running their data infrastructure on [OVHcloud](https://ovhcloud.com/) are now able to deploy a fully managed vector database in their existing OVHcloud environment. We are excited about this partnership, which has been established through the [OVHcloud Open Trusted Cloud](https://opentrustedcloud.ovhcloud.com/en/) program, as it is based on our shared understanding of the importance of trust, control, and data privacy in the context of the emerging landscape of enterprise-grade AI applications. As part of this collaboration, we are also providing a detailed use case tutorial on building a recommendation system that demonstrates the benefits of running Qdrant Hybrid Cloud on OVHcloud. Deploying Qdrant Hybrid Cloud on OVHcloud's infrastructure represents a significant leap for European businesses invested in AI-driven projects, as this collaboration underscores the commitment to meeting the rigorous requirements for data privacy and control of European startups and enterprises building AI solutions. As businesses are progressing on their AI journey, they require dedicated solutions that allow them to make their data accessible for machine learning and AI projects, without having it leave the company's security perimeter. Prioritizing data sovereignty, a crucial aspect in today's digital landscape, will help startups and enterprises accelerate their AI agendas and build even more differentiating AI-enabled applications. The ability of running Qdrant Hybrid Cloud on OVHcloud not only underscores the commitment to innovative, secure AI solutions but also ensures that companies can navigate the complexities of AI and machine learning workloads with the flexibility and security required. > *“The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely.“* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud #### Qdrant & OVHcloud: High Performance Vector Search With Full Data Control Through the seamless integration between Qdrant Hybrid Cloud and OVHcloud, developers and businesses are able to deploy the fully managed vector database within their existing OVHcloud setups in minutes, enabling faster, more accurate AI-driven insights. - **Simple setup:** With the seamless “one-click” installation, developers are able to deploy Qdrant’s fully managed vector database to their existing OVHcloud environment. - **Trust and data sovereignty**: Deploying Qdrant Hybrid Cloud on OVHcloud enables developers with vector search that prioritizes data sovereignty, a crucial aspect in today's AI landscape where data privacy and control are essential. 
True to its “Sovereign by design” DNA, OVHcloud guarantees that all the data stored are immune to extraterritorial laws and comply with the highest security standards. - **Open standards and open ecosystem**: OVHcloud’s commitment to open standards and an open ecosystem not only facilitates the easy integration of Qdrant Hybrid Cloud with OVHcloud’s AI services and GPU-powered instances but also ensures compatibility with a wide range of external services and applications, enabling seamless data workflows across the modern AI stack. - **Cost efficient sector search:** By leveraging Qdrant's quantization for efficient data handling and pairing it with OVHcloud's eco-friendly, water-cooled infrastructure, known for its superior price/performance ratio, this collaboration provides a strong foundation for cost efficient vector search. #### Build a RAG-Based System with Qdrant Hybrid Cloud and OVHcloud ![hybrid-cloud-ovhcloud-tutorial](/blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png) To show how Qdrant Hybrid Cloud deployed on OVHcloud allows developers to leverage the benefits of an AI use case that is completely run within the existing infrastructure, we put together a comprehensive use case tutorial. This tutorial guides you through creating a recommendation system using collaborative filtering and sparse vectors with Qdrant Hybrid Cloud on OVHcloud. It employs the Movielens dataset for practical application, providing insights into building efficient, scalable recommendation engines suitable for developers and data scientists looking to leverage advanced vector search technologies within a secure, GDPR-compliant European cloud infrastructure. [Try the Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/) #### Get Started Today and Leverage the Benefits of Qdrant Hybrid Cloud Setting up Qdrant Hybrid Cloud on OVHcloud is straightforward and quick, thanks to the intuitive integration with Kubernetes. Here's how: - **Hybrid Cloud Activation**: Log into your Qdrant account and enable 'Hybrid Cloud'. - **Cluster Integration**: Add your OVHcloud Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud settings. - **Effortless Deployment**: Use the Qdrant Management Console for easy deployment and management of Qdrant clusters on OVHcloud. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-red-hat-openshift.md
--- draft: false title: "Red Hat OpenShift and Qdrant Hybrid Cloud Offer Seamless and Scalable AI" short_description: "Qdrant brings managed vector databases to Red Hat OpenShift for large-scale GenAI." description: "Qdrant brings managed vector databases to Red Hat OpenShift for large-scale GenAI." preview_image: /blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift.png date: 2024-04-11T00:04:00Z author: Qdrant featured: false weight: 1003 tags: - Qdrant - Vector Database --- We’re excited about our collaboration with Red Hat to bring the Qdrant vector database to [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) customers! With the release of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can now deploy and run the Qdrant vector database directly in their Red Hat OpenShift environment. This collaboration enables developers to scale more seamlessly, operate more consistently across hybrid cloud environments, and maintain complete control over their vector data. This is a big step forward in simplifying AI infrastructure and empowering data-driven projects, like retrieval augmented generation (RAG) use cases, advanced search scenarios, or recommendations systems. In the rapidly evolving field of Artificial Intelligence and Machine Learning, the demand for being able to manage the modern AI stack within the existing infrastructure becomes increasingly relevant for businesses. As enterprises are launching new AI applications and use cases into production, they require the ability to maintain complete control over their data, since these new apps often work with sensitive internal and customer-centric data that needs to remain within the owned premises. This is why enterprises are increasingly looking for maximum deployment flexibility for their AI workloads. >*“Red Hat is committed to driving transparency, flexibility and choice for organizations to more easily unlock the power of AI. By working with partners like Qdrant to enable streamlined integration experiences on Red Hat OpenShift for AI use cases, organizations can more effectively harness critical data and deliver real business outcomes,”* said Steven Huels, Vice President and General Manager, AI Business Unit, Red Hat. #### The Synergy of Qdrant Hybrid Cloud and Red Hat OpenShift Qdrant Hybrid Cloud is the first vector database that can be deployed anywhere, with complete database isolation, while still providing a fully managed cluster management. Running Qdrant Hybrid Cloud on Red Hat OpenShift allows enterprises to deploy and run a fully managed vector database in their own environment, ultimately allowing businesses to run managed vector search on their existing cloud and infrastructure environments, with full data sovereignty. Red Hat OpenShift, the industry’s leading hybrid cloud application platform powered by Kubernetes, helps streamline the deployment of Qdrant Hybrid Cloud within an enterprise's secure premises. Red Hat OpenShift provides features like auto-scaling, load balancing, and advanced security controls that can help you manage and maintain your vector database deployments more effectively. In addition, Red Hat OpenShift supports deployment across multiple environments, including on-premises, public, private and hybrid cloud landscapes. This flexibility, coupled with Qdrant Hybrid Cloud, allows organizations to choose the deployment model that best suits their needs. #### Why Run Qdrant Hybrid Cloud on Red Hat OpenShift? 
- **Scalability**: Red Hat OpenShift's container orchestration effortlessly scales Qdrant Hybrid Cloud components, accommodating fluctuating workload demands with ease. - **Portability**: The consistency across hybrid cloud environments provided by Red Hat OpenShift allows for smoother operation of Qdrant Hybrid Cloud across various infrastructures. - **Automation**: Deployment, scaling, and management tasks are automated, reducing operational overhead and simplifying the management of Qdrant Hybrid Cloud. - **Security**: Red Hat OpenShift provides built-in security features, including container isolation, network policies, and role-based access control (RBAC), enhancing the security posture of Qdrant Hybrid Cloud deployments. - **Flexibility:** Red Hat OpenShift supports a wide range of programming languages, frameworks, and tools, providing flexibility in developing and deploying Qdrant Hybrid Cloud applications. - **Integration:** Red Hat OpenShift can be integrated with various Red Hat and third-party tools, facilitating seamless integration of Qdrant Hybrid Cloud with other enterprise systems and services. #### Get Started with Qdrant Hybrid Cloud on Red Hat OpenShift We're thrilled about our collaboration with Red Hat to help simplify AI infrastructure for developers and enterprises alike. By deploying Qdrant Hybrid Cloud on Red Hat OpenShift, developers can gain the ability to more easily scale and maintain greater operational consistency across hybrid cloud environments. To get started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud on Red Hat OpenShift. Additionally, you can find more details on the seamless deployment process in our documentation: ![hybrid-cloud-red-hat-openshift-tutorial](/blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift-tutorial.png) #### Tutorial: Private Chatbot for Interactive Learning In this tutorial, you will build a chatbot without public internet access. The goal is to keep sensitive data secure and isolated. Your RAG system will be built with Qdrant Hybrid Cloud on Red Hat OpenShift, leveraging Haystack for enhanced generative AI capabilities. This tutorial especially explores how this setup ensures that not a single data point leaves the environment. [Try the Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Documentation: Deploy Qdrant in a Few Clicks > Our simple Kubernetes-native design allows you to deploy Qdrant Hybrid Cloud on your Red Hat OpenShift instance in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) This collaboration marks an important milestone in the quest for simplified AI infrastructure, offering a robust, scalable, and security-optimized solution for managing vector databases in a hybrid cloud environment. The combination of Qdrant's performance and Red Hat OpenShift's operational excellence opens new avenues for enterprises looking to leverage the power of AI and ML. #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-scaleway.md
--- draft: false title: "Qdrant Hybrid Cloud and Scaleway Empower GenAI" short_description: "Supporting innovation in AI with the launch of a revolutionary managed database for startups and enterprises." description: "Supporting innovation in AI with the launch of a revolutionary managed database for startups and enterprises." preview_image: /blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway.png date: 2024-04-10T00:06:00Z author: Qdrant featured: false weight: 1002 tags: - Qdrant - Vector Database --- In a move to empower the next wave of AI innovation, Qdrant and [Scaleway](https://www.scaleway.com/en/) collaborate to introduce [Qdrant Hybrid Cloud](/hybrid-cloud/), a fully managed vector database that can be deployed on existing Scaleway environments. This collaboration is set to democratize access to advanced AI capabilities, enabling developers to easily deploy and scale vector search technologies within Scaleway's robust and developer-friendly cloud infrastructure. By focusing on the unique needs of startups and the developer community, Qdrant and Scaleway are providing access to intuitive and easy to use tools, making cutting-edge AI more accessible than ever before. Building on this vision, the integration between Scaleway and Qdrant Hybrid Cloud leverages the strengths of both Qdrant, with its leading open-source vector database, and Scaleway, known for its innovative and scalable cloud solutions. This integration means startups and developers can now harness the power of vector search - essential for AI applications like recommendation systems, image recognition, and natural language processing - within their existing environment without the complexity of maintaining such advanced setups. *"With our partnership with Qdrant, Scaleway reinforces its status as Europe's leading cloud provider for AI innovation. The integration of Qdrant's fast and accurate vector database enriches our expanding suite of AI solutions. This means you can build smarter, faster AI projects with us, worry-free about performance and security." Frédéric BARDOLLE, Lead PM AI @ Scaleway* #### Developing a Retrieval Augmented Generation (RAG) Application with Qdrant Hybrid Cloud, Scaleway, and LangChain Retrieval Augmented Generation (RAG) enhances Large Language Models (LLMs) by integrating vector search to provide precise, context-rich responses. This combination allows LLMs to access and incorporate specific data in real-time, vastly improving the quality of AI-generated content. RAG applications often rely on sensitive or proprietary internal data, emphasizing the importance of data sovereignty. Running the entire stack within your own environment becomes crucial for maintaining control over this data. Qdrant Hybrid Cloud deployed on Scaleway addresses this need perfectly, offering a secure, scalable platform that respects data sovereignty requirements while leveraging the full potential of RAG for sophisticated AI solutions. ![hybrid-cloud-scaleway-tutorial](/blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway-tutorial.png) We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on Scaleway for a RAG application, providing insights into efficiently managing data within a secure, sovereign framework. It highlights practical steps to integrate vector search with LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. 
[Try the Tutorial](/documentation/tutorials/rag-chatbot-scaleway/)

#### The Benefits of Running Qdrant Hybrid Cloud on Scaleway

Choosing Qdrant Hybrid Cloud and Scaleway for AI applications offers several key advantages:

- **AI-Focused Resources:** Scaleway aims to be the cloud provider of choice for AI companies, offering the resources and infrastructure to power complex AI and machine learning workloads and helping to advance the development and deployment of AI technologies. Paired with Qdrant Hybrid Cloud, this provides a strong foundational platform for advanced AI applications.
- **Scalable Vector Search:** Qdrant Hybrid Cloud provides a fully managed vector database that lets you effortlessly scale the setup through vertical or horizontal scaling. Deployed on Scaleway, this robust setup is designed to meet the needs of businesses at every stage of growth, from startups to large enterprises, ensuring a full spectrum of solutions for various projects and workloads.
- **European Roots and Focus**: With a strong presence in Europe and a commitment to supporting the European tech ecosystem, Scaleway is ideally positioned to partner with European-based companies like Qdrant, providing local expertise and infrastructure that aligns with European regulatory standards.
- **Sustainability Commitment**: Scaleway leads with an eco-conscious approach, featuring adiabatic data centers that significantly reduce cooling costs and environmental impact, and it prioritizes extending hardware lifecycles beyond industry norms to lessen its ecological footprint.

#### Get Started in a Few Seconds

Setting up Qdrant Hybrid Cloud on Scaleway is streamlined and quick, thanks to its Kubernetes-native architecture. Follow these three simple steps to launch:

1. **Activate Hybrid Cloud**: First, log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and select 'Hybrid Cloud' to activate.
2. **Integrate Your Clusters**: Navigate to the Hybrid Cloud settings and add your Scaleway Kubernetes clusters as a Hybrid Cloud Environment.
3. **Simplified Management**: Use the Qdrant Management Console for easy creation and oversight of your Qdrant clusters on Scaleway.

For more comprehensive guidance, our documentation provides step-by-step instructions for deploying Qdrant on Scaleway.

[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)

#### Ready to Get Started?

Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-stackit.md
--- draft: false title: "STACKIT and Qdrant Hybrid Cloud for Best Data Privacy" short_description: "Empowering German AI development with a data privacy-first platform." description: "Empowering German AI development with a data privacy-first platform." preview_image: /blog/hybrid-cloud-stackit/hybrid-cloud-stackit.png date: 2024-04-10T00:07:00Z author: Qdrant featured: false weight: 1001 tags: - Qdrant - Vector Database --- Qdrant and [STACKIT](https://www.stackit.de/en/) are thrilled to announce that developers are now able to deploy a fully managed vector database to their STACKIT environment with the introduction of [Qdrant Hybrid Cloud](/hybrid-cloud/). This is a great step forward for the German AI ecosystem as it enables developers and businesses to build cutting edge AI applications that run on German data centers with full control over their data. Vector databases are an essential component of the modern AI stack. They enable rapid and accurate retrieval of high-dimensional data, crucial for powering search, recommendation systems, and augmenting machine learning models. In the rising field of GenAI, vector databases power retrieval-augmented-generation (RAG) scenarios as they are able to enhance the output of large language models (LLMs) by injecting relevant contextual information. However, this contextual information is often rooted in confidential internal or customer-related information, which is why enterprises are in pursuit of solutions that allow them to make this data available for their AI applications without compromising data privacy, losing data control, or letting data exit the company's secure environment. Qdrant Hybrid Cloud is the first managed vector database that can be deployed in an existing STACKIT environment. The Kubernetes-native setup allows businesses to operate a fully managed vector database, while maintaining control over their data through complete data isolation. Qdrant Hybrid Cloud's managed service seamlessly integrates into STACKIT's cloud environment, allowing businesses to deploy fully managed vector search workloads, secure in the knowledge that their operations are backed by the stringent data protection standards of Germany's data centers and in full compliance with GDPR. This setup not only ensures that data remains under the businesses control but also paves the way for secure, AI-driven application development. #### Key Features and Benefits of Qdrant on STACKIT: - **Seamless Integration and Deployment**: With Qdrant’s Kubernetes-native design, businesses can effortlessly connect their STACKIT cloud as a Hybrid Cloud Environment, enabling a one-step, scalable Qdrant deployment. - **Enhanced Data Privacy**: Leveraging STACKIT's German data centers ensures that all data processing complies with GDPR and other relevant European data protection standards, providing businesses with unparalleled control over their data. - **Scalable and Managed AI Solutions**: Deploying Qdrant on STACKIT provides a fully managed vector search engine with the ability to scale vertically and horizontally, with robust support for zero-downtime upgrades and disaster recovery, all within STACKIT's secure infrastructure. 
#### Use Case: AI-enabled Contract Management built with Qdrant Hybrid Cloud, STACKIT, and Aleph Alpha

![hybrid-cloud-stackit-tutorial](/blog/hybrid-cloud-stackit/hybrid-cloud-stackit-tutorial.png)

To demonstrate the power of Qdrant Hybrid Cloud on STACKIT, we've developed a comprehensive tutorial showcasing how to build secure, AI-driven applications with a focus on data sovereignty. It shows how to build a contract management platform that enables users to upload documents (PDF or DOCX), which are then segmented for searchable access. Designed with multitenancy, it ensures users can only access their team's or organization's documents, and it features custom sharding for location-specific document storage. Beyond search, the application can also rephrase document excerpts for clarity to readers without context.

[Try the Tutorial](/documentation/tutorials/rag-contract-management-stackit-aleph-alpha/)

#### Start Using Qdrant with STACKIT

Deploying Qdrant Hybrid Cloud on STACKIT is straightforward, thanks to the seamless integration facilitated by Kubernetes. Here are the steps to kickstart your journey:

1. **Qdrant Hybrid Cloud Activation**: Start by activating 'Hybrid Cloud' in your [Qdrant Cloud account](https://cloud.qdrant.io/login).
2. **Cluster Integration**: Add your STACKIT Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud section.
3. **Effortless Deployment**: Use the Qdrant Management Console to effortlessly create and manage your Qdrant clusters on STACKIT.

We invite you to explore the detailed documentation on deploying Qdrant on STACKIT, designed to guide you through each step of the process seamlessly.

[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)

#### Ready to Get Started?

Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
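The multitenancy pattern described in the contract-management use case above maps naturally onto Qdrant's payload filtering: each document carries a tenant identifier, and every search is filtered by it. Below is a minimal sketch with the official Python client; the collection name, tenant field, and vector size are illustrative assumptions, not code from the tutorial.

```python
# Minimal payload-based multitenancy sketch: every point carries an "org_id"
# payload field, and searches are always filtered by it so a tenant only sees
# its own documents. Names and sizes are illustrative assumptions.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="https://qdrant.internal.example:6333", api_key="<api-key>")

# A keyword index on the tenant field keeps filtered searches fast
# (assumes a "contracts" collection with 384-dimensional vectors already exists).
client.create_payload_index(
    collection_name="contracts",
    field_name="org_id",
    field_schema=models.PayloadSchemaType.KEYWORD,
)

# Restrict the search to one tenant's documents.
tenant_filter = models.Filter(
    must=[models.FieldCondition(key="org_id", match=models.MatchValue(value="org-acme"))]
)

hits = client.search(
    collection_name="contracts",
    query_vector=[0.0] * 384,  # replace with a real query embedding
    query_filter=tenant_filter,
    limit=5,
)
```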
qdrant-landing/content/blog/hybrid-cloud-vultr.md
--- draft: false title: "Vultr and Qdrant Hybrid Cloud Support Next-Gen AI Projects" short_description: "Providing a flexible platform for high-performance vector search in next-gen AI workloads." description: "Providing a flexible platform for high-performance vector search in next-gen AI workloads." preview_image: /blog/hybrid-cloud-vultr/hybrid-cloud-vultr.png date: 2024-04-10T00:08:00Z author: Qdrant featured: false weight: 1000 tags: - Qdrant - Vector Database --- We’re excited to share that Qdrant and [Vultr](https://www.vultr.com/) are partnering to provide seamless scalability and performance for vector search workloads. With Vultr's global footprint and customizable platform, deploying vector search workloads becomes incredibly flexible. Qdrant's new [Qdrant Hybrid Cloud](/hybrid-cloud/) offering and its Kubernetes-native design, coupled with Vultr's straightforward virtual machine provisioning, allows for simple setup when prototyping and building next-gen AI apps. #### Adapting to Diverse AI Development Needs with Customization and Deployment Flexibility In the fast-paced world of AI and ML, businesses are eagerly integrating AI and generative AI to enhance their products with new features like AI assistants, develop new innovative solutions, and streamline internal workflows with AI-driven processes. Given the diverse needs of these applications, it's clear that a one-size-fits-all approach doesn't apply to AI development. This variability in requirements underscores the need for adaptable and customizable development environments. Recognizing this, Qdrant and Vultr have teamed up to offer developers unprecedented flexibility and control. The collaboration enables the deployment of a fully managed vector database on Vultr’s adaptable platform, catering to the specific needs of diverse AI projects. This unique setup offers developers the ideal Vultr environment for their vector search workloads. It ensures seamless adaptability and data privacy with all data residing in their environment. For the first time, Qdrant Hybrid Cloud allows for fully managing a vector database on Vultr, promoting rapid development cycles without the hassle of modifying existing setups and ensuring that data remains secure within the organization. Moreover, this partnership empowers developers with centralized management over their vector database clusters via Qdrant’s control plane, enabling precise size adjustments based on workload demands. This joint setup marks a significant step in providing the AI and ML field with flexible, secure, and efficient application development tools. > *"Our collaboration with Qdrant empowers developers to unlock the potential of vector search applications, such as RAG, by deploying Qdrant Hybrid Cloud with its high-performance search capabilities directly on Vultr's global, automated cloud infrastructure. This partnership creates a highly scalable and customizable platform, uniquely designed for deploying and managing AI workloads with unparalleled efficiency."* Kevin Cochrane, Vultr CMO. #### The Benefits of Deploying Qdrant Hybrid Cloud on Vultr Together, Qdrant Hybrid Cloud and Vultr offer enhanced AI and ML development with streamlined benefits: - **Simple and Flexible Deployment:** Deploy Qdrant Hybrid Cloud on Vultr in a few minutes with a simple “one-click” installation by adding your Vutlr environment as a Hybrid Cloud Environment to Qdrant. 
- **Scalability and Customizability**: Qdrant’s efficient data handling and Vultr’s scalable infrastructure means projects can be adjusted dynamically to workload demands, optimizing costs without compromising performance or capabilities. - **Unified AI Stack Management:** Seamlessly manage the entire lifecycle of AI applications, from vector search with Qdrant Hybrid Cloud to deployment and scaling with the Vultr platform and its AI and ML solutions, all within a single, integrated environment. This setup simplifies workflows, reduces complexity, accelerates development cycles, and simplifies the integration with other elements of the AI stack like model development, finetuning, or inference and training. - **Global Reach, Local Execution**: With Vultr's worldwide infrastructure and Qdrant's fast vector search, deploy AI solutions globally while ensuring low latency and compliance with local data regulations, enhancing user satisfaction. #### Getting Started with Qdrant Hybrid Cloud and Vultr We've compiled an in-depth guide for leveraging Qdrant Hybrid Cloud on Vultr to kick off your journey into building cutting-edge AI solutions. For further insights into the deployment process, refer to our comprehensive documentation. ![hybrid-cloud-vultr-tutorial](/blog/hybrid-cloud-vultr/hybrid-cloud-vultr-tutorial.png) #### Tutorial: Crafting a Personalized AI Assistant with RAG This tutorial outlines creating a personalized AI assistant using Qdrant Hybrid Cloud on Vultr, incorporating advanced vector search to power dynamic, interactive experiences. We will develop a RAG pipeline powered by DSPy and detail how to maintain data privacy within your Vultr environment. [Try the Tutorial](/documentation/tutorials/rag-chatbot-vultr-dspy-ollama/) #### Documentation: Effortless Deployment with Qdrant Our Kubernetes-native framework simplifies the deployment of Qdrant Hybrid Cloud on Vultr, enabling you to get started in just a few straightforward steps. Dive into our documentation to learn more. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud.md
--- draft: false title: "Qdrant Hybrid Cloud: the First Managed Vector Database You Can Run Anywhere" slug: hybrid-cloud short_description: description: preview_image: /blog/hybrid-cloud/hybrid-cloud.png social_preview_image: /blog/hybrid-cloud/hybrid-cloud.png date: 2024-04-15T00:01:00Z author: Andre Zayarni, CEO & Co-Founder featured: true tags: - Hybrid Cloud --- We are excited to announce the official launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) today, a significant leap forward in the field of vector search and enterprise AI. Rooted in our open-source origin, we are committed to offering our users and customers unparalleled control and sovereignty over their data and vector search workloads. Qdrant Hybrid Cloud stands as **the industry's first managed vector database that can be deployed in any environment** - be it cloud, on-premise, or the edge. <p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/gWH2uhWgTvM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> As the AI application landscape evolves, the industry is transitioning from prototyping innovative AI solutions to actively deploying AI applications into production (incl. GenAI, semantic search, or recommendation systems). In this new phase, **privacy**, **data sovereignty**, **deployment flexibility**, and **control** are at the top of developers’ minds. These factors are critical when developing, launching, and scaling new applications, whether they are customer-facing services like AI assistants or internal company solutions for knowledge and information retrieval or process automation. Qdrant Hybrid Cloud offers developers a vector database that can be deployed in any existing environment, ensuring data sovereignty and privacy control through complete database isolation - with the full capabilities of our managed cloud service. - **Unmatched Deployment Flexibility**: With its Kubernetes-native architecture, Qdrant Hybrid Cloud provides the ability to bring your own cloud or compute by deploying Qdrant as a managed service on the infrastructure of choice, such as Oracle Cloud Infrastructure (OCI), Vultr, Red Hat OpenShift, DigitalOcean, OVHcloud, Scaleway, STACKIT, Civo, VMware vSphere, AWS, Google Cloud, or Microsoft Azure. - **Privacy & Data Sovereignty**: Qdrant Hybrid Cloud offers unparalleled data isolation and the flexibility to process vector search workloads in their own environments. - **Scalable & Secure Architecture**: Qdrant Hybrid Cloud's design ensures scalability and adaptability with its Kubernetes-native architecture, separates data and control for enhanced security, and offers a unified management interface for ease of use, enabling businesses to grow and adapt without compromising privacy or control. 
- **Effortless Setup in Seconds**: Setting up Qdrant Hybrid Cloud is incredibly straightforward, thanks to our [simple Kubernetes installation](/documentation/hybrid-cloud/) that connects effortlessly with your chosen infrastructure, enabling secure, scalable deployments right from the get-go Let’s explore these aspects in more detail: #### Maximizing Deployment Flexibility: Enabling Applications to Run Across Any Environment ![hybrid-cloud-environments](/blog/hybrid-cloud/hybrid-cloud-environments.png) Qdrant Hybrid Cloud, powered by our seamless Kubernetes-native architecture, is the first managed vector database engineered for unparalleled deployment flexibility. This means that regardless of where you run your AI applications, you can now enjoy the benefits of a fully managed Qdrant vector database, simplifying operations across any cloud, on-premise, or edge locations. For this launch of Qdrant Hybrid Cloud, we are proud to collaborate with key cloud providers, including [Oracle Cloud Infrastructure (OCI)](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers), [Red Hat OpenShift](/blog/hybrid-cloud-red-hat-openshift/), [Vultr](/blog/hybrid-cloud-vultr/), [DigitalOcean](/blog/hybrid-cloud-digitalocean/), [OVHcloud](/blog/hybrid-cloud-ovhcloud/), [Scaleway](/blog/hybrid-cloud-scaleway/), [Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo), and [STACKIT](/blog/hybrid-cloud-stackit/). These partnerships underscore our commitment to delivering a versatile and robust vector database solution that meets the complex deployment requirements of today's AI applications. In addition to our partnerships with key cloud providers, we are also launching in collaboration with renowned AI development tools and framework leaders, including [LlamaIndex](/blog/hybrid-cloud-llamaindex/), [LangChain](/blog/hybrid-cloud-langchain/), [Airbyte](/blog/hybrid-cloud-airbyte/), [JinaAI](/blog/hybrid-cloud-jinaai/), [Haystack by deepset](/blog/hybrid-cloud-haystack/), and [Aleph Alpha](/blog/hybrid-cloud-aleph-alpha/). These launch partners are instrumental in ensuring our users can seamlessly integrate with essential technologies for their AI applications, enriching our offering and reinforcing our commitment to versatile and comprehensive deployment environments. Together with our launch partners we have created detailed tutorials that show how to build cutting-edge AI applications with Qdrant Hybrid Cloud on the infrastructure of your choice. These tutorials are available in our [launch partner blog](/blog/hybrid-cloud-launch-partners/). Additionally, you can find expansive [documentation](/documentation/hybrid-cloud/) and instructions on how to [deploy Qdrant Hybrid Cloud](/documentation/hybrid-cloud/hybrid-cloud-setup/). #### Powering Vector Search & AI with Unmatched Data Sovereignty Proprietary data, the lifeblood of AI-driven innovation, fuels personalized experiences, accurate recommendations, and timely anomaly detection. This data, unique to each organization, encompasses customer behaviors, internal processes, and market insights - crucial for tailoring AI applications to specific business needs and competitive differentiation. However, leveraging such data effectively while ensuring its **security, privacy, and control** requires diligence. 
The innovative architecture of Qdrant Hybrid Cloud ensures **complete database isolation**, empowering developers with the autonomy to decide where they process their vector search workloads, with total data sovereignty. Rooted deeply in our commitment to open-source principles, this approach aims to foster a new level of trust and reliability by providing the essential tools to navigate the exciting landscape of enterprise AI.

#### How We Designed the Qdrant Hybrid Cloud Architecture

We designed the architecture of Qdrant Hybrid Cloud to meet the evolving needs of businesses seeking unparalleled flexibility, control, and privacy.

- **Kubernetes-Native Design**: By embracing Kubernetes, we've ensured that our architecture is both scalable and adaptable. This choice supports our deployment flexibility principle, allowing Qdrant Hybrid Cloud to integrate seamlessly with any infrastructure that can run Kubernetes.
- **Decoupled Data and Control Planes**: Our architecture separates the data plane (where the data is stored and processed) from the control plane (which manages the cluster operations). This separation enhances security, allows for more granular control over the data, and enables the data plane to reside anywhere the user chooses.
- **Unified Management Interface**: Despite the underlying complexity and the diversity of deployment environments, we designed a unified, user-friendly interface that simplifies Qdrant cluster management. This interface supports everything from deployment to scaling and upgrading operations, all accessible from the [Qdrant Cloud portal](https://cloud.qdrant.io/login).
- **Extensible and Modular**: Recognizing the rapidly evolving nature of technology and enterprise needs, we built Qdrant Hybrid Cloud to be both extensible and modular. Users can easily integrate new services, data sources, and deployment environments as their requirements grow and change.

#### Diagram: Qdrant Hybrid Cloud Architecture

![hybrid-cloud-architecture](/blog/hybrid-cloud/hybrid-cloud-architecture.png)

#### Quickstart: Effortless Setup with Our One-Step Installation

We've made getting started with Qdrant Hybrid Cloud as simple as possible. The Kubernetes "One-Step" installation will allow you to connect with the infrastructure of your choice. This is how you can get started:

1. **Activate Hybrid Cloud**: Simply sign up for or log into your [Qdrant Cloud](https://cloud.qdrant.io/login) account and navigate to the **Hybrid Cloud** section.
2. **Onboard your Kubernetes cluster**: Follow the onboarding wizard and add your Kubernetes cluster as a Hybrid Cloud Environment - be it in the cloud, on-premise, or at the edge.
3. **Deploy Qdrant clusters securely, with confidence:** Now, you can effortlessly create and manage Qdrant clusters in your own environment, directly from the central Qdrant Management Console. This supports horizontal and vertical scaling, zero-downtime upgrades, and disaster recovery seamlessly, allowing you to deploy anywhere with confidence.

Explore our [detailed documentation](/documentation/hybrid-cloud/) and [tutorials](/documentation/examples/) to seamlessly deploy Qdrant Hybrid Cloud in your preferred environment, and don't miss our [launch partner blog post](/blog/hybrid-cloud-launch-partners/) for practical insights. Start leveraging the full potential of Qdrant Hybrid Cloud and [create your first Qdrant cluster today](https://cloud.qdrant.io/login), unlocking the flexibility and control essential for your AI and vector search workloads.
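Once a cluster is up in your environment, working with it is no different from any other Qdrant deployment: you point the client at the endpoint shown in the Qdrant Management Console. A minimal smoke-test sketch with the Python client follows; the endpoint, API key, and collection name are placeholders, not values from the documentation.

```python
# Minimal sketch: once a Qdrant cluster is running in your own environment,
# you talk to it like any other Qdrant deployment via the official client.
# The endpoint and API key below are placeholders for the values shown in
# the Qdrant Management Console for your cluster.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="https://qdrant.my-cluster.internal:6333", api_key="<api-key>")

# Create a small collection and confirm the cluster is reachable.
client.create_collection(
    collection_name="smoke-test",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.DOT),
)
print(client.get_collections())
```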
[![hybrid-cloud-get-started](/blog/hybrid-cloud/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login)

## Launch Partners

We launched Qdrant Hybrid Cloud with the assistance and support of our trusted partners. Learn what they have to say about our latest offering:

#### Oracle Cloud Infrastructure:

> *"We are excited to partner with Qdrant to bring their powerful vector search capabilities to Oracle Cloud Infrastructure. By offering Qdrant Hybrid Cloud as a managed service on OCI, we are empowering enterprises to harness the full potential of AI-driven applications while maintaining complete control over their data. This collaboration represents a significant step forward in making scalable vector search accessible and manageable for businesses across various industries, enabling them to drive innovation, enhance productivity, and unlock valuable insights from their data."* Dr. Sanjay Basu, Senior Director of Cloud Engineering, AI/GPU Infrastructure at Oracle

Read more in [OCI's latest Partner Blog](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers).

#### Red Hat:

> *"Red Hat is committed to driving transparency, flexibility and choice for organizations to more easily unlock the power of AI. By working with partners like Qdrant to enable streamlined integration experiences on Red Hat OpenShift for AI use cases, organizations can more effectively harness critical data and deliver real business outcomes,"* said Steven Huels, Vice President and General Manager, AI Business Unit, Red Hat.

Read more in our [official Red Hat Partner Blog](/blog/hybrid-cloud-red-hat-openshift/).

#### Vultr:

> *"Our collaboration with Qdrant empowers developers to unlock the potential of vector search applications, such as RAG, by deploying Qdrant Hybrid Cloud with its high-performance search capabilities directly on Vultr's global, automated cloud infrastructure. This partnership creates a highly scalable and customizable platform, uniquely designed for deploying and managing AI workloads with unparalleled efficiency."* Kevin Cochrane, Vultr CMO.

Read more in our [official Vultr Partner Blog](/blog/hybrid-cloud-vultr/).

#### OVHcloud:

> *"The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely."* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud

Read more in our [official OVHcloud Partner Blog](/blog/hybrid-cloud-ovhcloud/).

#### DigitalOcean:

> *"Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation."* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean.

Read more in our [official DigitalOcean Partner Blog](/blog/hybrid-cloud-digitalocean/).

#### Scaleway:

> *"With our partnership with Qdrant, Scaleway reinforces its status as Europe's leading cloud provider for AI innovation. The integration of Qdrant's fast and accurate vector database enriches our expanding suite of AI solutions. This means you can build smarter, faster AI projects with us, worry-free about performance and security."* Frédéric Bardolle, Lead PM AI, Scaleway

Read more in our [official Scaleway Partner Blog](/blog/hybrid-cloud-scaleway/).

#### Airbyte:

> *"The new Qdrant Hybrid Cloud is an exciting addition that offers peace of mind and flexibility, aligning perfectly with the needs of Airbyte Enterprise users who value the same balance. Being open-source at our core, both Qdrant and Airbyte prioritize giving users the flexibility to build and test locally - a significant advantage for data engineers and AI practitioners. We're enthusiastic about the Hybrid Cloud launch, as it mirrors our vision of enabling users to confidently transition from local development and local deployments to a managed solution, with both cloud and hybrid cloud deployment options."* AJ Steers, Staff Engineer for AI, Airbyte

Read more in our [official Airbyte Partner Blog](/blog/hybrid-cloud-airbyte/).

#### deepset:

> *"We hope that with Haystack 2.0 and our growing partnerships such as what we have here with Qdrant Hybrid Cloud, engineers are able to build AI systems with full autonomy. Both in how their pipelines are designed, and how their data are managed."* Tuana Çelik, Developer Relations Lead, deepset.

Read more in our [official Haystack by deepset Partner Blog](/blog/hybrid-cloud-haystack/).

#### LlamaIndex:

> *"LlamaIndex is thrilled to partner with Qdrant on the launch of Qdrant Hybrid Cloud, which upholds Qdrant's core functionality within a Kubernetes-based architecture. This advancement enhances LlamaIndex's ability to support diverse user environments, facilitating the development and scaling of production-grade, context-augmented LLM applications."* Jerry Liu, CEO and Co-Founder, LlamaIndex

Read more in our [official LlamaIndex Partner Blog](/blog/hybrid-cloud-llamaindex/).

#### LangChain:

> *"The AI industry is rapidly maturing, and more companies are moving their applications into production. We're really excited at LangChain about supporting enterprises' unique data architectures and tooling needs through integrations and first-party offerings through LangSmith. First-party enterprise integrations like Qdrant's greatly contribute to the LangChain ecosystem with enterprise-ready retrieval features that seamlessly integrate with LangSmith's observability, production monitoring, and automation features, and we're really excited to develop our partnership further."* - Erick Friis, Founding Engineer at LangChain

Read more in our [official LangChain Partner Blog](/blog/hybrid-cloud-langchain/).

#### Jina AI:

> *"The collaboration of Qdrant Hybrid Cloud with Jina AI's embeddings gives every user the tools to craft a perfect search framework with unmatched accuracy and scalability. It's a partnership that truly pays off!"* Nan Wang, CTO, Jina AI

Read more in our [official Jina AI Partner Blog](/blog/hybrid-cloud-jinaai/).

We have also launched Qdrant Hybrid Cloud with the support of **Aleph Alpha**, **STACKIT**, and **Civo**. Learn more about our valued partners:

- **Aleph Alpha:** [Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud](/blog/hybrid-cloud-aleph-alpha/)
- **STACKIT:** [STACKIT and Qdrant Hybrid Cloud for Best Data Privacy](/blog/hybrid-cloud-stackit/)
- **Civo:** [Deploy Qdrant Hybrid Cloud on Civo Kubernetes](/documentation/hybrid-cloud/platform-deployment-options/#civo)
qdrant-landing/content/blog/indexify-unveiled-diptanu-gon-choudhury-vector-space-talk-009.md
---
draft: false
title: Indexify Unveiled - Diptanu Gon Choudhury | Vector Space Talks
slug: indexify-content-extraction-engine
short_description: Diptanu Gon Choudhury discusses how Indexify is transforming the AI-driven workflow in enterprises today.
description: Diptanu Gon Choudhury shares insights on re-imagining Spark and data infrastructure while discussing his work on Indexify to enhance AI-driven workflows and knowledge bases.
preview_image: /blog/from_cms/diptanu-choudhury-cropped.png
date: 2024-01-26T16:40:55.469Z
author: Demetrios Brinkmann
featured: false
tags:
  - Vector Space Talks
  - Indexify
  - structured extraction engine
  - rag-based applications
---

> *"We have something like Qdrant, which is very geared towards doing vector search. And so we understand the shape of the storage system now."*\
— Diptanu Gon Choudhury
>
> Diptanu Gon Choudhury is the founder of Tensorlake. They are building Indexify - an open-source, scalable, structured extraction engine for unstructured data to build near-real-time knowledge bases for AI/agent-driven workflows and query engines. Before building Indexify, Diptanu created the Nomad cluster scheduler at HashiCorp, invented the Titan/Titus cluster scheduler at Netflix, led the FBLearner machine learning platform, and built the real-time speech inference engine at Facebook.

***Listen to the episode on [Spotify](https://open.spotify.com/episode/6MSwo7urQAWE7EOxO7WTns?si=_s53wC0wR9C4uF8ngGYQlg), Apple Podcast, Podcast Addict, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RoOgTxHkViA).***

<iframe width="560" height="315" src="https://www.youtube.com/embed/RoOgTxHkViA?si=r0EjWlssjFDVrzo6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Indexify-Unveiled-A-Scalable-and-Near-Real-time-Content-Extraction-Engine-for-Multimodal-Unstructured-Data---Diptanu-Gon-Choudhury--Vector-Space-Talk-009-e2el8qc/a-aas4nil" height="102px" width="400px" frameborder="0" scrolling="no"></iframe>

## **Top takeaways:**

Discover how reimagined data infrastructure revolutionizes AI-agent workflows as Diptanu delves into Indexify, transforming raw data into real-time knowledge bases, and shares expert insights on optimizing RAG-based applications, all amidst the ever-evolving landscape of Spark.

Here's What You'll Discover:

1. **Innovative Data Infrastructure**: Diptanu dives deep into how Indexify is revolutionizing the enterprise world by providing a sharper focus on data infrastructure and a refined abstraction for generative AI this year.
2. **AI Copilot for Call Centers**: Learn how Indexify streamlines customer service with a real-time knowledge base, transforming how agents interact and resolve issues.
3. **Scaling Real-Time Indexing**: Discover the system's powerful capability to index content as it happens, enabling multiple extractors to run simultaneously. It's all about the right model and the computing capacity for on-the-fly content generation.
4. **Revamping Developer Experience**: Get a glimpse into the future as Diptanu chats with Demetrios about reimagining Spark to fit today's tech capabilities, vastly different from just two years ago!
5. **AI Agent Workflow Insights**: Understand the crux of AI agent-driven workflows, where models dynamically react to data, making orchestrated decisions in live environments.
> Fun Fact: The development of Indexify by Diptanu was spurred by the rising use of Large Language Models in applications and the subsequent need for better data infrastructure to support these technologies.
>

## Show notes:

00:00 AI's impact on model production and workflows.\
05:15 Building agents need indexes for continuous updates.\
09:27 Early RAG and LLM adopters neglect data infrastructure.\
12:32 Design partner creating copilot for call centers.\
17:00 Efficient indexing and generation using scalable models.\
20:47 Spark is versatile, used for many cases.\
24:45 Recent survey paper on RAG covers tips.\
26:57 Evaluation of various aspects of data generation.\
28:45 Balancing trust and cost in factual accuracy.

## More Quotes from Diptanu:

*"In 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production."*\
-- Diptanu Gon Choudhury

*"Over a period of time, you want to extract new information out of existing data, because models are getting better continuously."*\
-- Diptanu Gon Choudhury

*"We are in the golden age of demos. Golden age of demos with LLMs. Almost anyone, I think with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on."*\
-- Diptanu Gon Choudhury

## Transcript:

Demetrios: We are live, baby. This is it. Welcome back to another Vector Space Talks. I'm here with my man Diptanu. He is the founder and creator of Tensorlake. They are building Indexify, an open-source, scalable, structured extraction engine for unstructured data to build near-real-time knowledge bases for AI agent-driven workflows and query engines. And if it sounds like I just threw every buzzword in the book into that sentence, you can go ahead and say bingo. We are here, and we're about to dissect what all that means in the next 30 minutes. So, dude, first of all, I got to just let everyone know who is here, that you are a bit of a hard hitter.

Demetrios: You've got some track record and some notches on your belt, we could say. Before you created Tensorlake, let's just let people know that you were at HashiCorp, you created the Nomad cluster scheduler, and you were the inventor of the Titus cluster scheduler at Netflix. You led the FBLearner machine learning platform and built the real-time speech inference engine at Facebook. You may be one of the most decorated people we've had on and that I have had the pleasure of talking to, and that's saying a lot. I've talked to a lot of people in my day, so I want to dig in, man. First question I've got for you, it's a big one. What the hell do you mean by AI agent-driven workflows? Are you talking about autonomous agents? Are you talking, like, the voice agents? What's that?

Diptanu Gon Choudhury: Yeah, I was going to say, what a great last couple of years it has been for AI. I mean, in-context learning has kind of changed the way people build models and access models and use models in production. Like, at Facebook in 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production. It's a little bit of a YOLO where I feel like people have stopped measuring how well models are doing and just ship to production, but here we are.

But I think underpinning all of this is kind of like this whole idea that models are capable of reasoning over data and non-parametric knowledge to a certain extent. And what we are seeing now is workflows stop being completely heuristics-driven, or as people say, like software 1.0 driven. People are putting models in the picture, where models are reacting to data that a workflow is seeing, and then people are using the model's behavior on the data to kind of let the model decide what the workflow should do. And I think that's pretty much, to me, what an agent is: an agent responds to information of the world and information which is external, and it reacts to that information and orchestrates some kind of business process, some kind of workflow, some kind of decision-making in a workflow.

Diptanu Gon Choudhury: That's what I mean by agents. And they can be autonomous. They can be something that writes an email or writes a chat message or something like that. The spectrum is wide here.

Demetrios: Excellent. So the next logical question is, and I will second what you're saying: the advances that we've seen in the last year, wow. The times are a-changing; we are trying to evaluate while in production. And I like the term: yeah, we just YOLOed it. Or as the young kids say now, or so I've heard, because I'm not one of them, we just do it for the plot. So we are getting those models out there, we're seeing if they work. And I imagine you saw some funny quotes from the Chevrolet chatbot - it was a chatbot on the Chevrolet support page, and it was asked if Teslas are better than Chevys. And it said, yeah, Teslas are better than Chevys.

Demetrios: So yes, that's what we do these days. This is 2024, baby. We just put it out there and test in prod. Anyway, getting back on topic, let's talk about Indexify, because there was a whole lot of jargon that I said about what you do. Give me the straight-shooting answer. Break it down for me like I was five.

Diptanu Gon Choudhury: Yeah. So if you are building an agent today which depends on augmented generation, like retrieval augmented generation - and given that this is Qdrant's show, I'm assuming people are very much familiar with RAG and augmented generation - so if people are building applications where the data is external or non-parametric, and the model needs to see updated information all the time, because let's say the documents under the hood that the application is using for its knowledge base are changing, or someone is building a chat application where new chat messages are coming in all the time and the agent or the model needs to know what is happening, then you need an index, or a set of indexes, which are continuously updated. Also, over a period of time, you want to extract new information out of existing data, because models are getting better continuously. And the other thing is, AI, until a couple of years back, used to be very domain-oriented or task-oriented, where modality was the key behind models. Now we are entering a world where information encoded in any form - documents, videos, or whatever - is important to these workflows or agents that people are building. And so you need the capability to ingest any kind of data and then build indexes out of it. And indexes, in my opinion, are not just embedding indexes; they could be indexes of semi-structured data. So let's say you have an invoice.
Diptanu Gon Choudhury: You want to maybe transform that invoice into semi structured data of where the invoice is coming from or what are the line items and so on. So in a nutshell, you need good data infrastructure to store these indexes and serve these indexes. And also you need a scalable compute engine so that whenever new data comes in, you're able to index them appropriately and update the indexes and so on. And also you need capability to experiment, to add new extractors into your platform, add new models into your platform, and so on. Indexify helps you with all that, right? So indexify, imagine indexify to be an online service with an API so that developers can upload any form of unstructured data, and then a bunch of extractors run in parallel on the cluster and extract information out of this unstructured data, and then update indexes on something like Qdrant or postgres for semi structured data continuously. Demetrios: Okay? Diptanu Gon Choudhury: And you basically get that in a single application, in a single binary, which is distributed on your cluster. You wouldn't have any external dependencies other than storage systems, essentially, to have a very scalable data infrastructure for your Rag applications or for your LLM agents. Demetrios: Excellent. So then talk to me about the inspiration for creating this. What was it that you saw that gave you that spark of, you know what? There needs to be something on the market that can handle this. Yeah. Diptanu Gon Choudhury: Earlier this year I was working with founder of a generative AI startup here. I was looking at what they were doing, I was helping them out, and I saw that. And then I looked around, I looked around at what is happening. Not earlier this year as in 2023. Somewhere in early 2023, I was looking at how developers are building applications with llms, and we are in the golden age of demos. Golden age of demos with llms. Almost anyone, I think with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on. And I mostly saw that the data infrastructure part of those demos or those applications were very basic people would do like one shot transformation of data, build indexes and then do stuff, build an application on top. Diptanu Gon Choudhury: And then I started talking to early adopters of RaG and llms in enterprises, and I started talking to them about how they're building their data pipelines and their data infrastructure for llms. And I feel like people were mostly excited about the application layer, right? A very less amount of thought was being put on the data infrastructure, and it was almost like built out of duct tape, right, of pipeline, like pipelines and workflows like RabbitMQ, like x, Y and z, very bespoke pipelines, which are good at one shot transformation of data. So you put in some documents on a queue, and then somehow the documents get embedded and put into something like Qdrant. But there was no thought about how do you re index? How do you add a new capability into your pipeline? Or how do you keep the whole system online, right? Keep the indexes online while reindexing and so on. And so classically, if you talk to a distributed systems engineer, they would be, you know, this is a mapreduce problem, right? So there are tools like Spark, there are tools like any skills ray, and they would classically solve these problems, right? 
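To make the extract-embed-index loop Diptanu describes above a bit more concrete, here is a minimal, single-process sketch: an extractor turns raw documents into chunks, the chunks are embedded, and the results are upserted into Qdrant so the index stays current as new content arrives. This is illustrative only and is not Indexify's actual API; the collection name, the `extract_chunks` helper, and the choice of `all-MiniLM-L6-v2` as the embedding model are assumptions.

```python
# Illustrative sketch only -- not Indexify's API. It shows the core loop the
# conversation describes: extract -> embed -> continuously update an index.
import uuid

from qdrant_client import QdrantClient, models
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # produces 384-dim embeddings
client = QdrantClient(url="http://localhost:6333")  # assumed local Qdrant instance

client.recreate_collection(
    collection_name="knowledge_base",  # hypothetical collection name
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

def extract_chunks(document: str) -> list[str]:
    # Hypothetical extractor: in a real pipeline this could be an ASR model,
    # a PDF parser, an entity extractor, etc. Here it is naive paragraph splitting.
    return [p.strip() for p in document.split("\n\n") if p.strip()]

def ingest(document: str, source: str) -> None:
    chunks = extract_chunks(document)
    vectors = encoder.encode(chunks).tolist()
    client.upsert(
        collection_name="knowledge_base",
        points=[
            models.PointStruct(
                id=str(uuid.uuid4()),
                vector=vec,
                payload={"text": chunk, "source": source},
            )
            for chunk, vec in zip(chunks, vectors)
        ],
    )

# New content can be ingested at any time; the index is updated incrementally.
ingest("First paragraph about invoices.\n\nSecond paragraph about refunds.", "email-123")
```

In Indexify itself, the extraction step would run as extractors distributed across a cluster rather than in a single process; the sketch only captures the shape of the data flow into Qdrant or Postgres.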
And if you go to Facebook, we use Spark for something like this, or like presto, or we have a ton of big data infrastructure for handling things like this. And I thought that in 2023 we need a better abstraction for doing something like this. The world is moving to our server less, right? Developers understand functions. Developer thinks about computers as functions and functions which are distributed on the cluster and can transform content into something that llms can consume. Diptanu Gon Choudhury: And that was the inspiration I was thinking, what would it look like if we redid Spark or ray for generative AI in 2023? How can we make it so easy so that developers can write functions to extract content out of any form of unstructured data, right? You don't need to think about text, audio, video, or whatever. You write a function which can kind of handle a particular data type and then extract something out of it. And now how can we scale it? How can we give developers very transparently, like, all the abilities to manage indexes and serve indexes in production? And so that was the inspiration for it. I wanted to reimagine Mapreduce for generative AI. Demetrios: Wow. I like the vision you sent me over some ideas of different use cases that we can walk through, and I'd love to go through that and put it into actual tangible things that you've been seeing out there. And how you can plug it in to these different use cases. I think the first one that I wanted to look at was building a copilot for call center agents and what that actually looks like in practice. Yeah. Diptanu Gon Choudhury: So I took that example because that was super close to my heart in the sense that we have a design partner like who is doing this. And you'll see that in a call center, the information that comes in into a call center or the information that an agent in a human being in a call center works with is very rich. In a call center you have phone calls coming in, you have chat messages coming in, you have emails going on, and then there are also documents which are knowledge bases for human beings to answer questions or make decisions on. Right. And so they're working with a lot of data and then they're always pulling up a lot of information. And so one of our design partner is like building a copilot for call centers essentially. And what they're doing is they want the humans in a call center to answer questions really easily based on the context of a conversation or a call that is happening with one of their users, or pull up up to date information about the policies of the company and so on. And so the way they are using indexify is that they ingest all the content, like the raw content that is coming in video, not video, actually, like audio emails, chat messages into indexify. Diptanu Gon Choudhury: And then they have a bunch of extractors which handle different type of modalities, right? Some extractors extract information out of emails. Like they would do email classification, they would do embedding of emails, they would do like entity extraction from emails. And so they are creating many different types of indexes from emails. Same with speech. Right? Like data that is coming on through calls. They would transcribe them first using ASR extractor, and from there on the speech would be embedded and the whole pipeline for a text would be invoked into it, and then the speech would be searchable. If someone wants to find out what conversation has happened, they would be able to look up things. 
There is a summarizer extractor, which is like looking at a phone call and then summarizing what the customer had called and so on. Diptanu Gon Choudhury: So they are basically building a near real time knowledge base of one what is happening with the customer. And also they are pulling in information from their documents. So that's like one classic use case. Now the only dependency now they have is essentially like a blob storage system and serving infrastructure for indexes, like in this case, like Qdrant and postgres. And they have a bunch of extractors that they have written in house and some extractors that we have written, they're using them out of the box and they can scale the system to as much as they need. And it's kind of like giving them a high level abstraction of building indexes and using them in llms. Demetrios: So I really like this idea of how you have the unstructured and you have the semi structured and how those play together almost. And I think one thing that is very clear is how you've got the transcripts, you've got the embeddings that you're doing, but then you've also got documents that are very structured and maybe it's from the last call and it's like in some kind of a database. And I imagine we could say whatever, salesforce, it's in a salesforce and you've got it all there. And so there is some structure to that data. And now you want to be able to plug into all of that and you want to be able to, especially in this use case, the call center agents, human agents need to make decisions and they need to make decisions fast. Right. So the real time aspect really plays a part of that. Diptanu Gon Choudhury: Exactly. Demetrios: You can't have it be something that it'll get back to you in 30 seconds, or maybe 30 seconds is okay, but really the less time the better. And so traditionally when I think about using llms, I kind of take real time off the table. Have you had luck with making it more real time? Yeah. Diptanu Gon Choudhury: So there are two aspects of it. How quickly can your indexes be updated? As of last night, we can index all of Wikipedia under five minutes on AWS. We can run up to like 5000 extractors with indexify concurrently and parallel. I feel like we got the indexing part covered. Unless obviously you are using a model as behind an API where we don't have any control. But assuming you're using some kind of embedding model or some kind of extractor model, right, like a named entity extractor or an speech to text model that you control and you understand the I Ops, we can scale it out and our system can kind of handle the scale of getting it indexed really quickly. Now on the generation side, that's where it's a little bit more nuanced, right? Generation depends on how big the generation model is. If you're using GPD four, then obviously you would be playing with the latency budgets that OpenAI provides. Diptanu Gon Choudhury: If you're using some other form of models like mixture MoE or something which is very optimized and you have worked on making the model optimized, then obviously you can cut it down. So it depends on the end to end stack. It's not like a single piece of software. It's not like a monolithic piece of software. So it depends on a lot of different factors. But I can confidently claim that we have gotten the indexing side of real time aspects covered as long as the models people are using are reasonable and they have enough compute in their cluster. Demetrios: Yeah. Okay. 
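The per-modality fan-out described in the call center example can be sketched as a small dispatch table: each content type runs through its own chain of extractors before the resulting text is embedded and indexed as in the earlier sketch. The helper functions below are hypothetical stand-ins for real ASR, classification, and summarization models, not any product's API.

```python
# Illustrative fan-out of per-modality extractors, as in the call-center example.
# Records returned here would then be embedded and upserted into Qdrant, with
# structured fields written to Postgres.
def transcribe(audio: bytes) -> str:
    return f"<transcript of {len(audio)} bytes of audio>"  # ASR stand-in

def classify_email(text: str) -> str:
    return "billing" if "invoice" in text.lower() else "general"  # classifier stand-in

def summarize_call(transcript: str) -> str:
    return transcript[:200]  # summarizer stand-in

def handle_email(raw: bytes) -> list[dict]:
    text = raw.decode("utf-8")
    return [{"text": text, "kind": "email", "intent": classify_email(text)}]

def handle_audio(raw: bytes) -> list[dict]:
    transcript = transcribe(raw)
    return [
        {"text": transcript, "kind": "call_transcript"},
        {"text": summarize_call(transcript), "kind": "call_summary"},
    ]

EXTRACTORS = {"email": handle_email, "audio": handle_audio}

def process(content_type: str, raw: bytes) -> list[dict]:
    return EXTRACTORS[content_type](raw)

records = process("email", b"Hello, my invoice 4711 seems to be wrong.")
```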
Now talking again about the idea of rethinking the developer experience with this and almost reimagining what Spark would be if it were created today. Diptanu Gon Choudhury: Exactly. Demetrios: How do you think that there are manifestations in what you've built that play off of things that could only happen because you created it today as opposed to even two years ago. Diptanu Gon Choudhury: Yeah. So I think, for example, take Spark, right? Spark was born out of big data, like the 2011 twelve era of big data. In fact, I was one of the committers on Apache Mesos, the cluster scheduler that Spark used for a long time. And then when I was at Hashicorp, we tried to contribute support for Nomad in Spark. What I'm trying to say is that Spark is a task scheduler at the end of the day and it uses an underlying scheduler. So the teams that manage spark today or any other similar tools, they have like tens or 15 people, or they're using like a hosted solution, which is super complex to manage. Right. A spark cluster is not easy to manage. Diptanu Gon Choudhury: I'm not saying it's a bad thing or whatever. Software written at any given point in time reflect the world in which it was born. And so obviously it's from that era of systems engineering and so on. And since then, systems engineering has progressed quite a lot. I feel like we have learned how to make software which is scalable, but yet simpler to understand and to operate and so on. And the other big thing in spark that I feel like is missing or any skills, Ray, is that they are not natively integrated into the data stack. Right. They don't have an opinion on what the data stack is. Diptanu Gon Choudhury: They're like excellent Mapreduce systems, and then the data stuff is layered on top. And to a certain extent that has allowed them to generalize to so many different use cases. People use spark for everything. At Facebook, I was using Spark for batch transcoding of speech, to text, for various use cases with a lot of issues under the hood. Right? So they are tied to the big data storage infrastructure. So when I am reimagining Spark, I almost can take the position that we are going to use blob storage for ingestion and writing raw data, and we will have low latency serving infrastructure in the form of something like postgres or something like clickhouse or something for serving like structured data or semi structured data. And then we have something like Qdrant, which is very geared towards doing vector search and so on. And so we understand the shape of the storage system now. Diptanu Gon Choudhury: We understand that developers want to integrate with them. So now we can control the compute layer such that the compute layer is optimized for doing the compute and producing data such that they can be written in those data stores, right? So we understand the I Ops, right? The I O, what is it called? The I O characteristics of the underlying storage system really well. And we understand that the use case is that people want to consume those data in llms, right? So we can make design decisions such that how we write into those, into the storage system, how we serve very specifically for llms, that I feel like a developer would be making those decisions themselves, like if they were using some other tool. Demetrios: Yeah, it does feel like optimizing for that and recognizing that spark is almost like a swiss army knife. As you mentioned, you can do a million things with it, but sometimes you don't want to do a million things. 
You just want to do one thing and you want it to be really easy to be able to do that one thing. I had a friend who worked at some enterprise and he was talking about how spark engineers have all the job security in the world, because a, like you said, you need a lot of them, and b, it's hard stuff being able to work on that and getting really deep and knowing it and the ins and outs of it. So I can feel where you're coming from on that one. Diptanu Gon Choudhury: Yeah, I mean, we basically integrated the compute engine with the storage so developers don't have to think about it. Plug in whatever storage you want. We support, obviously, like all the blob stores, and we support Qdrant and postgres right now, indexify in the future can even have other storage engines. And now all an application developer needs to do is deploy this on AWS or GCP or whatever, right? Have enough compute, point it to the storage systems, and then now build your application. You don't need to make any of the hard decisions or build a distributed systems by bringing together like five different tools and spend like five months building the data layer, focus on the application, build your agents. Demetrios: So there is something else. As we are winding down, I want to ask you one last thing, and if anyone has any questions, feel free to throw them in the chat. I am monitoring that also, but I am wondering about advice that you have for people that are building rag based applications, because I feel like you've probably seen quite a few out there in the wild. And so what are some optimizations or some nice hacks that you've seen that have worked really well? Yeah. Diptanu Gon Choudhury: So I think, first of all, there is a recent paper, like a rack survey paper. I really like it. Maybe you can have the link on the show notes if you have one. There was a recent survey paper, I really liked it, and it covers a lot of tips and tricks that people can use with Rag. But essentially, Rag is an information. Rag is like a two step process in its essence. One is the document selection process and the document reading process. Document selection is how do you retrieve the most important information out of million documents that might be there, and then the reading process is how do you jam them in the context of a model, and so that the model can kind of ground its generation based on the context. Diptanu Gon Choudhury: So I think the most tricky part here, and the part which has the most tips and tricks is the document selection part. And that is like a classic information retrieval problem. So I would suggest people doing a lot of experimentation around ranking algorithms, hitting different type of indexes, and refining the results by merging results from different indexes. One thing that always works for me is reducing the search space of the documents that I am selecting in a very systematic manner. So like using some kind of hybrid search where someone does the embedding lookup first, and then does the keyword lookup, or vice versa, or does lookups parallel and then merges results together? Those kind of things where the search space is narrowed down always works for me. Demetrios: So I think one of the Qdrant team members would love to know because I've been talking to them quite frequently about this, the evaluating of retrieval. Have you found any tricks or tips around that and evaluating the quality of what is retrieved? 
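The "narrow the search space, then merge" tip from the previous answer can be sketched with the Qdrant Python client: run a dense (embedding) lookup and a keyword-filtered lookup, then merge the two ranked lists with reciprocal rank fusion. The collection name and the assumption that a full-text payload index exists on the `text` field are illustrative; newer Qdrant versions also offer server-side fusion through the Query API.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")
COLLECTION = "knowledge_base"  # hypothetical collection name

def rrf_merge(result_lists, k: int = 60) -> list:
    # Reciprocal rank fusion: score each point by the sum of 1 / (k + rank)
    # over every ranked list it appears in, then sort by that score.
    scores, points = {}, {}
    for results in result_lists:
        for rank, point in enumerate(results):
            scores[point.id] = scores.get(point.id, 0.0) + 1.0 / (k + rank + 1)
            points[point.id] = point
    return [points[pid] for pid, _ in sorted(scores.items(), key=lambda x: -x[1])]

def hybrid_search(query: str, query_vector: list[float], limit: int = 5):
    dense_hits = client.search(
        collection_name=COLLECTION, query_vector=query_vector, limit=limit * 4
    )
    keyword_hits = client.search(
        collection_name=COLLECTION,
        query_vector=query_vector,
        limit=limit * 4,
        query_filter=models.Filter(  # assumes a full-text index on the "text" field
            must=[models.FieldCondition(key="text", match=models.MatchText(text=query))]
        ),
    )
    return rrf_merge([dense_hits, keyword_hits])[:limit]
```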
Diptanu Gon Choudhury: So I haven't come across a golden trick that fits every use case, a one-size-fits-all solution for evaluation. Evaluation is really hard. There are open source projects like Ragas that are trying to solve it, and everyone is trying to solve various aspects of evaluating RAG. Some of them try to evaluate how accurate the results are, some people are trying to evaluate how diverse the answers are, and so on. I think the most important thing that our design partners care about is factual accuracy. One process that has worked really well is having a critique model. So let the generation model generate some data and then have a critique model go and try to find citations and look up how accurate the data is, how accurate the generation is, and then feed that back into the system. One other thing, going back to the previous point about what tricks someone can use for doing RAG really well: I feel like people don't fine-tune embedding models that much.

Diptanu Gon Choudhury: I think if people are using an embedding model, like a sentence transformer or anything off the shelf, they should look into fine-tuning the embedding models on the data set that they are embedding. And I think a combination of fine-tuning the embedding models and doing some factual accuracy checks goes a long way in getting RAG working really well.

Demetrios: Yeah, it's an interesting one. And I'll probably leave it here on the extra model that is basically checking factual accuracy. You've always got these trade-offs that you're playing with, right? And one of the trade-offs is going to be, maybe you're making another LLM call, which could be more costly, but you're gaining trust, or you're gaining confidence that what it's outputting is actually what it says it is, and it's actually factually correct, as you said. So it's like, what price can you put on trust? And we're going back to that whole thing that I saw on Chevy's website where they were saying that a Tesla is better. It's like, that hopefully doesn't happen anymore as people deploy this stuff and they recognize that humans are cunning when it comes to playing around with chatbots. So this has been fascinating, man. I appreciate you coming on here and chatting with me about it.

Demetrios: I encourage everyone to go and reach out to you on LinkedIn, I know you are on there, and we'll leave a link to your LinkedIn in the chat too. And if not, check out Tensorlake, check out Indexify, and we will be in touch. Man, this was great.

Diptanu Gon Choudhury: Yeah, same. It was really great chatting with you about this, Demetrios, and thanks for having me today.

Demetrios: Cheers. I'll talk to you later.
qdrant-landing/content/blog/insight-generation-platform-for-lifescience-corporation-hooman-sedghamiz-vector-space-talks-014.md
--- draft: false title: Insight Generation Platform for LifeScience Corporation - Hooman Sedghamiz | Vector Space Talks slug: insight-generation-platform short_description: Hooman Sedghamiz explores the potential of large language models in creating cutting-edge AI applications. description: Hooman Sedghamiz discloses the potential of AI in life sciences, from custom knowledge applications to improving crop yield predictions, while tearing apart the nuances of in-house AI deployment for multi-faceted enterprise efficiency. preview_image: /blog/from_cms/hooman-sedghamiz-bp-cropped.png date: 2024-03-25T08:46:28.227Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Retrieval Augmented Generation - Insight Generation Platform --- > *"There is this really great vector db comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023, there were only a few. What I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline.”*\ -- Hooman Sedghamiz > Hooman Sedghamiz, Sr. Director AI/ML - Insights at Bayer AG is a distinguished figure in AI and ML in the life sciences field. With years of experience, he has led teams and projects that have greatly advanced medical products, including implantable and wearable devices. Notably, he served as the Generative AI product owner and Senior Director at Bayer Pharmaceuticals, where he played a pivotal role in developing a GPT-based central platform for precision medicine. In 2023, he assumed the role of Co-Chair for the EMNLP 2023 GEM industrial track, furthering his contributions to the field. Hooman has also been an AI/ML advisor and scientist at the University of California, San Diego, leveraging his expertise in deep learning to drive biomedical research and innovation. His strengths lie in guiding data science initiatives from inception to commercialization and bridging the gap between medical and healthcare applications through MLOps, LLMOps, and deep learning product management. Engaging with research institutions and collaborating closely with Dr. Nemati at Harvard University and UCSD, Hooman continues to be a dynamic and influential figure in the data science community. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2oj2ne5l9qrURQSV0T1Hft?si=DMJRTAt7QXibWiQ9CEKTJw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/yfzLaH5SFX0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/yfzLaH5SFX0?si=I8dw5QddKbPzPVOB" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Charting-New-Frontiers-Creating-a-Pioneering-Insight-Generation-Platform-for-a-Major-Life-Science-Corporation---Hooman-Sedghamiz--Vector-Space-Talks-014-e2fqnnc/a-aavffjd" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? 
These might be just some of the many questions people in the industry are asking themselves. We aim to cover most of it in this talk. Check out their conversation as they peek into world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics. Here are the key topics of this episode: 1. **Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction. 2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions. 3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration. 4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments. 5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios. >Fun Fact: Large language models like Mistral, Llama, and Nexus Raven have improved in their ability to perform function calling with low hallucination and high-quality output. > ## Show notes: 00:00 Introduction to Bayer AG\ 05:15 Drug discovery, trial prediction, medical virtual assistants.\ 10:35 New language models like Llama rival GPT 3.5.\ 12:46 Large language model solving, efficient techniques, open source.\ 16:12 Scaling applications for diverse, individualized models.\ 19:02 Open source offers multilingual embedding.\ 25:06 Stability improved, reliable function calling capabilities emerged.\ 27:19 Platform aims for efficiency, measures impact.\ 31:01 Build knowledge discovery tool, measure value\ 33:10 Wrap up ## More Quotes from Hooman: *"I think there has been concentration around vector stores. So a lot of startups that have appeared around vector store idea, but I think what really is lacking are tools that you have a lot of sources of knowledge, information.*”\ -- Hooman Sedghamiz *"You can now kind of take a look and see that the performance of them is really, really getting close, if not better than GPT 3.5 already at same level and really approaching step by step to GPT 4.”*\ -- Hooman Sedghamiz in advancements in language models *"I think the biggest, I think the untapped potential, it goes back to when you can do scientific discovery and all those sort of applications which are more challenging, not just around the efficiency and all those sort of things.”*\ -- Hooman Sedghamiz ## Transcript: Demetrios: We are here and I couldn't think of a better way to spend my Valentine's Day than with you Hooman this is absolutely incredible. I'm so excited for this talk that you're going to bring and I want to let everyone that is out there listening know what caliber of a speaker we have with us today because you have done a lot of stuff. Folks out there do not let this man's young look fool you. You look like you are not in your fifty's or sixty's. But when it comes to your bio, it looks like you should be in your seventy's. I am very excited. 
You've got a lot of experience running data science projects, ML projects, LLM projects, all that fun stuff. You're working at Bayern Munich, sorry, not Bayern Munich, Bayer AG. And you're the senior director of AI and ML. Demetrios: And I think that there is a ton of other stuff that you've done when it comes to machine learning, artificial intelligence. You've got both like the traditional ML background, I think, and then you've also got this new generative AI background and so you can leverage both. But you also think about things in data engineering way. You understand the whole lifecycle. And so today we get to talk all about some of this fun. I know you've got some slides prepared for us. I'll let you throw those on and I'll let anyone else in the chat. Feel free to ask questions while Hooman is going through the presentation and I'll jump in and stop them when needed. Demetrios: But also we can have a little discussion after a few minutes of slides. So for everyone looking, we're going to be watching this and then we're going to be checking out like really talking about what 2024 AI in the enterprise looks like and what is needed to really take advantage of that. So Hooman, I'm dropping off to you, man, and I'll jump in when needed. Hooman Sedghamiz: Thanks a lot for the introduction. Let me get started. Do you have my screen already? Demetrios: Yeah, we see it. Hooman Sedghamiz: Okay, perfect. All right, so hopefully I can change the slides. Yes, as you said, first, thanks a lot for spending your day with me. I know it's Valentine's Day, at least here in the US people go crazy when it gets Valentine's. But I know probably a lot of you are in love with large language models, semantic search and all those sort of things, so it's great to have you here. Let me just start with the. I have a lot of slides, by the way, but maybe I can start with kind of some introduction about the company I work for, what these guys are doing and what we are doing at a life science company like Bayer, which is involved in really major humanity needs, right? So health and the food chain and like agriculture, we do three major kind of products or divisions in the company, mainly consumer halls, over the counter medication that probably a lot of you have taken, aspirin, all those sort of good stuff. And we have crop science division that works on ensuring that the yield is high for crops and the food chain is performing as it should, and also pharmaceutical side which is around treatment and prevention. Hooman Sedghamiz: So now you can imagine via is really important to us because it has the potential of unlocking a future where good health is a reality and hunger is a memory. So I maybe start about maybe giving you a hint of what are really the numerous use cases that AI or challenges that AI could help out with. In life science industry. You can think of adverse event detection when patients are taking a medication, too much of it. The patients might report adverse events, stomach bleeding and go to social media post about it. A few years back, it was really difficult to process automatically all this sort of natural text in a kind of scalable manner. But nowadays, thanks to large language models, it's possible to automate this and identify if there is a medication or anything that might have negatively an adverse event on a patient population. Similarly, you can now create a lot of marketing content using these large language models for products. 
Hooman Sedghamiz: At the same time, drug discovery is making really big strides when it comes to identifying new compounds. You can essentially describe these compounds using formats like smiles, which could be represented as real text. And these large language models can be trained on them and they can predict the sequences. At the same time, you have this clinical trial outcome prediction, which is huge for pharmaceutical companies. If you could predict what will be the outcome of a trial, it would be a huge time and resource saving for a lot of companies. And of course, a lot of us already see in the market a lot of medical virtual assistants using large language models that can answer medical inquiries and give consultations around them. And there is really, I believe the biggest potential here is around real world data, like most of us nowadays, have some sort of sensor or watch that's measuring our health maybe at a minute by minute level, or it's measuring our heart rate. You go to the hospital, you have all your medical records recorded there, and these large language models have their capacity to process this complex data, and you will be able to drive better insights for individualized insights for patients. Hooman Sedghamiz: And our company is also in crop science, as I mentioned, and crop yield prediction. If you could help farmers improve their crop yield, it means that they can produce better products faster with higher quality. So maybe I could start with maybe a history in 2023, what happened? How companies like ours were looking at large language models and opportunities. They bring, I think in 2023, everyone was excited to bring these efficiency games, right? Everyone wanted to use them for creating content, drafting emails, all these really low hanging fruit use cases. That was around. And one of the earlier really nice architectures that came up that I really like was from a 16 z enterprise that was, I think, back in really, really early 2023. LangChain was new, we had land chain and we had all this. Of course, Qdrant been there for a long time, but it was the first time that you could see vector store products could be integrated into applications. Hooman Sedghamiz: Really at large scale. There are different components. It's quite complex architecture. So on the right side you see how you can host large language models. On the top you see how you can augment them using external data. Of course, we had these plugins, right? So you can connect these large language models with Google search APIs, all those sort of things, and some validation that are in the middle that you could use to validate the responses fast forward. Maybe I can kind of spend, let me check out the time. Maybe I can spend a few minutes about the components of LLM APIs and hosting because that I think has a lot of potential in terms of applications that need to be really scalable. Hooman Sedghamiz: Just to give you some kind of maybe summary about my company, we have around 100,000 people in almost all over the world. Like the languages that people speak are so diverse. So it makes it really difficult to build an application that will serve 200,000 people. And it's kind of efficient. It's not really costly and all those sort of things. So maybe I can spend a few minutes talking about what that means and how kind of larger scale companies might be able to tackle that efficiently. So we have, of course, out of the box solutions, right? 
So you have Chat GPT already for enterprise, you have other copilots and for example from Microsoft and other companies that are offering, but normally they are seat based, right? So you kind of pay a subscription fee, like Spotify, you pay like $20 per month, $30 on average, somewhere between $20 to $60. And for a company, like, I was like, just if you calculate that for 3000 people, that means like 180,000 per month in subscription fees. Hooman Sedghamiz: And we know that most of the users won't use that. We know that it's a usage based application. You just probably go there. Depending on your daily work, you probably use it. Some people don't use it heavily. I kind of did some calculation. If you build it in house using APIs that you can access yourself, and large language models that corporations can deploy internally and locally, that cost saving could be huge, really magnitudes cheaper, maybe 30 to 20 to 30 times cheaper. So looking, comparing 2024 to 2023, a lot of things have changed. Hooman Sedghamiz: Like if you look at the open source large language models that came out really great models from Mistral, now we have models like Llama, two based model, all of these models came out. You can now kind of take a look and see that the performance of them is really, really getting close, if not better than GPT 3.5 already at same level and really approaching step by step to GPT 4. And looking at the price on the right side and speed or throughput, you can see that like for example, Mistral seven eight B could be a really cheap option to deploy. And also the performance of it gets really close to GPT 3.5 for many use cases in the enterprise companies. I think two of the big things this year, end of last year that came out that make this kind of really a reality are really a few large language models. I don't know if I can call them large language models. They are like 7 billion to 13 billion compared to GPT four, GT 3.5. I don't think they are really large. Hooman Sedghamiz: But one was Nexus Raven. We know that applications, if they want to be robust, they really need function calling. We are seeing this paradigm of function calling, which essentially you ask a language model to generate structured output, you give it a function signature, right? You ask it to generate an output, structured output argument for that function. Next was Raven came out last year, that, as you can see here, really is getting really close to GPT four, right? And GPT four being magnitude bigger than this model. This model only being 13 billion parameters really provides really less hallucination, but at the same time really high quality of function calling. So this makes me really excited for the open source and also the companies that want to build their own applications that requires function calling. That was really lacking maybe just five months ago. At the same time, we have really dedicated large language models to programming languages or scripting like SQL, that we are also seeing like SQL coder that's already beating GPT four. Hooman Sedghamiz: So maybe we can now quickly take a look at how model solving will look like for a large company like ours, like companies that have a lot of people across the globe again, in this aspect also, the community has made really big progress, right? So we have text generation inference from hugging face is open source for most purposes, can be used and it's the choice of mine and probably my group prefers this option. 
But we have Olama, which is great, a lot of people are using it. We have llama CPP which really optimizes the large language models for local deployment as well, and edge devices. I was really amazed seeing Raspberry PI running a large language model, right? Using Llama CPP. And you have this text generation inference that offers quantization support, continuous patching, all those sort of things that make these large LLMs more quantized or more compressed and also more suitable for deployment to large group of people. Maybe I can kind of give you kind of a quick summary of how, if you decide to deploy these large language models, what techniques you could use to make them more efficient, cost friendly and more scalable. So we have a lot of great open source projects like we have Lite LLM which essentially creates an open AI kind of signature on top of your large language models that you have deployed. Let's say you want to use Azure to host or to access GPT four gypty 3.5 or OpenAI to access OpenAI API. Hooman Sedghamiz: To access those, you could put them behind Lite LLM. You could have models using hugging face that are deployed internally, you could put lightlm in front of those, and then your applications could just use OpenAI, Python SDK or anything to call them naturally. And then you could simply do load balancing between those. Of course, we have also, as I mentioned, a lot of now serving opportunities for deploying those models that you can accelerate. Semantic caching is another opportunity for saving cost. Like for example, if you have cute rent, you are storing the conversations. You could semantically check if the user has asked similar questions and if that question is very similar to the history, you could just return that response instead of calling the large language model that can create costs. And of course you have line chain that you can summarize conversations, all those sort of things. Hooman Sedghamiz: And we have techniques like prompt compression. So as I mentioned, this really load balancing can offer a lot of opportunities for scaling this large language model. As you know, a lot of offerings from OpenAI APIs or Microsoft Azure, they have rate limits, right? So you can't call those models extensively. So what you could do, you could have them in multiple regions, you can have multiple APIs, local TGI deployed models using hugging face TGI or having Azure endpoints and OpenAI endpoints. And then you could use light LLM to load balance between these models. Once the users get in. Right. User one, you send the user one to one deployment, you send the user two requests to the other deployment. Hooman Sedghamiz: So this way you can really scale your application to large amount of users. And of course, we have these opportunities for applications called Lorex that use Lora. Probably a lot of you have heard of like very efficient way of fine tuning these models with fewer number of parameters that we could leverage to have really individualized models for a lot of applications. And you can see the costs are just not comparable if you wanted to use, right. So at GPT 3.5, even in terms of performance and all those sort of things, because you can use really small hardware GPU to deploy thousands of Lora weights or adapters, and then you will be able to serve a diverse set of models to your users. 
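The semantic caching idea Hooman mentions, reusing a previous answer when a new question is close enough to one already answered, can be sketched with the Qdrant Python client. The `embed` and `call_llm` callables, the collection name, and the 0.90 similarity threshold are assumptions for illustration rather than a production recipe.

```python
# Minimal semantic-cache sketch: look up the question in a cache collection,
# return a stored answer on a close match, otherwise call the LLM and cache it.
import uuid

from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")
CACHE = "semantic_cache"  # hypothetical collection of (question vector, answer)

client.recreate_collection(
    collection_name=CACHE,
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

def answer(question: str, embed, call_llm, threshold: float = 0.90) -> str:
    vector = embed(question)  # embed() is an assumed embedding function
    hits = client.search(collection_name=CACHE, query_vector=vector, limit=1)
    if hits and hits[0].score >= threshold:
        return hits[0].payload["answer"]   # cache hit: skip the LLM call entirely
    response = call_llm(question)          # cache miss: pay for generation once
    client.upsert(
        collection_name=CACHE,
        points=[models.PointStruct(
            id=str(uuid.uuid4()),
            vector=vector,
            payload={"question": question, "answer": response},
        )],
    )
    return response
```

The threshold controls the trade-off between cost savings and the risk of returning a stale or slightly off-topic answer, so it should be tuned per use case.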
I think one really important part of these kind of applications is the part that you add contextual data, you add augmentation to make them smarter and to make them more up to date. So, for example, in healthcare domain, a lot of Americans already don't have high trust in AI when it comes to decision making in healthcare. So that's why augmentation of data or large language models is really, really important for bringing trust and all those sort of state of the art knowledge to this large language model. Hooman Sedghamiz: For example, if you ask about cancer or rededicated questions that need to build on top of scientific knowledge, it's very important to use those. Augmented or retrieval augmented generation. No, sorry, go next. Jumped on one. But let me see. I think I'm missing a slide, but yeah, I have it here. So going through this kind of, let's say retrieval augmented generation, different parts of it. You have, of course, these vector stores that in 2024, I see explosion of vector stores. Hooman Sedghamiz: Right. So there is this really great vector DB comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023 was only a few. And what I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline. And I think we were talking before this talk together that ETL is not something that is taken seriously. So far. We have a lot of embedding models that are coming out probably on a weekly basis. Hooman Sedghamiz: We have great embedding models that are open source, BgEM. Three is one that is multilingual, 100 plus languages. You could embed text in those languages. We have a lot of vector stores, but we don't have really ETL tools, right? So we have maybe a few airbytes, right? How can you reindex data efficiently? How can you parse scientific articles? Like imagine I have an image here, we have these articles or archive or on a pubmed, all those sort of things that have images and complex structure that our parsers are not able to parse them efficiently and make sense of them so that you can embed them really well. And really doing this Internet level, scientific level retrieval is really difficult. And no one I think is still doing it at scale. I just jumped, I have a love slide, maybe I can jump to my last and then we can pause there and take in some questions. Where I see 2014 and beyond, beyond going for large language models for enterprises, I see assistance, right? I see assistance for personalized assistance, for use cases coming out, right? So these have probably four components. Hooman Sedghamiz: You have even a personalized large language model that can learn from the history of your conversation, not just augmented. Maybe you can fine tune that using Laura and all those techniques. You have the knowledge that probably needs to be customized for your assistant and integrated using vector stores and all those sort of things, technologies that we have out, you know, plugins that bring a lot of plugins, some people call them skills, and also they can cover a lot of APIs that can bring superpowers to the large language model and multi agent setups. Right? We have autogen, a lot of cool stuff that is going on. The agent technology is getting really mature now as we go forward. We have langraph from Langchain that is bringing a lot of more stabilized kind of agent technology. 
And then you can think of that as for companies building all these kind of like App Stores or assistant stores that use cases, store there. And the colleagues can go there, search. Hooman Sedghamiz: I'm looking for this application. That application is customized for them, or even they can have their own assistant which is customized to them, their own large language model, and they could use that to bring value. And then even a nontechnical person could create their own assistant. They could attach the documents they like, they could select the plugins they like, they'd like to be connected to, for example, archive, or they need to be connected to API and how many agents you like. You want to build a marketing campaign, maybe you need an agent that does market research, one manager. And then you build your application which is customized to you. And then based on your feedback, the large language model can learn from your feedback as well. Going forward, maybe I pause here and then we can it was a bit longer than I expected, but yeah, it's all good, man. Demetrios: Yeah, this is cool. Very cool. I appreciate you going through this, and I also appreciate you coming from the past, from 2014 and talking about what we're going to do in 2024. That's great. So one thing that I want to dive into right away is the idea of ETL and why you feel like that is a bit of a blocker and where you think we can improve there. Hooman Sedghamiz: Yeah. So I think there has been concentration around vector stores. Right. So a lot of startups that have appeared around vector store idea, but I think what really is lacking tools that you have a lot of sources of knowledge, information. You have your Gmail, if you use outlook, if you use scientific knowledge, like sources like archive. We really don't have any startup that I hear that. Okay. I have a platform that offers real time retrieval from archive papers. Hooman Sedghamiz: And you want to ask a question, for example, about transformers. It can do retrieval, augmented generation over all archive papers in real time as they get added for you and brings back the answer to you. We don't have that. We don't have these syncing tools. You can of course, with tricks you can maybe build some smart solutions, but I haven't seen many kind of initiatives around that. And at the same time, we have this paywall knowledge. So we have these nature medicine amazing papers which are paywall. We can access them. Hooman Sedghamiz: Right. So we can build rag around them yet, but maybe some startups can start coming up with strategies, work with this kind of publishing companies to build these sort of things. Demetrios: Yeah, it's almost like you're seeing it not as the responsibility of nature or. Hooman Sedghamiz: Maybe they can do it. Demetrios: Yeah, they can potentially, but maybe that's not their bread and butter and so they don't want to. And so how do startups get in there and take some of this paywalled information and incorporate it into their product? And there is another piece that you mentioned on, just like when it comes to using agents, I wonder, have you played around with them a lot? Have you seen their reliability get better? Because I'm pretty sure a lot of us out there have tried to mess around with agents and maybe just like blown a bunch of money on GPT, four API calls. And it's like this thing isn't that stable. What's going on? So do you know something that we don't? Hooman Sedghamiz: I think they have become much, much more stable. 
If you look back in 2023, like June, July, they were really new, like auto GPT. We had all these new projects came out, really didn't work out as you say, they were not stable. But I would say by the end of 2023, we had really stable frameworks, for example, customized solutions around agent function calling. I think when function calling came out, the capability that you could provide signature or dot string of, I don't know, a function and you could get back the response really reliably. I think that changed a lot. And Langchen has this OpenAI function calling agent that works with some measures. I mean, of course I wouldn't say you could automate 100% something, but for a knowledge, kind of. Hooman Sedghamiz: So for example, if you have an agent that has access to data sources, all those sort of things, and you ask it to go out there, see what are the latest clinical trial design trends, it can call these tools, it can reliably now get you answer out of ten times, I would say eight times, it works. Now it has become really stable. And what I'm excited about is the latest multi agent scenarios and we are testing them. They are very promising. Right? So you have autogen from Microsoft platform, which is open source, and also you have landgraph from Langchain, which I think the frameworks are becoming really stable. My prediction is between the next few months is lots of, lots of applications will rely on agents. Demetrios: So you also mentioned how to recognize if a project is winning or losing type thing. And considering there are so many areas that you can plug in AI, especially when you're looking at buyer and all the different places that you can say, oh yeah, we could add some AI to this. How are you setting up metrics so, you know, what is worth it to continue investing into versus what maybe sounded like a better idea, but in practice it wasn't actually that good of an idea. Hooman Sedghamiz: Yeah, depends on the platform that you're building. Right? So where we started back in 2023, the platform was aiming for efficiency, right? So how can you make our colleagues more efficient? They can be faster in their daily work, like really delegate this boring stuff, like if you want to summarize or you want to create a presentation, all those sort of things, and you have measures in place that, for example, you could ask, okay, now you're using this platform for months. Let us know how many hours you're saving during your daily work. And really we could see the shift, right? So we did a questionnaire and I think we could see a lot of shift in terms of saving hours, daily work, all those sort of things that is measurable. And it's like you could then convert it, of course, to the value that brings for the enterprise on the company. And I think the biggest, I think the untapped potential, it goes back to when you can do scientific discovery and all those sort of applications which are more challenging, not just around the efficiency and all those sort of things. And then you need to really, if you're building a product, if it's not the general product. And for example, let's say if you're building a natural language to SQL, let's say you have a database. Hooman Sedghamiz: It was a relational database. You want to build an application that searches cars in the background. The customers go there and ask, I'm looking for a BMW 2013. It uses qudrant in the back, right. It kind of does semantic search, all these cool things and returns the response. 
I think then you need to have really good measures to see how satisfied your customers are when you're integrating a kind of generative application on top of your website that's selling cars. So measuring this in a kind of, like, cyclic manner, people are not going to be happy because you start that there are a lot of things that you didn't count for. You measure all those kind of metrics and then you go forward, you improve your platform. Demetrios: Well, there's also something else that you mentioned, and it brought up this thought in my mind, which is undoubtedly you have these low hanging fruit problems, and it's mainly based on efficiency gains. Right. And so it's helping people extract data from pdfs or what be it, and you're saving time there. You're seeing that you're saving time, and it's a fairly easy setup. Right. But then you have moonshots, I would imagine, like creating a whole new type of aspirin or tylenol or whatever it is, and that is a lot more of an investment of time and energy and infrastructure and everything along those lines. How do you look at both of these and say, we want to make sure that we make headway in both directions. And I'm not sure if you have unlimited resources to be able to just do everything or if you have to recognize what the trade offs are and how you measure those types of metrics. Demetrios: Again, in seeing where do we invest and where do we cut ties with different initiatives. Hooman Sedghamiz: Yeah. So that's a great question. So for product development, like the example that you made, there are really a lot of stages involved. Right. So you start from scientific discovery stage. So I can imagine that you can have multiple products along the way to help out. So if you have a product already out there that you want to generate insights and see. Let's say you have aspirin out there. Hooman Sedghamiz: You want to see if it is also helpful for cardiovascular problems that patients might have. So you could build a sort of knowledge discovery tool that could search for you, give it a name of your product, it will go out there, look into pubmed, all these articles that are being published, brings you back the results. Then you need to have really clear metrics to see if this knowledge discovery platform, after a few months is able to bring value to the customers or the stakeholders that you build the platform for. We have these experts that are really experts in their own field. Takes them really time to go read these articles to make conclusions or answer questions about really complex topic. I think it's really difficult based on the initial feedback we see, it helps, it helps save them time. But really I think it goes back again to the ETL problem that we still don't have your paywall. We can't access a lot of scientific knowledge yet. Hooman Sedghamiz: And these guys get a little bit discouraged at the beginning because they expect that a lot of people, especially non technical, say like you go to Chat GPT, you ask and it brings you the answer, right? But it's not like that. It doesn't work like that. But we can measure it, we can see improvements, they can access knowledge faster, but it's not comprehensive. That's the problem. It's not really deep knowledge. And I think the companies are still really encouraging developing these platforms and they can see that that's a developing field. Right. So it's very hard to give you a short answer, very hard to come up with metrics that gives you success of failure in a short term time period. 
Demetrios: Yeah, I like the creativity that you're talking about there though. That is like along this multistepped, very complex product creation. There are potential side projects that you can do that show and prove value along the way, and they don't necessarily need to be as complex as that bigger project. Hooman Sedghamiz: True. Demetrios: Sweet, man. Well, this has been awesome. I really appreciate you coming on here to the vector space talks for anyone that would like to join us and you have something cool to present. We're always open to suggestions. Just hit me up and we will make sure to send you some shirt or whatever kind of swag is on hand. Remember, all you astronauts out there, don't get lost in vector space. This has been another edition of the Qdrant vector space talks with Hooman, my man, on Valentine's Day. I can't believe you decided to spend it with me. Demetrios: I appreciate it. Hooman Sedghamiz: Thank you. Take care.
qdrant-landing/content/blog/introducing-the-quaterion-a-framework-for-fine-tuning-similarity-learning-models.md
---
draft: true
preview_image: /blog/from_cms/new-cmp-demo.gif
sitemapExclude: true
title: "Introducing Quaterion: a framework for fine-tuning similarity learning models"
slug: quaterion
short_description: Please meet Quaterion, a framework for training and fine-tuning similarity learning models.
description: We're happy to share the result of the work we've been doing over the last months - Quaterion. It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and cost-efficient.
date: 2022-06-28T12:48:36.622Z
author: Andrey Vasnetsov
featured: true
author_link: https://www.linkedin.com/in/andrey-vasnetsov-75268897/
tags:
  - Corporate news
  - Release
  - Quaterion
  - PyTorch
categories:
  - News
  - Release
  - Quaterion
---
We're happy to share the result of the work we've been doing over the last months: [Quaterion](https://quaterion.qdrant.tech/). It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and cost-efficient.

To develop Quaterion, we built on PyTorch Lightning, a high-performance, research-friendly approach to constructing training loops for ML models.

![quaterion](/blog/from_cms/new-cmp-demo.gif)

The framework empowers vector search [solutions](/solutions/), such as semantic search, anomaly detection, and others, through an advanced caching mechanism, specially designed head layers for pre-trained models, and high flexibility for customization, from quick experiments to large-scale training pipelines.

Read why similarity learning is preferable to the traditional machine learning approach and how Quaterion can help: <https://quaterion.qdrant.tech/getting_started/why_quaterion.html#why-quaterion>

Get a quick start with Quaterion: <https://quaterion.qdrant.tech/getting_started/quick_start.html>

And try it and give us a star on GitHub :) <https://github.com/qdrant/quaterion>
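For readers new to the idea, here is a generic sketch of what "head layers on top of a pre-trained model" means in similarity learning: base embeddings from a frozen encoder are computed once (which is what makes caching pay off), and only a small head is trained with a pairwise objective. This is plain PyTorch for illustration and is not Quaterion's actual API; the embedding size, head architecture, and training data are assumptions, so see the quick start linked above for the real interface.

```python
# Generic illustration of similarity fine-tuning with a trainable head on top
# of cached, frozen encoder outputs. Not Quaterion's API.
import torch
from torch import nn

EMB_DIM = 384                       # assumed size of the frozen encoder output
head = nn.Sequential(nn.Linear(EMB_DIM, 256), nn.ReLU(), nn.Linear(256, 128))
loss_fn = nn.CosineEmbeddingLoss(margin=0.2)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# Cached base embeddings for pairs, plus +1 (similar) / -1 (dissimilar) labels.
emb_a = torch.randn(32, EMB_DIM)    # stand-in for cached encoder outputs
emb_b = torch.randn(32, EMB_DIM)
labels = torch.randint(0, 2, (32,)) * 2 - 1  # values in {-1, +1}

for _ in range(10):                 # tiny training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(head(emb_a), head(emb_b), labels.float())
    loss.backward()
    optimizer.step()
```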
qdrant-landing/content/blog/iris-agent-qdrant.md
---
title: "IrisAgent and Qdrant: Redefining Customer Support with AI"
draft: false
slug: iris-agent-qdrant
short_description: Pushing the boundaries of AI in customer support
description: Learn how IrisAgent leverages Qdrant for RAG to automate support and improve resolution times, transforming customer service
preview_image: /case-studies/iris/irisagent-qdrant.png
date: 2024-03-06T07:45:34-08:00
author: Manuel Meyer
featured: false
tags:
- news
- blog
- irisagent
- customer support
weight: 0 # Change this weight to change order of posts
# For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog
---

Artificial intelligence is evolving customer support, offering unprecedented capabilities for automating interactions, understanding user needs, and enhancing the overall customer experience. [IrisAgent](https://irisagent.com/), founded by former Google product manager [Palak Dalal Bhatia](https://www.linkedin.com/in/palakdalal/), demonstrates the concrete impact of AI on customer support with its AI-powered customer support automation platform. Bhatia describes IrisAgent as “the system of intelligence which sits on top of existing systems of records like support tickets, engineering bugs, sales data, or product data,” with the main objective of leveraging AI and generative AI to automatically detect the intent and tags behind customer support tickets, reply to a large number of support tickets and chats, improve the time to resolution, and increase the deflection rate of support teams. Ultimately, IrisAgent enables support teams to do more with less and be more effective in helping customers.

## The Challenge

Throughout her career, Bhatia noticed a lot of manual and inefficient processes in support teams, paired with information silos between important functions like customer support, product management, engineering teams, and sales teams. These silos typically prevent support teams from accurately solving customers’ pain points, as they are only able to access a fraction of the internal knowledge and don’t get the relevant information and insights that other teams have. IrisAgent is addressing these challenges with AI and GenAI by generating meaningful customer experience insights into the root cause of specific customer escalations or churn. “The platform allows support teams to gather these cross-functional insights and connect them to a single view of customer problems,” Bhatia says.

Additionally, IrisAgent facilitates the automation of mundane and repetitive support processes. In the past, these tasks were difficult to automate effectively due to the limitations of early AI technologies. Support functions often depended on rudimentary solutions like legacy decision trees, which suffered from a lack of scalability and robustness, primarily relying on simplistic keyword matching. However, advancements in AI and GenAI technologies have now enabled more sophisticated and efficient automation of these support processes.

## The Solution

“IrisAgent provides a very holistic product profile, as we are the operating system for support teams,” Bhatia says. The platform includes features like omni-channel customer support automation, which integrates with other parts of the business, such as engineering or sales platforms, to really understand customer escalation points. Long before the advent of technologies such as ChatGPT, IrisAgent had already been refining and advancing their AI and ML stack.
This has enabled them to develop a comprehensive range of machine learning models, including both proprietary solutions and those built on cloud technologies. Through this advancement, IrisAgent was able to fine-tune on public and private customer data to achieve the level of accuracy that is needed to successfully deflect and resolve customer issues at scale.

![Iris GPT info](/blog/iris-agent-qdrant/iris_gpt.png)

Since IrisAgent built out a lot of their AI related processes in-house with proprietary technology, they wanted to find ways to augment these capabilities with RAG technologies and vector databases. This strategic move was aimed at abstracting much of the technical complexity, thereby simplifying the process for engineers and data scientists on the team to interact with data and develop a variety of solutions built on top of it.

![Quote from CEO of IrisAgent](/blog/iris-agent-qdrant/iris_ceo_quote.png)

“We were looking at a lot of vector databases in the market and one of our core requirements was that the solution needed to be open source because we have a strong emphasis on data privacy and security,” Bhatia says. Also, performance played a key role for IrisAgent during their evaluation, as Bhatia mentions: “Despite it being a relatively new project at the time we tested Qdrant, the performance was really good.” Additional evaluation criteria were the ease of deployment, future maintainability, and the quality of available documentation.

Ultimately, IrisAgent decided to build with Qdrant as their vector database of choice, given these reasons:

* **Open Source and Flexibility**: IrisAgent required a solution that was open source, to align with their data security needs and preference for self-hosting. Qdrant's open-source nature allowed IrisAgent to deploy it on their cloud infrastructure seamlessly.
* **Performance**: Early on, IrisAgent recognized Qdrant's superior performance, despite its relative newness in the market. This performance aspect was crucial for handling large volumes of data efficiently.
* **Ease of Use**: Qdrant's user-friendly SDKs and compatibility with major programming languages like Go and Python made it an ideal choice for IrisAgent's engineering team. Additionally, IrisAgent values Qdrant’s solid documentation, which is easy to follow.
* **Maintainability**: IrisAgent prioritized future maintainability in their choice of Qdrant, notably valuing the robustness and efficiency Rust provides, ensuring a scalable and future-ready solution.

## Optimizing IrisAgent's AI Pipeline: The Evaluation and Integration of Qdrant

IrisAgent utilizes comprehensive testing and sandbox environments, ensuring no customer data is used during the testing of new features. Initially, they deployed Qdrant in these environments to evaluate its performance, leveraging their own test data and employing Qdrant’s console and SDK features to conduct thorough data exploration and apply various filters. The primary languages used in these processes are Go, for its efficiency, and Python, for its strength in data science tasks. After the successful testing, Qdrant's outputs are now integrated into IrisAgent’s AI pipeline, enhancing a suite of proprietary AI models designed for tasks such as detecting hallucinations and similarities, and classifying customer intents. With Qdrant, IrisAgent saw significant performance and quality gains for their RAG use cases. Beyond this, IrisAgent also performs further fine-tuning in the development process.
Qdrant’s emphasis on open-source technology and support for main programming languages (Go and Python) ensures ease of use and compatibility with IrisAgent’s production environment. IrisAgent is deploying Qdrant on Google Cloud in order to fully leverage Google Cloud's robust infrastructure and innovative offerings. ![Iris agent flow chart](/blog/iris-agent-qdrant/iris_agent_flow_chart.png) ## Future of IrisAgent Looking ahead, IrisAgent is committed to pushing the boundaries of AI in customer support, with ambitious plans to evolve their product further. The cornerstone of this vision is a feature that will allow support teams to leverage historical support data more effectively, by automating the generation of knowledge base content to redefine how FAQs and product documentation are created. This strategic initiative aims not just to reduce manual effort but also to enrich the self-service capabilities of users. As IrisAgent continues to refine its AI algorithms and expand its training datasets, the goal is to significantly elevate the support experience, making it more seamless and intuitive for end-users.
qdrant-landing/content/blog/neural-search-tutorial.md
---
draft: true
title: Neural Search Tutorial
slug: neural-search-tutorial
short_description: Neural Search Tutorial
description: Step-by-step guide on how to build a neural search service.
preview_image: /blog/from_cms/1_vghoj7gujfjazpdmm9ebxa.webp
date: 2024-01-05T14:09:57.544Z
author: Andrey Vasnetsov
featured: false
tags: []
---

Step-by-step guide on how to build a neural search service.

![](/blog/from_cms/1_yoyuyv4zrz09skc8r6_lta.webp "How to build a neural search service with BERT + Qdrant + FastAPI")

Information retrieval technology is one of the main technologies that enabled the modern Internet to exist. These days, search technology is the heart of a variety of applications, from web page search to product recommendations. For many years, this technology didn’t change much until neural networks came into play.

In this tutorial we are going to find answers to these questions:

* What is the difference between regular and neural search?
* What neural networks could be used for search?
* In what tasks is neural network search useful?
* How to build and deploy your own neural search service step-by-step?

**What is neural search?**

A regular full-text search, such as Google’s, consists of searching for keywords inside a document. For this reason, the algorithm can not take into account the real meaning of the query and documents. Many documents that might be of interest to the user are not found because they use different wording.

Neural search tries to solve exactly this problem — it attempts to enable searches not by keywords but by meaning. To achieve this, the search works in 2 steps. In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called *embeddings*. The encoder must be trained so that similar objects, such as texts with the same meaning or similar pictures, get a close vector representation.

![](/blog/from_cms/1_vghoj7gujfjazpdmm9ebxa.webp "Neural encoder places cats closer together")

Having this vector representation, it is easy to understand what the second step should be. To find documents similar to the query you now just need to find the nearest vectors. The most convenient way to determine the distance between two vectors is to calculate the cosine distance. The usual Euclidean distance can also be used, but it is not so efficient due to the [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).

**Which model could be used?**

It is ideal to use a model specially trained to determine the closeness of meanings. For example, models trained on Semantic Textual Similarity (STS) datasets. Current state-of-the-art models can be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining).

However, not only specially trained models can be used. If the model is trained on a large enough dataset, its internal features can work as embeddings too. So, for instance, you can take any model pre-trained on ImageNet and cut off the last layer from it. In the penultimate layer of the neural network, as a rule, the highest-level features are formed, which, however, do not correspond to specific classes. The output of this layer can be used as an embedding.

**What tasks is neural search good for?**

Neural search has the greatest advantage in areas where the query cannot be formulated precisely.
Querying a table in a SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions — neural search can help you.

If the search query is a picture, sound file or long text, neural network search is almost the only option.

If you want to build a recommendation system, the neural approach can also be useful. The user’s actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions.

**Let’s build our own**

With all that said, let’s make our neural network search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better.

I will use data from [startups-list.com](https://www.startups-list.com/). Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json).

**Prepare data for neural search**

To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model. As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity.

One of the easiest libraries to work with pre-trained language models, in my opinion, is the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task, it is quite enough.

We will use a model called **`distilbert-base-nli-stsb-mean-tokens`**. DistilBERT means that the size of this model has been reduced by a special technique compared to the original BERT. This is important for the speed of our service and its demand for resources. The word `stsb` in the name means that the model was trained for the Semantic Textual Similarity task.

The complete code for data preparation with detailed comments can be found and run in [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing).

![](/blog/from_cms/1_lotmmhjfexth1ucmtuhl7a.webp)
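If you prefer to stay in the article, here is a minimal sketch of this preparation step. The file name `startups.json` and the `description` field are assumptions for illustration (the raw dataset linked above may use slightly different field names), and the complete, tested version lives in the linked Colab notebook.

```python
# A sketch of the data preparation step. Assumed file and field names;
# the complete version is in the Colab notebook linked above.
import json

import numpy as np
from sentence_transformers import SentenceTransformer

# Load the same model that will be used later in the search service
model = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")

# Read one startup record per line and keep only the text we want to encode
with open("startups.json") as fd:
    descriptions = [json.loads(line)["description"] for line in fd]

# Encode all descriptions into 768-dimensional vectors
vectors = model.encode(descriptions, show_progress_bar=True)

# Save the vectors; they will be uploaded to Qdrant in the next step
np.save("startup_vectors.npy", vectors, allow_pickle=False)
```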
**Vector search engine**

Now as we have a vector representation for all our records, we need to store them somewhere. In addition to storing, we may also need to add or delete a vector, or save additional information with the vector. And most importantly, we need a way to search for the nearest vectors.

The vector search engine can take care of all these tasks. It provides a convenient API for searching and managing vectors. In our tutorial we will use the [Qdrant](/) vector search engine. It not only supports all necessary operations with vectors but also allows you to store additional payload along with vectors and use it to perform filtering of the search result. Qdrant has a client for Python and also defines the API schema if you need to use it from other languages.

The easiest way to use Qdrant is to run a pre-built image. So make sure you have Docker installed on your system.

To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant).

Download the image from [DockerHub](https://hub.docker.com/r/generall/qdrant):

```bash
docker pull qdrant/qdrant
```

And run the service inside Docker:

```bash
docker run -p 6333:6333 \
    -v $(pwd)/qdrant_storage:/qdrant/storage \
    qdrant/qdrant
```

You should see output like this

```
...
[...] Starting 12 workers
[...] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333
```

This means that the service is successfully launched and listening on port 6333. To make sure, open <http://localhost:6333/> in your browser; you should see the Qdrant version info.

All data uploaded to Qdrant is saved into the `./qdrant_storage` directory and will be persisted even if you recreate the container.

**Upload data to Qdrant**

Now once we have the vectors prepared and the search engine running, we can start uploading the data. To interact with Qdrant from Python, I recommend using an out-of-the-box client library.
To install it, use the following command

`pip install qdrant-client`

At this point, we should have startup records in the file `startups.json`, encoded vectors in the file `startup_vectors.npy`, and Qdrant running on a local machine. Let’s write a script to upload all startup data and vectors into the search engine.

First, let’s create a client object for Qdrant.

```python
# Import client library
from qdrant_client import QdrantClient
from qdrant_client import models

qdrant_client = QdrantClient(host='localhost', port=6333)
```

Qdrant allows you to combine vectors of the same purpose into collections. Many independent vector collections can exist on one service at the same time.

Let’s create a new collection for our startup vectors.

```python
qdrant_client.recreate_collection(
    collection_name='startups',
    vectors_config=models.VectorParams(size=768, distance="Cosine")
)
```

The `recreate_collection` function first tries to remove an existing collection with the same name. This is useful if you are experimenting and running the script several times.

The `size` parameter is very important. It tells the service the size of the vectors in that collection. All vectors in a collection must have the same size, otherwise, it is impossible to calculate the distance between them. `768` is the output dimensionality of the encoder we are using.

The `distance` parameter allows specifying the function used to measure the distance between two points.

The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit in a single computer's memory, the function takes an iterator over the data as input.

Let’s create an iterator over the startup data and vectors.

```python
import numpy as np
import json

fd = open('./startups.json')

# payload is now an iterator over startup data
payload = map(json.loads, fd)

# Here we load all vectors into memory; a numpy array works as an iterable itself.
# Another option would be to use mmap, if we don't want to load all data into RAM
vectors = np.load('./startup_vectors.npy')

# And the final step - data uploading
qdrant_client.upload_collection(
    collection_name='startups',
    vectors=vectors,
    payload=payload,
    ids=None,  # Vector ids will be assigned automatically
    batch_size=256  # How many vectors will be uploaded in a single request?
)
```

Now we have the vectors uploaded to the vector search engine. In the next step, we will learn how to actually search for the closest vectors.

The full code for this step can be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_vector_search_index.py).

**Make a search API**

Now that all the preparations are complete, let’s start building a neural search class.

First, install all the requirements:

`pip install sentence-transformers numpy`

In order to process incoming requests, the neural search class will need two things: a model to convert the query into a vector, and a Qdrant client to perform search queries.
```python
# File: neural_searcher.py

from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer


class NeuralSearcher:

    def __init__(self, collection_name):
        self.collection_name = collection_name
        # Initialize encoder model
        self.model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens', device='cpu')
        # Initialize Qdrant client
        self.qdrant_client = QdrantClient(host='localhost', port=6333)

    # The search function looks as simple as possible:
    def search(self, text: str):
        # Convert text query into vector
        vector = self.model.encode(text).tolist()

        # Use `vector` to search for the closest vectors in the collection
        search_result = self.qdrant_client.search(
            collection_name=self.collection_name,
            query_vector=vector,
            query_filter=None,  # We don't want any filters for now
            top=5  # 5 closest results is enough
        )
        # `search_result` contains found vector ids with similarity scores along with the stored payload
        # In this function we are interested in payload only
        payloads = [hit.payload for hit in search_result]
        return payloads
```

With Qdrant it is also feasible to add some conditions to the search. For example, if we wanted to search for startups in a certain city, we could pass a payload filter along with the query vector; a sketch of such a filtered query is included at the end of this tutorial.

We now have a class for making neural search queries. Let’s wrap it up into a service.

**Deploy as a service**

To build the service we will use the FastAPI framework. It is super easy to use and requires minimal code writing.

To install it, use the command

`pip install fastapi uvicorn`

Our service will have only one API endpoint, which accepts a search query and returns the payloads of the closest startups; a sketch of such a service is also included at the end of this tutorial.

Now, if you run the service with `python service.py` and open [http://localhost:8000/docs](http://localhost:8000/docs), you should be able to see a debug interface for your service.

![](/blog/from_cms/1_f4gzrt6rkyqg8xvjr4bdtq-1-.webp "FastAPI Swagger interface")

Feel free to play around with it, make queries and check out the results. This concludes the tutorial.

**Online Demo**

The described code is the core of this [online demo](https://demo.qdrant.tech/). You can try it to get an intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn neural search on and off to compare the result with regular full-text search.

Try using a startup description to find similar ones.

**Conclusion**

In this tutorial, I have tried to give minimal information about neural search, but enough to start using it. Many potential applications are not mentioned here; there is a lot of space to go further into the subject.

Subscribe to my [telegram channel](https://t.me/neural_network_engineering), where I talk about neural networks engineering, publish other examples of neural networks and neural search applications. Subscribe to the [Qdrant user’s group](https://discord.gg/tdtYvXjC4h) if you want to be updated on the latest Qdrant news and features.
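As referenced above, here is a minimal sketch of the two missing pieces: a city-filtered search and the single-endpoint FastAPI service. The `city` payload field and the file name are assumptions for illustration, and the filter classes and the `limit` parameter follow a recent `qdrant-client` release (the `NeuralSearcher` class above uses `top`, the older name of the same parameter), so adjust the names to the client version you actually run.

```python
# File: service.py (sketch)
# Assumes neural_searcher.py from above and a `city` field in the payload (assumed name).
from fastapi import FastAPI
from qdrant_client import models

from neural_searcher import NeuralSearcher

app = FastAPI()

# One searcher instance is enough; the model is loaded only once
neural_searcher = NeuralSearcher(collection_name='startups')


def search_in_city(searcher: NeuralSearcher, text: str, city: str):
    # Same search as before, but restricted to startups from a given city
    vector = searcher.model.encode(text).tolist()
    city_filter = models.Filter(
        must=[
            models.FieldCondition(
                key='city',  # payload field name (assumed)
                match=models.MatchValue(value=city)
            )
        ]
    )
    search_result = searcher.qdrant_client.search(
        collection_name=searcher.collection_name,
        query_vector=vector,
        query_filter=city_filter,
        limit=5
    )
    return [hit.payload for hit in search_result]


@app.get("/api/search")
def search_startup(q: str):
    # The single endpoint of the service
    return {"result": neural_searcher.search(text=q)}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```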
qdrant-landing/content/blog/new-0-7-update-of-the-qdrant-engine-went-live.md
---
draft: true
title: New 0.7.0 update of the Qdrant engine went live
slug: qdrant-0-7-0-released
short_description: Qdrant v0.7.0 engine has been released
description: Qdrant v0.7.0 engine has been released
preview_image: /blog/from_cms/v0.7.0.png
date: 2022-04-13T08:57:07.604Z
author: Alyona Kavyerina
author_link: https://www.linkedin.com/in/alyona-kavyerina/
featured: true
categories:
- News
- Release update
tags:
- Corporate news
- Release
sitemapExclude: True
---

We've released a new version of the Qdrant neural search engine. Let's see what's new in update 0.7.0.

* The 0.7 engine now supports JSON as a payload.
* It brings back a previously missing API: the Alias API is now available in gRPC.
* It provides new filtering conditions after a refactoring: bool, IsEmpty, and ValuesCount filters are available.
* It has a lot of improvements regarding geo payload indexing, HNSW performance, and more.

Read detailed release notes on [GitHub](https://github.com/qdrant/qdrant/releases/tag/v0.7.0).

Stay tuned for new updates.\
If you have any questions or need support, join our [Discord](https://discord.com/invite/tdtYvXjC4h) community.
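To make the new conditions a bit more concrete, here is a minimal sketch of how IsEmpty and ValuesCount filters can be expressed from Python. The collection and field names are made up, and the syntax follows a recent `qdrant-client` release, which may differ from the client that was available at the time of 0.7.0.

```python
# Illustrative sketch only: made-up collection and field names,
# syntax from a recent qdrant-client release.
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

points, _next_page = client.scroll(
    collection_name="articles",
    scroll_filter=models.Filter(
        must=[
            # Keep only points whose `comments` array holds more than two values
            models.FieldCondition(
                key="comments",
                values_count=models.ValuesCount(gt=2),
            ),
        ],
        must_not=[
            # Drop points whose `summary` payload field is empty or missing
            models.IsEmptyCondition(
                is_empty=models.PayloadField(key="summary"),
            ),
        ],
    ),
    limit=10,
)
```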
qdrant-landing/content/blog/open-source-vector-search-engine-and-vector-database.md
--- draft: false title: Open Source Vector Search Engine and Vector Database - Andrey Vasnetsov slug: open-source-vector-search-engine-vector-database short_description: CTO of Qdrant Andrey talks about Vector search engines and the technical facets and challenges encountered in developing an open-source vector database. description: Andrey Vasnetsov, CTO and Co-founder of Qdrant, presents an in-depth look into the intricacies of their open-source vector search engine and database, detailing its optimized architecture, data structure challenges, and innovative filtering techniques for efficient vector similarity searches. preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png date: 2024-01-10T16:04:57.804Z author: Demetrios Brinkmann featured: false tags: - Qdrant - Vector Search Engine - Vector Database --- > *"For systems like Qdrant, scalability and performance in my opinion, is much more important than transactional consistency, so it should be treated as a search engine rather than database."*\ -- Andrey Vasnetsov > Discussing core differences between search engines and databases, Andrey underlined the importance of application needs and scalability in database selection for vector search tasks. Andrey Vasnetsov, CTO at Qdrant is an enthusiast of Open Source, machine learning, and vector search. He works on Open Source projects related to Vector Similarity Search and Similarity Learning. He prefers practical over theoretical, working demo over arXiv paper. ***You can watch this episode on [YouTube](https://www.youtube.com/watch?v=bU38Ovdh3NY).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/bU38Ovdh3NY?si=GiRluTu_c-4jESMj" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> ***This episode is part of the [ML⇄DB Seminar Series](https://db.cs.cmu.edu/seminar2023/#) (Machine Learning for Databases + Databases for Machine Learning) of the Carnegie Mellon University Database Research Group.*** ## **Top Takeaways:** Dive into the intricacies of vector databases with Andrey as he unpacks Qdrant's approach to combining filtering and vector search, revealing how in-place filtering during graph traversal optimizes precision without sacrificing search exactness, even when scaling to billions of vectors. 5 key insights you’ll learn: - 🧠 **The Strategy of Subgraphs:** Dive into how overlapping intervals and geo hash regions can enhance the precision and connectivity within vector search indices. - 🛠️ **Engine vs Database:** Discover the differences between search engines and relational databases and why considering your application's needs is crucial for scalability. - 🌐 **Combining Searches with Relational Data:** Get insights on integrating relational and vector search for improved efficiency and performance. - 🚅 **Speed and Precision Tactics:** Uncover the techniques for controlling search precision and speed by tweaking the beam size in HNSW indices. - 🔗 **Connected Graph Challenges:** Learn about navigating the difficulties of maintaining a connected graph while filtering during search operations. > Fun Fact: The Qdrant system is capable of in-place filtering during graph traversal, which is a novel approach compared to traditional post-filtering methods, ensuring the correct quantity of results that meet the filtering conditions. 
> ## Timestamps:

00:00 Search professional with expertise in vectors and engines.\
09:59 Elasticsearch: scalable, weak consistency, prefer vector search.\
12:53 Optimize data structures for faster processing efficiency.\
21:41 Vector indexes require special treatment, like HNSW's proximity graph and greedy search.\
23:16 HNSW index: approximate, precision control, CPU intensive.\
30:06 Post-filtering inefficient, prefiltering costly.\
34:01 Metadata-based filters; creating additional connecting links.\
41:41 Vector dimension impacts comparison speed, indexing complexity high.\
46:53 Overlapping intervals and subgraphs for precision.\
53:18 Postgres limits scalability, additional indexing engines provide faster queries.\
59:55 Embedding models for time series data explained.\
01:02:01 Cheaper system for serving billion vectors.

## More Quotes from Andrey:

*"It allows us to compress vector to a level where a single dimension is represented by just a single bit, which gives total of 32 times compression for the vector."*\
-- Andrey Vasnetsov on vector compression in AI

*"We build overlapping intervals and we build these subgraphs with additional links for those intervals. And also we can do the same with, let's say, location data where we have geocoordinates, so latitude, longitude, we encode it into geo hashes and basically build this additional graph for overlapping geo hash regions."*\
-- Andrey Vasnetsov

*"We can further compress data using such techniques as delta encoding, as variable byte encoding, and so on. And this total effect, total combined effect of this optimization can make immutable data structures an order of magnitude more efficient than mutable ones."*\
-- Andrey Vasnetsov
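As a small companion to the ideas discussed in the episode (filterable HNSW search, precision versus speed control via the beam size, and one-bit-per-dimension compression), here is an illustrative sketch using the Python client. The collection name, field names, and vector size are made up, the syntax follows a recent `qdrant-client` release, and none of this is code from the episode.

```python
# Illustrative sketch: made-up names, recent qdrant-client syntax.
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

# Binary quantization keeps one bit per dimension,
# roughly 32x smaller than the original float32 vectors.
client.recreate_collection(
    collection_name="demo",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

# The filter is applied during graph traversal (in-place), not after it,
# so the requested number of matching results comes back.
hits = client.search(
    collection_name="demo",
    query_vector=[0.0] * 768,  # placeholder query vector
    query_filter=models.Filter(
        must=[
            models.FieldCondition(key="category", match=models.MatchValue(value="news")),
        ]
    ),
    # hnsw_ef is the beam size: larger values trade speed for precision.
    search_params=models.SearchParams(hnsw_ef=128),
    limit=10,
)
```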
qdrant-landing/content/blog/pienso-qdrant-future-proofing-generative-ai-for-enterprise-level-customers.md
---
draft: true
title: "Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers"
slug: pienso-case-study
short_description: Case study
description: Case study
preview_image: /blog/from_cms/title.webp
date: 2024-01-05T15:10:57.473Z
author: Author
featured: false
---

The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso’s low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable, cost-efficient, and high-performance vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces.

Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions.

## Strengthening LLM Performance

Qdrant enhances the accuracy of large language models (LLMs) by offering an alternative to relying solely on patterns identified during the training phase. By integrating with Qdrant, Pienso will empower customer LLMs with dynamic long-term storage, which will ultimately enable them to generate concrete and factual responses. Qdrant effectively preserves the extensive context windows managed by advanced LLMs, allowing for a broader analysis of the conversation or document at hand. By leveraging this extended context, LLMs can achieve a more comprehensive understanding and produce contextually relevant outputs.

## [](/case-studies/pienso/#joint-dedication-to-scalability-efficiency-and-reliability)Joint Dedication to Scalability, Efficiency and Reliability

> “Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso

Pienso chose Qdrant for its exceptional LLM interoperability, recognizing the potential it offers in maximizing the power of large language models and interactive deep learning for large enterprises. Qdrant excels in efficient nearest neighbor search, which is an expensive and computationally demanding task. Our ability to store and search high-dimensional vectors with remarkable performance and precision will offer significant peace of mind to Pienso’s customers. Through intelligent indexing and partitioning techniques, Qdrant will significantly boost the speed of these searches, accelerating both training and inference processes for users.

### [](/case-studies/pienso/#scalability-preparing-for-sustained-growth-in-data-volumes)Scalability: Preparing for Sustained Growth in Data Volumes

Qdrant’s distributed deployment mode plays a vital role in empowering large enterprises dealing with massive data volumes. It ensures that increasing data volumes do not hinder performance but rather enrich the model’s capabilities, making scalability a seamless process.
Moreover, Qdrant is well-suited for Pienso’s enterprise customers as it operates best on bare metal infrastructure, enabling them to maintain complete control over their data sovereignty and autonomous LLM regimes. This ensures that enterprises can maintain their full span of control while leveraging the scalability and performance benefits of Qdrant’s solution.

### [](/case-studies/pienso/#efficiency-maximizing-the-customer-value-proposition)Efficiency: Maximizing the Customer Value Proposition

Qdrant’s storage efficiency delivers cost savings on hardware while ensuring a responsive system even with extensive data sets. In an independent benchmark stress test, Pienso discovered that Qdrant could efficiently store 128 million documents, consuming a mere 20.4GB of storage and only 1.25GB of memory. This storage efficiency not only minimizes hardware expenses for Pienso’s customers, but also ensures optimal performance, making Qdrant an ideal solution for managing large-scale data with ease and efficiency.

### [](/case-studies/pienso/#reliability-fast-performance-in-a-secure-environment)Reliability: Fast Performance in a Secure Environment

Qdrant’s utilization of Rust, coupled with its memmap storage and write-ahead logging, offers users a powerful combination of high-performance operations, robust data protection, and enhanced data safety measures. Our memmap storage feature offers Pienso fast performance comparable to in-memory storage. In the context of machine learning, where rapid data access and retrieval are crucial for training and inference tasks, this capability proves invaluable.

Furthermore, our write-ahead logging (WAL) is critical to ensuring changes are logged before being applied to the database. This approach adds additional layers of data safety, further safeguarding the integrity of the stored information.

> “We chose Qdrant because it’s fast to query, has a small memory footprint and allows for instantaneous setup of a new vector collection that is going to be queried. Other solutions we evaluated had long bootstrap times and also long collection initialization times [...] This partnership comes at a great time, because it allows Pienso to use Qdrant to its maximum potential, giving our customers a seamless experience while they explore and get meaningful insights about their data.” - Felipe Balduino Cassar, Senior Software Engineer, Pienso

## [](/case-studies/pienso/#whats-next)What’s Next?

Pienso and Qdrant are dedicated to jointly developing the most reliable customer offering for the long term. Our partnership will deliver a combination of no-code/low-code interactive deep learning with efficient vector computation engineered for open source models and libraries.

### [](/case-studies/pienso/#to-learn-more-about-how-we-plan-on-achieving-this-join-the-founders-for-a-technical-fireside-chat-at-0930-pst-thursday-20th-july-on-discordhttpsdiscordggvnvg3fheevent1128331722270969909)To learn more about how we plan on achieving this, join the founders for a [technical fireside chat at 09:30 PST Thursday, 20th July on Discord](https://discord.gg/Vnvg3fHE?event=1128331722270969909).

![](/blog/from_cms/founderschat.png)
qdrant-landing/content/blog/production-scale-rag-for-real-time-news-distillation-robert-caulk-vector-space-talks.md
--- draft: false title: Production-scale RAG for Real-Time News Distillation - Robert Caulk | Vector Space Talks slug: real-time-news-distillation-rag short_description: Robert Caulk tackles the challenges and innovations in open source AI and news article modeling. description: Robert Caulk, founder of Emergent Methods, discusses the complexities of context engineering, the power of Newscatcher API for broader news access, and the sophisticated use of tools like Qdrant for improved recommendation systems, all while emphasizing the importance of efficiency and modularity in technology stacks for real-time data management. preview_image: /blog/from_cms/robert-caulk-bp-cropped.png date: 2024-03-25T08:49:22.422Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - LLM --- > *"We've got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody's just trying to figure out what's going on. And some of these base layer tools like Qdrant really enable products and enable companies and they enable us.”*\ -- Robert Caulk > Robert, Founder of Emergent Methods is a scientist by trade, dedicating his career to a variety of open-source projects that range from large-scale artificial intelligence to discrete element modeling. He is currently working with a team at Emergent Methods to adaptively model over 1 million news articles per day, with a goal of reducing media bias and improving news awareness. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7lQnfv0v2xRtFksGAP6TUW?si=Vv3B9AbjQHuHyKIrVtWL3Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/0ORi9QJlud0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/0ORi9QJlud0?si=rpSOnS2kxTFXiVBq" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Production-scale-RAG-for-Real-Time-News-Distillation---Robert-Caulk--Vector-Space-Talks-015-e2g6464/a-ab0c1sq" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** How do Robert Caulk and Emergent Methods contribute to the open-source community, particularly in AI systems and news article modeling? In this episode, we'll be learning stuff about open-source projects that are reshaping how we interact with AI systems and news article modeling. Robert takes us on an exploration into the evolving landscape of news distribution and the tech making it more efficient and balanced. Here are some takeaways from this episode: 1. **Context Matters**: Discover the importance of context engineering in news and how it ensures a diversified and consumable information flow. 2. **Introducing Newscatcher API**: Get the lowdown on how this tool taps into 50,000 news sources for more thorough and up-to-date reporting. 3. **The Magic of Embedding**: Learn about article summarization and semantic search, and how they're crucial for discovering content that truly resonates. 4. **Qdrant & Cloud**: Explore how Qdrant's cloud offering and its single responsibility principle support a robust, modular approach to managing news data. 5. 
**Startup Superpowers**: Find out why startups have an edge in implementing new tech solutions and how incumbents are tied down by legacy products. > Fun Fact: Did you know that startups' lack of established practices is actually a superpower in the face of new tech paradigms? Legacy products can't keep up! > ## Show notes: 00:00 Intro to Robert and Emergent Methods.\ 05:22 Crucial dedication to scaling context engineering.\ 07:07 Optimizing embedding for semantic similarity in search.\ 13:07 New search technology boosts efficiency and speed.\ 14:17 Reliable cloud provider with privacy and scalability.\ 17:46 Efficient data movement and resource management.\ 22:39 GoLang for services, Rust for security.\ 27:34 Logistics organized; Newscatcher provides up-to-date news.\ 30:27 Tested Weaviate and another in Rust.\ 32:01 Filter updates by starring and user preferences. ## More Quotes from Robert: *"Web search is powerful, but it's slow and ultimately inaccurate. What we're building is real time indexing and we couldn't do that without Qdrant*”\ -- Robert Caulk *"You need to start thinking about persistence and search and making sure those services are robust. That's where Qdrant comes into play. And we found that the all in one solutions kind of sacrifice performance for convenience, or sacrifice accuracy for convenience, but it really wasn't for us. We'd rather just orchestrate it ourselves and let Qdrant do what Qdrant does, instead of kind of just hope that an all in one solution is handling it for us and that allows for modularity performance.”*\ -- Robert Caulk *"Anyone riding the Qdrant wave is just reaping benefits. It seems monthly, like two months ago, sparse vector support got added. There's just constantly new massive features that enable products.”*\ -- Robert Caulk ## Transcript: Demetrios: Robert, it's great to have you here for the vector space talks. I don't know if you're familiar with some of this fun stuff that we do here, but we get to talk with all kinds of experts like yourself on what they're doing when it comes to the vector space and how you've overcome challenges, how you're working through things, because this is a very new field and it is not the most intuitive, as you will tell us more in this upcoming talk. I really am excited because you've been a scientist by trade. Now, you're currently founder at Emergent Methods and you've dedicated your career to a variety of open source projects that range from the large scale AI systems to the discrete element modeling. Now at emergent methods, you are adaptively modeling over 1 million news articles per day. That sounds like a whole lot of news articles. And you've been talking and working through production grade RAG, which is basically everyone's favorite topic these days. So I know you got to talk for us, man. Demetrios: I'm going to hand it over to you. I'll bring up your screen right now, and when someone wants to answer or ask a question, feel free to throw it in the chat and I'll jump out at Robert and stop him if needed. Robert Caulk: Sure. Demetrios: Great to have you here, man. I'm excited for this one. Robert Caulk: Thanks for having me, Demetrios. Yeah, it's a great opportunity. I love talking about vector spaces, parameter spaces. So to talk on the show is great. We've got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody's just trying to figure out what's going on. 
And some of these base layer tools like Qdrant really enable products and enable companies and they enable us. So let me start.

Robert Caulk: Yeah, like you said, I'm Robert and I'm a founder of Emergent Methods. Our background, like you said, we are really committed to free and open source software. We started with a lot of narrow AI. FreqAI was one of our original projects, which is AI/ML for algo trading, very narrow AI, but we came together and built Flowdapt. It's a really nice cluster orchestration software, and I'll talk a little bit about that during this presentation. But some of our background goes into, like you said, large scale deep learning for supercomputers. Really cool, interesting stuff. We have some cloud experience.

Robert Caulk: We really like configuration, so let's dive into it. Why do we actually need to engineer context in the news? There's a lot of reasons why news is important and why it needs to be distributed in a way that's balanced and diversified, but also consumable. Right, let's look at Chat GPT on the left. This is Chat GPT Plus, it's kind of hanging out searching for Gaza news on Bing, trying to find the top three articles live. Web search is powerful, but it's slow and ultimately inaccurate. What we're building is real time indexing and we couldn't do that without Qdrant, and there's a lot of reasons which I'll be perfectly happy to dive into, but eventually Chat GPT will pull something together here. There it is. And the first thing it reports is a 25 day old article with 25 day old news.

Robert Caulk: Old news. So it's just inaccurate. So it's borderline dangerous, what's happening here. Right, so this is a very delicate topic. Engineering context in news properly, which takes a lot of energy, a lot of time and dedication and focus, and not every company really has this sort of resource. So we're talking about enforcing journalistic standards, right? OpenAI and Chat GPT, they just don't have the time and energy to build a dedicated prompt for this sort of thing. It's fine, they're doing great stuff, they're helping you code. But someone needs to step in and really do enforce some journalistic standards here.

Robert Caulk: And that includes enforcing diversity, languages, regions and sources. If I'm going to read about Gaza, what's happening over there, you can bet I want to know what Egypt is saying and what France is saying and what Algeria is saying. So let's do this right. That's kind of what we're suggesting, and the only way to do that is to parse a lot of articles. That's how you avoid outdated, stale reporting. And that's a real danger, which is kind of what we saw on that first slide. Everyone here knows hallucination is a problem and it's something you got to minimize, especially when you're talking about the news. It's just a really high cost if you get it wrong.

Robert Caulk: And so you need people dedicated to this. And if you're going to dedicate a ton of resources and ton of people, you might as well scale that properly. So that's kind of where this comes into. We call this context engineering, news context engineering, to be precise. Before Llama 2, which also is enabling products left and right, as we all know, the traditional pipeline was chunk it up, take 512 tokens, put it through a translator, put it through DistilBART, do some sentence extraction, and maybe text classification, if you're lucky, get some sentiment out of it and it works. It gets you something.
But after, we're talking about reading full articles, getting real rich context, flexible output, translating, summarizing, really deciding that custom extraction on the fly as your product evolves, that's something that the traditional pipeline really just doesn't support. Right.

Robert Caulk: We're talking being able to on the fly say, you know what, actually we want to ask this very particular question of all articles and get this very particular field out. And it's really just a prompt modification. This all is based on having some very high quality, base level, diversified news. And so we'll talk a little bit more. But Newscatcher is one of the sources that we're using, which opens up 50,000 different sources. So check them out. That's newscatcherapi.com. They even give free access to researchers if you're doing research in this.

Robert Caulk: So I don't want to dive too much into the direct RAG stuff. We can go deep, but I'm happy to talk about some examples of how to optimize this and how we've optimized it. Here on the right, you can see the diagram where we're trying to follow along the process of summarizing and embedding. And I'll talk a bit more about that in a moment. It's here to support after we've summarized those articles and we're ready to embed that. Embedding is really important to get that right because like the name of the show suggests you have to have a clean cluster vector space if you're going to be doing any sort of really rich semantic similarity searches. And if you're going to be able to dive deep into extracting important facts out of all 1 million articles a day, you're going to need to do this right. So having a user query which is not equivalent to the embedded page where this is the data, the enriched data that the embedding that we really want to be able to do search on.

Robert Caulk: And then how do we connect the dots here? Of course, there are many ways to go about it. One way which is interesting and fun to talk about is HyDE. So that's basically a hypothetical document embedding. And what you do is you use the LLM directly to generate a fake article. And that's what we're showing here on the right. So let's say if the user says, what's going on in New York City government, well, you could say, hey, write me just a hypothetical summary based, it could be completely fake, and use that to create a fake embedding page and use that for the search. Right. So then you're getting a lot closer to where you want to go.

Robert Caulk: There's some limitations to this, though. There's a computational cost, and also, it's not updated. It's based on whatever. It's basically diving into what it knows about the New York City government and just creating keywords for you. So there's definitely optimizations here as well. When you talk about ambiguity, well, what if the user follows up and says, well, why did they change the rules? Of course, that's where you can start prompt engineering a little bit more and saying, okay, given this historic conversation and the current question, give me some explicit question without ambiguity, and then do the HyDE, if that's something you want to do. The real goal here is to stay in a single parameter space, a single vector space. Stay as close as possible when you're doing your search as when you do your embedding. So we're talking here about production scale stuff.

Robert Caulk: So I really am happy to geek out about the stack, the open source stack that we're relying on, which includes Qdrant here. But let's start with VLLM.
I don't know if you guys have heard of it. This is a really great new project, and their focus is on continuous batching and paged attention. And if I'm being completely honest with you, it's really above my pay grade in the technicals and how they're actually implementing all of that inside the GPU memory. But what we do is we outsource that to that project and we really like what they're doing, and we've seen really good results. It's increasing throughput. So when you're talking about trying to parse through a million articles, you're going to need a lot of throughput.

Robert Caulk: The other is text embedding inference. This is a great server. A lot of vector databases will say, okay, we'll do all the embedding for you and we'll do everything. But when you move to production scale, I'll talk a bit about this later, you need to be using microservice architecture, so it's not super smart to have your database bogged down with sorting out the embeddings and sorting out other things. So honestly, I'm a real big fan of the single responsibility principle, and that's what TEI does for you. And it also does dynamic batching, which is great in this world where everything is heterogeneous lengths of what's coming in and what's going out. So it's great.

Robert Caulk: It really simplifies the process and allows you to isolate resources. But now the star of the show, Qdrant, it's really come into its own. Anyone riding the Qdrant wave is just reaping benefits. It seems monthly, like two months ago, sparse vector support got added. There's just constantly new massive features that enable products. Right. So for us, we're doing so much upsert, we really need to minimize client connections and networking overhead. So you got that batch upsert.

Robert Caulk: The filters are huge. We're talking about real time filtering. We can't be searching on news articles from a month ago, two months ago, if the user is asking a question that's related to the last 24 hours. So having that timestamp filtering and having it be efficient, which is what it is in Qdrant, is huge. Keyword filtering really opens up a massive realm of product opportunities for us. And then the sparse vectors, we hopped on this train immediately and are just seeing benefits. I don't want to say replacement of Elasticsearch, but Elasticsearch is using sparse vectors as well. So you can add SPLADE into Elasticsearch, and SPLADE is great.

Robert Caulk: It's a really great alternative to BM25. It's based on that architecture, and that really opens up a lot of opportunities for filtering out keywords that are kind of useless to the search, when the user uses 'the' and 'a' and 'then' and 'there', these words that are less important. SPLADE is a bit of a hybrid into semantics, but sparse retrieval. So it's really interesting. And then the idea of hybrid search with semantic and a sparse vector also opens up the ability to do ranking, and you got a higher quality product at the end, which is really the goal, right, especially in production. Point number four here, I would say, is probably one of the most important to us, because we're dealing in a world where latency is king, and being able to deploy Qdrant inside of the same cluster as all the other services. So we're just talking through the switch. That's huge. We're never getting bogged down by network.

Robert Caulk: We're never worried about a cloud provider potentially getting overloaded or noisy neighbor problems, stuff like that, completely removed. And then you got high privacy, right.
All the data is completely isolated from the external world. So this point number four, I'd say, is one of the biggest value adds for us. But then distributing deployment is huge because high availability is important, and deep storage, which when you're in the business of news archival, and that's one of our main missions here, is archiving the news forever. That's an ever growing database, and so you need a database that's going to be able to grow with you as your data grows. So what's the TLDR to this context? Engineering? Well, service orchestration is really just based on service orchestration in a very heterogeneous and parallel event driven environment. On the right side, we've got the user requests coming in. Robert Caulk: They're hitting all the same services, which every five minutes or every two minutes, whatever you've scheduled the scrape workflow on, also hitting the same services, this requires some orchestration. So that's kind of where I want to move into discussing the real production, scaling, orchestration of the system and how we're doing that. Provide some diagrams to show exactly why we're using the tools we're using here. This is an overview of our Kubernetes cluster with the services that we're using. So it's a bit of a repaint of the previous diagram, but a better overview about showing kind of how these things are connected and why they're connected. I'll go through one by one on these services to just give a little deeper dive into each one. But the goal here is for us, in our opinion, microservice orchestration is key. Sticking to single responsibility principle. Robert Caulk: Open source projects like Qdrant, like Tei, like VLLM and Kubernetes, it's huge. Kubernetes is opening up doors for security and for latency. And of course, if you're going to be getting involved in this game, you got to find the strong DevOps. There's no escaping that. So let's step through kind of piece by piece and talk about flow Dapp. So that's our project. That's our open source project. We've spent about two years building this for our needs, and we're really excited because we did a public open sourcing maybe last week or the week before. Robert Caulk: So finally, after all of our testing and rewrites and refactors, we're open. We're open for business. And it's running asknews app right now, and we're really excited for where it's going to go and how it's going to help other people orchestrate their clusters. Our goal and our priorities were highly paralyzed compute and we were running tests using all sorts of different executors, comparing them. So when you use Flowdapt, you can choose ray or dask. And that's key. Especially with vanilla Python, zero code changes, you don't need to know how ray or dask works. In the back end, flowdapt is vanilla Python. Robert Caulk: That was a key goal for us to ensure that we're optimizing how data is moving around the cluster. Automatic resource management this goes back to Ray and dask. They're helping manage the resources of the cluster, allocating a GPU to a task, or allocating multiple tasks to one GPU. These can come in very, very handy when you're dealing with very heterogeneous workloads like the ones that we discussed in those previous slides. For us, the biggest priority was ensuring rapid prototyping and debugging locally. When you're dealing with clusters of 1015 servers, 40 or 5100 with ray, honestly, ray just scales as far as you want. 
So when you're dealing with that big of a cluster, it's really imperative that what you see on your laptop is also what you are going to see once you deploy. And being able to debug anything you see in the cluster is big. For us, we really found the need for easy cluster-wide data sharing methods between tasks.

Robert Caulk: So essentially what we've done is made it very easy to get and put values. And so this makes it extremely easy to move data and share data between tasks, make it highly available, and keep it in cluster memory or persist it to disk, so that when you do the inevitable version update or debug, you're reloading from a persisted state. In the real-time news business, scheduling is huge. Scheduling, making sure that various workflows are scheduled at different points and different periods, or frequencies rather, and that they're being scheduled correctly, and that their triggers are triggering exactly what you need when you need it. Huge for real time. And then one of our biggest selling points, if you will, for this project is Kubernetes-style everything. Our goal is everything's Kubernetes-style, so that if you're coming from Kubernetes, everything's familiar, everything's resource oriented.

Robert Caulk: We even have our own flowectl, which would be the kubectl-style command schemas. A lot of what we've done is ensuring deployment cycle efficiency here. So the goal is that Flowdapt can schedule everything and manage all these services for you, create workflows. But why these services? For this particular use case, I'll kind of skip through quickly. I know I'm kind of running out of time here, but of course you're going to need some proprietary remote models. That's just how it works. You're of course going to share that load with on-premise LLMs to reduce cost and to have some reasoning engine on premise. But there are obviously advantages and disadvantages to these.

Robert Caulk: I'm not going to go through them. I'm happy to make these slides available, and you're welcome to kind of parse through the details. Yeah, for sure. You need to start thinking about persistence and search and making sure those services are robust. That's where Qdrant comes into play. And we found that the all-in-one solutions kind of sacrifice performance for convenience, or sacrifice accuracy for convenience, and it really wasn't for us. We'd rather just orchestrate it ourselves and let Qdrant do what Qdrant does, instead of kind of just hoping that an all-in-one solution is handling it for us. And that allows for modularity and performance. And we'll dump Qdrant if we want to.

Robert Caulk: Probably we won't. Or we'll dump it if we need to, or we'll swap out for whatever replaces vLLM. Trying to keep things modular so that future engineers are able to adapt with the tech that's just blowing up and exploding right now. Right. The last thing to talk about here in a production-scale environment is really minimizing the latency. I touched on this with Kubernetes ensuring that these services are sitting on the same network, and that is huge. But that addresses the communication latency. When you start talking about getting hit with a ton of traffic, production scale, tons of people asking a question all simultaneously, and you needing to go hit a variety of services, well, this is where you really need to isolate that to an asynchronous environment.

Robert Caulk: And of course, if you could write this all in Golang, that's probably going to be your best bet for us.
We have some services written in Golang, but predominantly, especially for the endpoints that the ML engineers need to work with, we're using FastAPI on Pydantic, and honestly, it's powerful. Pydantic v2.0 now runs on Rust, and as anyone in the Qdrant community knows, Rust is really valuable when you're dealing with highly parallelized environments that require high security and protections for immutability and atomicity. Forgive me for the pronunciation. That kind of sums up the production scale talk, and I'm happy to answer questions. I love diving into this sort of stuff. I do have some just general thoughts on why startups are so much more well positioned right now than some of these incumbents, and I'll just do kind of a quick run-through, less than a minute, just to kind of get it out there. We can talk about it, see if we agree or disagree.

Robert Caulk: But you touched on it, Demetrios, in the introduction, which was that the best practices have not been established. That's it. That is why startups have such a big advantage. And the reason they're not established is because, well, the new paradigm of technology is just underexplored. We don't really know what the limits are and how to properly handle these things. And that's huge. Meanwhile, some of these incumbents, they're dealing with all sorts of limitations and resistance to change and stuff, and then just market expectations for incumbents maintaining these kinds of legacy products and trying to keep them hobbling along on this old tech. In my opinion, startups, you've got your reasoning engine: building everything around a reasoning engine, using that reasoning engine for every aspect of your system to really open up the adaptivity of your product.

Robert Caulk: And okay, I won't put Elasticsearch in the incumbent world. I'll keep Elasticsearch in the middle. I understand it still has a lot of value, but some of these vendor lock-ins, I'm not a huge fan of. But anyway, that's it. That's kind of all I have to say. But I'm happy to take questions or chat a bit.

Demetrios: Dude, I've got so much to ask you, and thank you for breaking down that stack. That is like the exact type of talk that I love to see, because you open the kimono full on. And I was just playing around with the AskNews app. And so I think it's probably worth me sharing my screen just to show everybody what exactly that is and how that looks at the moment. So you should be able to see it now. Right? And super cool, props to you for what you've built. Because I went in, and intuitively I was able to say like, oh, cool, I can change, I can see positive news, and I can go by the region that I'm looking at. I want to make sure that I'm checking out all the stuff in Europe or all the stuff in America.

Demetrios: Categories, I can look at sports, blah blah blah, like as if you were flipping the old newspaper and you could go to the sports section or the finance section, and then you cite the sources and you see like, oh, what's the trend in the coverage here? What kind of coverage are we getting? Where are we at in the coverage cycle? Probably something like that. And then, wait, although I was on the happy news, I thought, murder, she wrote. So anyway, what we do is we...

Robert Caulk: Actually sort it: we take the poll and we actually just sort from the most positive to the least positive. But you're right, we were talking the other day, we're like, let's just only show the positive. But yeah, that's a good point.

Demetrios: There you go.

Robert Caulk: Murder, she wrote.
Demetrios: But the one thing that I was actually literally just yesterday talking to someone about was how you update things inside of your vector database. So I can imagine that news, as you mentioned, news cycles move very fast, and the news that happened 2 hours ago is very different. The understanding of what happened in a very big news event is very different 2 hours ago than it is right now. So how do you make sure that you're always pulling the most current and up-to-date information?

Robert Caulk: This is another logistical point that we think needs to get sorted properly, and there are a few layers to it. So for us, as we're parsing that data coming in from Newscatcher, Newscatcher is doing a good job of always feeding the latest buckets to us. Sometimes one will kind of arrive late, but generally speaking, it's always the latest news. So we're taking five-minute buckets, and then with those buckets, we're going through and doing all of our enrichment on that, adding it to Qdrant. And that is the point where we use that timestamp filtering, which is such an important point. So in the metadata of Qdrant, we're using the range filter, which is what we call the timestamp filter, but it's really the range filter, and that helps. So when we're going back to update things, we're sorting and ensuring that we're filtering out only what we haven't seen.

Demetrios: Okay, that makes complete sense. And basically you could generalize this to something like what I was talking about with people yesterday, which was, hey, I've got an HR policy that gets updated every other month or every quarter, and I want to make sure that if my HR chatbot is telling people what their vacation policy is, it's pulling from the most recent HR policy. So how do I make sure I do that? And how do I make sure that my vector database isn't like a landmine, where it's pulling any information, but we don't necessarily have that control to be able to pull the correct information? And this comes down to that retrieval evaluation, which is such a hot topic, too.

Robert Caulk: That's true. No, I think that's a key piece of the puzzle. Now, in that particular example, maybe you actually want to go in and start cleansing your database a bit, just to make sure: if it's really something you're never going to need again, you've got to get rid of it. This is a piece I didn't add to the presentation, but it's tangential. You've got to keep multiple databases, and you've got to make sure you're isolating resources and cleaning out a database, especially in real time. So ensuring that your database is representative of what you want to be searching on. And you can do this with collections too, if you want.

Robert Caulk: But we find there's sometimes a good opportunity to isolate resources in that sense, 100%.

Demetrios: So, another question that I had for you was, I noticed Mongo was in the stack. Why did you not just use the Mongo vector option? Is it because of what you were mentioning, where it's like, yeah, you have these all-in-one options, but you sacrifice that performance for the convenience?

Robert Caulk: We didn't test that, to be honest, so I can't say. All I know is we tested Weaviate, we tested one other, and I just really like... although I was going to say I like that it's written in Rust, although I believe Mongo is also written in Rust, if I'm not mistaken. But for us, the document DB is more of a representation of state and what's happening, especially for our configurations and workflows.
Meanwhile, we really like keeping and relying on Qdrant and all the features Qdrant is updating. So, yeah, I'd say the single responsibility principle is key to that. But I saw some chat in the Qdrant Discord about this, which I think is that the only way to use Mongo's vector option is actually to use their cloud offering, if I'm not mistaken. Do you know about this?

Demetrios: Yeah, I think so, too.

Robert Caulk: This would also be a piece that we couldn't do.

Demetrios: Yeah. Where it's like it's open source, but not open source, so that makes sense. Yeah. This has been excellent, man. So I encourage anyone who is out there listening to check it out, again, this is the AskNews app, and stay up to date with the most relevant news in your area and what you like. And I signed in, so I'm guessing that when I sign in, it's going to tweak my settings. Am I going to be able to...

Robert Caulk: Good question.

Demetrios: ...catch this next time.

Robert Caulk: Well, at the moment, if you star a story, a narrative that you find interesting, then you can filter on the star, and whatever the latest updates are, you'll get them for that particular story. Okay. It brings up another point about Qdrant, which is, at the moment we're not doing it yet, but we have plans to use the recommendation system for letting a user kind of create their profile by just saying what they like, what they don't like, and then using the recommender to start recommending stories that they may or may not like. And that's us outsourcing that to Qdrant almost entirely. Right. It's just us building around it. So that's nice.

Demetrios: Yeah. That makes life a lot easier, especially knowing recommender systems. Yeah, that's excellent.

Robert Caulk: Thanks. I appreciate that. For sure. And I'll try to make the slides available. I don't know if I can send them to you two, or to Qdrant or something. They could post them in the Discord maybe, for sure.

Demetrios: And we can post a link to them in the description of this talk. So this has been excellent. Rob, I really appreciate you coming on here and chatting with me about this, and thanks for breaking down everything that you're doing. I also love the vLLM project. It's blowing up. It's cool to see so much usage and all the good stuff that you're doing with it. And yeah, man, for anybody that wants to follow along on your journey, we'll drop a link to your LinkedIn so that they can connect with you and...

Robert Caulk: Cool.

Demetrios: Thank you.

Robert Caulk: Thanks for having me, Demetrios. Talk to you later.

Demetrios: Catch you later, man. Take care.
qdrant-landing/content/blog/qdrant-1.9.x.md
--- title: "Qdrant 1.9.0 - Heighten Your Security With Role-Based Access Control Support" draft: false slug: qdrant-1.9.x short_description: "Granular access control. Optimized shard transfers. Support for byte embeddings." description: "New access control options for RBAC, a much faster shard transfer procedure, and direct support for byte embeddings. " preview_image: /blog/qdrant-1.9.x/social_preview.png social_preview_image: /blog/qdrant-1.9.x/social_preview.png date: 2024-04-24T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - role based access control - byte vectors - binary vectors - quantization - new features --- [Qdrant 1.9.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.9.0) This version complements the release of our new managed product [Qdrant Hybrid Cloud](/hybrid-cloud/) with key security features valuable to our enterprise customers, and all those looking to productionize large-scale Generative AI. **Data privacy, system stability and resource optimizations** are always on our mind - so let's see what's new: - **Granular access control:** You can further specify access control levels by using JSON Web Tokens. - **Optimized shard transfers:** The synchronization of shards between nodes is now significantly faster! - **Support for byte embeddings:** Reduce the memory footprint of Qdrant with official `uint8` support. ## New access control options via JSON Web Tokens Historically, our API key supported basic read and write operations. However, recognizing the evolving needs of our user base, especially large organizations, we've implemented additional options for finer control over data access within internal environments. Qdrant now supports [granular access control using JSON Web Tokens (JWT)](/documentation/guides/security/#granular-access-control-with-jwt). JWT will let you easily limit a user's access to the specific data they are permitted to view. Specifically, JWT-based authentication leverages tokens with restricted access to designated data segments, laying the foundation for implementing role-based access control (RBAC) on top of it. **You will be able to define permissions for users and restrict access to sensitive endpoints.** **Dashboard users:** For your convenience, we have added a JWT generation tool the Qdrant Web UI under the 🔑 tab. If you're using the default url, you will find it at `http://localhost:6333/dashboard#/jwt`. ![jwt-web-ui](/blog/qdrant-1.9.x/jwt-web-ui.png) We highly recommend this feature to enterprises using [Qdrant Hybrid Cloud](/hybrid-cloud/), as it is tailored to those who need additional control over company data and user access. RBAC empowers administrators to define roles and assign specific privileges to users based on their roles within the organization. In combination with [Hybrid Cloud's data sovereign architecture](/documentation/hybrid-cloud/), this feature reinforces internal security and efficient collaboration by granting access only to relevant resources. > **Documentation:** [Read the access level breakdown](/documentation/guides/security/#table-of-access) to see which actions are allowed or denied. ## Faster shard transfers on node recovery We now offer a streamlined approach to [data synchronization between shards](/documentation/guides/distributed_deployment/#shard-transfer-method) during node upgrades or recovery processes. Traditional methods used to transfer the entire dataset, but our new `wal_delta` method focuses solely on transmitting the difference between two existing shards. 
By leveraging the Write-Ahead Log (WAL) of both shards, this method selectively transmits missed operations to the target shard, ensuring data consistency. In some cases, where transfers can take hours, this update **reduces transfers down to a few minutes.** The advantages of this approach are twofold: 1. **It is faster** since only the differential data is transmitted, avoiding the transfer of redundant information. 2. It upholds robust **ordering guarantees**, crucial for applications reliant on strict sequencing. For more details on how this works, check out the [shard transfer documentation](/documentation/guides/distributed_deployment/#shard-transfer-method). > **Note:** There are limitations to consider. First, this method only works with existing shards. Second, while the WALs typically retain recent operations, their capacity is finite, potentially impeding the transfer process if exceeded. Nevertheless, for scenarios like rapid node restarts or upgrades, where the WAL content remains manageable, WAL delta transfer is an efficient solution. Overall, this is a great optional optimization measure and serves as the **auto-recovery default for shard transfers**. It's safe to use everywhere because it'll automatically fall back to streaming records transfer if no difference can be resolved. By minimizing data redundancy and expediting transfer processes, it alleviates the strain on the cluster during recovery phases, enabling faster node catch-up. ## Native support for uint8 embeddings Our latest version introduces [support for uint8 embeddings within Qdrant collections](/documentation/concepts/collections/#vector-datatypes). This feature supports embeddings provided by companies in a pre-quantized format. Unlike previous iterations where indirect support was available via [quantization methods](/documentation/guides/quantization/), this update empowers users with direct integration capabilities. In the case of `uint8`, elements within the vector are represented as unsigned 8-bit integers, encompassing values ranging from 0 to 255. Using these embeddings gives you a **4x memory saving and about a 30% speed-up in search**, while keeping 99.99% of the response quality. As opposed to the original quantization method, with this feature you can spare disk usage if you directly implement pre-quantized embeddings. The configuration is simple. To create a collection with uint8 embeddings, simply add the following `datatype`: ```bash PUT /collections/{collection_name} { "vectors": { "size": 1024, "distance": "Dot", "datatype": "uint8" } } ``` > **Note:** When using Quantization to optimize vector search, you can use this feature to `rescore` binary vectors against new byte vectors. With double the speedup, you will be able to achieve a better result than if you rescored with float vectors. With each byte vector quantized at the binary level, the result will deliver unparalleled efficiency and savings. To learn more about this optimization method, read our [Quantization docs](/documentation/guides/quantization/). 
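For illustration, here is a minimal sketch of what upserting pre-quantized vectors into such a collection could look like. The point IDs, payloads, and vector values are hypothetical, and the vectors are truncated for readability; in practice they must match the collection's configured size, with every element in the 0 to 255 range:

```bash
PUT /collections/{collection_name}/points
{
  "points": [
    { "id": 1, "vector": [12, 255, 0, 31], "payload": { "source": "doc-1" } },
    { "id": 2, "vector": [7, 128, 64, 200], "payload": { "source": "doc-2" } }
  ]
}
```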
## Minor improvements and new features - Greatly improve write performance while creating a snapshot of a large collection - [#3420](https://github.com/qdrant/qdrant/pull/3420), [#3938](https://github.com/qdrant/qdrant/pull/3938) - Report pending optimizations awaiting an update operation in collection info - [#3962](https://github.com/qdrant/qdrant/pull/3962), [#3971](https://github.com/qdrant/qdrant/pull/3971) - Improve `indexed_only` reliability on proxy shards - [#3998](https://github.com/qdrant/qdrant/pull/3998) - Make shard diff transfer fall back to streaming records - [#3798](https://github.com/qdrant/qdrant/pull/3798) - Cancel shard transfers when the shard is deleted - [#3784](https://github.com/qdrant/qdrant/pull/3784) - Improve sparse vectors search performance by another 7% - [#4037](https://github.com/qdrant/qdrant/pull/4037) - Build Qdrant with a single codegen unit to allow better compile-time optimizations - [#3982](https://github.com/qdrant/qdrant/pull/3982) - Remove `vectors_count` from collection info because it is unreliable. **Check if you use this field before upgrading** - [#4052](https://github.com/qdrant/qdrant/pull/4052) - Remove shard transfer method field from abort shard transfer operation - [#3803](https://github.com/qdrant/qdrant/pull/3803)
qdrant-landing/content/blog/qdrant-cloud-on-microsoft-azure.md
--- draft: false title: Introducing Qdrant Cloud on Microsoft Azure slug: qdrant-cloud-on-microsoft-azure short_description: Qdrant Cloud is now available on Microsoft Azure description: "Learn the benefits of Qdrant Cloud on Azure." preview_image: /blog/from_cms/qdrant-azure-2-1.png date: 2024-01-17T08:40:42Z author: Manuel Meyer featured: false tags: - Data Science - Vector Database - Machine Learning - Information Retrieval - Cloud - Azure --- Great news! We've expanded Qdrant's managed vector database offering — [Qdrant Cloud](https://cloud.qdrant.io/) — to be available on Microsoft Azure. You can now effortlessly set up your environment on Azure, which reduces deployment time, so you can hit the ground running. [Get started](https://cloud.qdrant.io/) What this means for you: - **Rapid application development**: Deploy your own cluster through the Qdrant Cloud Console within seconds and scale your resources as needed. - **Billion vector scale**: Seamlessly grow and handle large-scale datasets with billions of vectors. Leverage Qdrant features like horizontal scaling and binary quantization with Microsoft Azure's scalable infrastructure. **"With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale."** -- Jeremy Teichmann (AI Squad Technical Lead & Generative AI Expert), Daly Singh (AI Squad Lead & Product Owner) - Bosch Digital. Get started by [signing up for a Qdrant Cloud account](https://cloud.qdrant.io). And learn more about Qdrant Cloud in our [docs](/documentation/cloud/). <video autoplay="true" loop="true" width="100%" controls><source src="/blog/qdrant-cloud-on-azure/azure-cluster-deployment-short.mp4" type="video/mp4"></video>
qdrant-landing/content/blog/qdrant-cpu-intel-benchmark.md
--- title: "Intel’s New CPU Powers Faster Vector Search" draft: false slug: qdrant-cpu-intel-benchmark short_description: "New generation silicon is a game-changer for AI/ML applications." description: "Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space. " preview_image: /blog/qdrant-cpu-intel-benchmark/social_preview.jpg social_preview_image: /blog/qdrant-cpu-intel-benchmark/social_preview.jpg date: 2024-05-10T00:00:00-08:00 author: David Myriel, Kumar Shivendu featured: false tags: - vector search - intel benchmark - next gen cpu - vector database --- #### New generation silicon is a game-changer for AI/ML applications ![qdrant cpu intel benchmark report](/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark.png) > *Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space.* Vector search is surging in popularity with institutional customers, and Intel is ready to support the emerging industry. Their latest generation CPU performed exceptionally with Qdrant, a leading vector database used for enterprise AI applications. Intel just released the latest Xeon processor (**codename: Emerald Rapids**) for data centers, a market which is expected to grow to $45 billion. Emerald Rapids offers higher-performance computing and significant energy efficiency over previous generations. Compared to the 4th generation Sapphire Rapids, Emerald boosts AI inference performance by up to 42% and makes vector search 38% faster. ## The CPU of choice for vector database operations The latest generation CPU performed exceptionally in tests carried out by Qdrant’s R&D division. Intel’s CPU was stress-tested for query speed, database latency and vector upload time against massive-scale datasets. Results showed that machines with 32 cores were 1.38x faster at running queries than their previous generation counterparts. In this range, Qdrant’s latency also dropped 2.79x when compared to Sapphire. Qdrant strongly recommends the use of Intel’s next-gen chips in the 8-64 core range. In addition to being a practical number of cores for most machines in the cloud, this compute capacity will yield the best results with mass-market use cases. The CPU affects vector search by influencing the speed and efficiency of mathematical computations. As of recently, companies have started using GPUs to carry large workloads in AI model training and inference. However, for vector search purposes, studies show that CPU architecture is a great fit because it can handle concurrent requests with great ease. > *“Vector search is optimized for CPUs. Intel’s new CPU brings even more performance improvement and makes vector operations blazing fast for AI applications. Customers should consider deploying more CPUs instead of GPU compute power to achieve best performance results and reduce costs simultaneously.”* > > - André Zayarni, Qdrant CEO ## **Why does vector search matter?** ![qdrant cpu intel benchmark report](/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark-future.png) Vector search engines empower AI to look deeper into stored data and retrieve strong relevant responses. Qdrant’s vector database is key to modern information retrieval and machine learning systems. Those looking to run massive-scale Retrieval Augmented Generation (RAG) solutions need to leverage such semantic search engines in order to generate the best results with their AI products. 
Qdrant is purpose-built to enable developers to store and search for high-dimensional vectors efficiently. It easily integrates with a host of AI/ML tools: Large Language Models (LLM), frameworks such as LangChain, LlamaIndex or Haystack, and service providers like Cohere, OpenAI, and Ollama. ## Supporting enterprise-scale AI/ML The market is preparing for a host of artificial intelligence and machine learning cases, pushing compute to the forefront of the innovation race. The main strength of a vector database like Qdrant is that it can consistently support the user way past the prototyping and launch phases. Qdrant’s product is already being used by large enterprises with billions of data points. Such users can go from testing to production almost instantly. Those looking to host large applications might only need up to 18GB RAM to support 1 million OpenAI Vectors. This makes Qdrant the best option for maximizing resource usage and data connection. Intel’s latest development is crucial to the future of vector databases. Vector search operations are very CPU-intensive. Therefore, Qdrant relies on the innovations made by chip makers like Intel to offer large-scale support. > *“Vector databases are a mainstay in today’s AI/ML toolchain, powering the latest generation of RAG and other Gen AI Applications. In teaming with Qdrant, Intel is helping enterprises deliver cutting-edge Gen-AI solutions and maximize their ROI by leveraging Qdrant’s high-performant and cost-efficient vector similarity search capabilities running on latest Intel Architecture based infrastructure across deployment models.”* > > - Arijit Bandyopadhyay, CTO - Enterprise Analytics & AI, Head of Strategy – Cloud and Enterprise, CSV Group, Intel Corporation ## Advancing vector search and the role of next-gen CPUs Looking ahead, the vector database market is on the cusp of significant growth, particularly for the enterprise market. Developments in CPU technologies, such as those from Intel, are expected to enhance vector search operations by 1) improving processing speeds and 2) boosting retrieval efficiency and quality. This will allow enterprise users to easily manage large and more complex datasets and introduce AI on a global scale. As large companies continue to integrate sophisticated AI and machine learning tools, the reliance on robust vector databases is going to increase. This evolution in the market underscores the importance of continuous hardware innovation in meeting the expanding demands of data-intensive applications, with Intel's contributions playing a notable role in shaping the future of enterprise-scale AI/ML solutions. ## Next steps Qdrant is open source and offers a complete SaaS solution, hosted on AWS, GCP, and Azure. Getting started is easy, either spin up a [container image](https://hub.docker.com/r/qdrant/qdrant) or start a [free Cloud instance](https://cloud.qdrant.io/login). The documentation covers [adding the data](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [creating your indices](/documentation/tutorials/optimize/). We would love to hear about what you are building and please connect with our engineering team on [Github](https://github.com/qdrant/qdrant), [Discord](https://discord.com/invite/tdtYvXjC4h), or [LinkedIn](https://www.linkedin.com/company/qdrant).
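For reference, spinning up the container image mentioned above is a one-liner; a minimal local sketch (default REST port, no persistent storage configured):

```bash
# Pull and run the latest Qdrant image, exposing the REST API on localhost:6333
docker pull qdrant/qdrant
docker run -p 6333:6333 qdrant/qdrant
```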
qdrant-landing/content/blog/qdrant-has-joined-nvidia-inception-program.md
--- draft: false preview_image: /blog/from_cms/inception.png sitemapExclude: true title: Qdrant has joined NVIDIA Inception Program slug: qdrant-joined-nvidia-inception-program short_description: Recently Qdrant has become a member of the NVIDIA Inception. description: Along with the various opportunities it gives, we are the most excited about GPU support since it is an essential feature in Qdrant's roadmap. Stay tuned for our new updates. date: 2022-04-04T12:06:36.819Z author: Alyona Kavyerina featured: false author_link: https://www.linkedin.com/in/alyona-kavyerina/ tags: - Corporate news - NVIDIA categories: - News --- Recently we've become a member of the NVIDIA Inception. It is a program that helps boost the evolution of technology startups through access to their cutting-edge technology and experts, connects startups with venture capitalists, and provides marketing support. Along with the various opportunities it gives, we are the most excited about GPU support since it is an essential feature in Qdrant's roadmap. Stay tuned for our new updates.
qdrant-landing/content/blog/qdrant-n8n.md
--- title: "Chat with a codebase using Qdrant and N8N" draft: false slug: qdrant-n8n short_description: Integration demo description: Building a RAG-based chatbot using Qdrant and N8N to chat with a codebase on GitHub preview_image: /blog/qdrant-n8n/preview.jpg date: 2024-01-06T04:09:05+05:30 author: Anush Shetty featured: false tags: - integration - n8n - blog --- n8n (pronounced n-eight-n) helps you connect any app with an API. You can then manipulate its data with little or no code. With the Qdrant node on n8n, you can build AI-powered workflows visually. Let's go through the process of building a workflow. We'll build a chat with a codebase service. ## Prerequisites - A running Qdrant instance. If you need one, use our [Quick start guide](/documentation/quick-start/) to set it up. - An OpenAI API Key. Retrieve your key from the [OpenAI API page](https://platform.openai.com/account/api-keys) for your account. - A GitHub access token. If you need to generate one, start at the [GitHub Personal access tokens page](https://github.com/settings/tokens/). ## Building the App Our workflow has two components. Refer to the [n8n quick start guide](https://docs.n8n.io/workflows/create/) to get acquainted with workflow semantics. - A workflow to ingest a GitHub repository into Qdrant - A workflow for a chat service with the ingested documents #### Workflow 1: GitHub Repository Ingestion into Qdrant ![GitHub to Qdrant workflow](/blog/qdrant-n8n/load-demo.gif) For this workflow, we'll use the following nodes: - [Qdrant Vector Store - Insert](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#insert-documents): Configure with [Qdrant credentials](https://docs.n8n.io/integrations/builtin/credentials/qdrant/) and a collection name. If the collection doesn't exist, it's automatically created with the appropriate configurations. - [GitHub Document Loader](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.documentgithubloader/): Configure the GitHub access token, repository name, and branch. In this example, we'll use [qdrant/demo-food-discovery@main](https://github.com/qdrant/demo-food-discovery). - [Embeddings OpenAI](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): Configure with OpenAI credentials and the embedding model options. We use the [text-embedding-ada-002](https://platform.openai.com/docs/models/embeddings) model. - [Recursive Character Text Splitter](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/): Configure the [text splitter options](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/#node-parameters ). We use the defaults in this example. Connect the workflow to a manual trigger. Click "Test Workflow" to run it. You should be able to see the progress in real-time as the data is fetched from GitHub, transformed into vectors and loaded into Qdrant. 
#### Workflow 2: Chat Service with Ingested Documents

![Chat workflow](/blog/qdrant-n8n/chat.png)

The workflow uses the following nodes:

- [Qdrant Vector Store - Retrieve](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#retrieve-documents-for-agentchain): Configure with [Qdrant credentials](https://docs.n8n.io/integrations/builtin/credentials/qdrant/) and the name of the collection the data was loaded into in workflow 1.
- [Retrieval Q&A Chain](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/): Configure with default values.
- [Embeddings OpenAI](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): Configure with OpenAI credentials and the embedding model options. We use the [text-embedding-ada-002](https://platform.openai.com/docs/models/embeddings) model.
- [OpenAI Chat Model](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/): Configure with OpenAI credentials and the chat model name. We use [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) for the demo.

Once configured, hit the "Chat" button to initiate the chat interface and begin a conversation with your codebase.

![Chat demo](/blog/qdrant-n8n/chat-demo.png)

To embed the chat in your applications, consider using the [@n8n/chat](https://www.npmjs.com/package/@n8n/chat) package. Additionally, n8n supports scheduled workflows and can be triggered by events across various applications.

## Further reading

- [n8n Documentation](https://docs.n8n.io/)
- [n8n Qdrant Node documentation](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#qdrant-vector-store)

qdrant-landing/content/blog/qdrant-stars-announcement.md
--- title: "Introducing Qdrant Stars: Join Our Ambassador Program!" draft: false slug: qdrant-stars-announcement # Change this slug to your page slug if needed short_description: Qdrant Stars recognizes and supports key contributors to the Qdrant ecosystem through content creation and community leadership. # Change this description: Say hello to the first Qdrant Stars and learn more about our new ambassador program! preview_image: /blog/qdrant-stars-announcement/preview-image.png social_preview_image: /blog/qdrant-stars-announcement/preview-image.png date: 2024-05-19T11:57:37-03:00 author: Sabrina Aquino featured: true tags: - news - vector search - qdrant - ambassador program - community --- We're excited to introduce **Qdrant Stars**, our new ambassador program created to recognize and support Qdrant users making a strong impact in the AI and vector search space. Whether through innovative content, real-world applications tutorials, educational events, or engaging discussions, they are constantly making vector search more accessible and interesting to explore. ### 👋 Say hello to the first Qdrant Stars! Our inaugural Qdrant Stars are a diverse and talented lineup who have shown exceptional dedication to our community. You might recognize some of their names: <div style="display: flex; flex-direction: column;"> <div class="qdrant-stars"> <div style="display: flex; align-items: center;"> <h5>Robert Caulk</h5> <a href="https://www.linkedin.com/in/rcaulk/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Robert LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/robert-caulk-profile.jpeg" alt="Robert Caulk" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Robert is working with a team on <a href="https://asknews.app">AskNews</a> to adaptively enrich, index, and report on over 1 million news articles per day. His team maintains an open-source tool geared toward cluster orchestration <a href="https://flowdapt.ai">Flowdapt</a>, which moves data around highly parallelized production environments. This is why Robert and his team rely on Qdrant for low-latency, scalable, hybrid search across dense and sparse vectors in asynchronous environments.</p> </div> </div> <blockquote> I am interested in brainstorming innovative ways to interact with Qdrant vector databases and building presentations that show the power of coupling Flowdapt with Qdrant for large-scale production GenAI applications. I look forward to networking with Qdrant experts and users so that I can learn from their experience. </blockquote> <div style="display: flex; align-items: center;"> <h5>Joshua Mo</h5> <a href="https://www.linkedin.com/in/joshua-mo-4146aa220/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Josh LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/Josh-Mo-profile.jpg" alt="Josh" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Josh is a Rust developer and DevRel Engineer at <a href="https://shuttle.rs">Shuttle</a>, assisting with user engagement and being a point of contact for first-line information within the community. 
He's often writing educational content that combines Javascript with Rust and is a coach at Codebar, which is a charity that runs free programming workshops for minority groups within tech.</p> </div> </div> <blockquote> I am excited about getting access to Qdrant's new features and contributing to the AI community by demonstrating how those features can be leveraged for production environments. </blockquote> <div style="display: flex; align-items: center;"> <h5>Nicholas Khami</h5> <a href="https://www.linkedin.com/in/nicholas-khami-5a0a7a135/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Nick LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/ai-headshot-Nick-K.jpg" alt="Nick" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Nick is a founder and product engineer at <a href="https://trieve.ai/">Trieve</a> and has been using Qdrant since late 2022. He has a low level understanding of the Qdrant API, especially the Rust client, and knows a lot about how to make the most of Qdrant on an application level.</p> </div> </div> <blockquote> I'm looking forward to be helping folks use lesser known features to enhance and make their projects better! </blockquote> <div style="display: flex; align-items: center;"> <h5>Owen Colegrove</h5> <a href="https://www.linkedin.com/in/owencolegrove/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Owen LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/Prof-Owen-Colegrove.jpeg" alt="Owen Colegrove" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Owen Colegrove is the Co-Founder of <a href="https://www.sciphi.ai/">SciPhi</a>, making it easy build, deploy, and scale RAG systems using Qdrant vector search tecnology. He has Ph.D. in Physics and was previously a Quantitative Strategist at Citadel and a Researcher at CERN.</p> </div> </div> <blockquote> I'm excited about working together with Qdrant! </blockquote> <div style="display: flex; align-items: center;"> <h5>Kameshwara Pavan Kumar Mantha</h5> <a href="https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Pavan LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/pic-Kameshwara-Pavan-Kumar-Mantha2.jpeg" alt="Kameshwara Pavan" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Kameshwara Pavan is a expert with 14 years of extensive experience in full stack development, cloud solutions, and AI. Specializing in Generative AI and LLMs. Pavan has established himself as a leader in these cutting-edge domains. He holds a Master's in Data Science and a Master's in Computer Applications, and is currently pursuing his PhD.</p> </div> </div> <blockquote> Outside of my professional pursuits, I'm passionate about sharing my knowledge through technical blogging, engaging in technical meetups, and staying active with cycling. 
I admire the groundbreaking work Qdrant is doing in the industry, and I'm eager to collaborate and learn from the team that drives such exceptional advancements. </blockquote> <div style="display: flex; align-items: center;"> <h5>Niranjan Akella</h5> <a href="https://www.linkedin.com/in/niranjanakella/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Niranjan LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/nj-Niranjan-Akella.png" alt="Niranjan Akella" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Niranjan is an AI/ML Engineer at <a href="https://www.genesys.com/">Genesys</a> who specializes in building and deploying AI models such as LLMs, Diffusion Models, and Vision Models at scale. He actively shares his projects through content creation and is passionate about applied research, developing custom real-time applications that that serve a greater purpose. </p> </div> </div> <blockquote> I am a scientist by heart and an AI engineer by profession. I'm always armed to take a leap of faith into the impossible to be come the impossible. I'm excited to explore and venture into Qdrant Stars with some support to build a broader community and develop a sense of completeness among like minded people. </blockquote> <div style="display: flex; align-items: center;"> <h5>Bojan Jakimovski</h5> <a href="https://www.linkedin.com/in/bojan-jakimovski/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Bojan LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/Bojan-preview.jpeg" alt="Bojan Jakimovski" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Bojan is an Advanced Machine Learning Engineer at <a href="https://www.loka.com/">Loka</a> currently pursuing a Master’s Degree focused on applying AI in Heathcare. He is specializing in Dedicated Computer Systems, with a passion for various technology fields. </p> </div> </div> <blockquote> I'm really excited to show the power of the Qdrant as vector database. Especially in some fields where accessing the right data by very fast and efficient way is a must, in fields like Healthcare and Medicine. </blockquote> </div> We are happy to welcome this group of people who are deeply committed to advancing vector search technology. We look forward to supporting their vision, and helping them make a bigger impact on the community. You can find and chat with them at our [Discord Community](discord.gg/qdrant). ### Why become a Qdrant Star? There are many ways you can benefit from the Qdrant Star Program. Here are just a few: ##### Exclusive rewards programs Celebrate top contributors monthly with special rewards, including exclusive swag and monetary prizes. Quarterly awards for 'Most Innovative Content' and 'Best Tutorial' offer additional prizes. ##### Early access to new features Be the first to explore and write about our latest features and beta products. Participate in product meetings where your ideas and suggestions can directly influence our roadmap. ##### Conference support We love seeing our stars on stage! 
If you're planning to attend and speak about Qdrant at conferences, we've got you covered. Receive presentation templates, mentorship, and educational materials to help deliver standout conference presentations, with travel expenses covered.

##### Qdrant Certification

End the program as a certified Qdrant ambassador and vector search specialist, with provided training resources and a certification test to showcase your expertise.

### What do Qdrant Stars do?

As a Qdrant Star, you'll share your knowledge with the community through articles, blogs, tutorials, or demos that highlight the power and versatility of vector search technology - in your own creative way. You'll be a friendly face and a trusted expert in the community, sparking discussions on topics you love and keeping our community active and engaged.

Love organizing events? You'll have the chance to host meetups, workshops, and other educational gatherings, with all the promotional and logistical support you need to make them a hit. But if large conferences are your thing, we'll provide the resources and cover your travel expenses so you can focus on delivering an outstanding presentation.

You'll also have a say in the Qdrant roadmap by giving feedback on new features and participating in product meetings. Qdrant Stars are constantly contributing to the growth and value of the vector search ecosystem.

### How to join the Qdrant Stars Program

Are you interested in becoming a Qdrant Star? We're on the lookout for individuals who are passionate about vector search technology and looking to make an impact in the AI community. If you have a strong understanding of vector search technologies, enjoy creating content and speaking at conferences, and actively engage with our community, you're a great fit. If this sounds like you, don't hesitate to apply. We look forward to potentially welcoming you as our next Qdrant Star.

[Apply here!](https://forms.gle/q4fkwudDsy16xAZk8) Share your journey with vector search technologies and how you plan to contribute further.

#### Nominate a Qdrant Star

Do you know someone who could be our next Qdrant Star? Please submit your nomination through our [nomination form](https://forms.gle/n4zv7JRkvnp28qv17), explaining why they're a great fit. Your recommendation could help us find the next standout ambassador.

#### Learn More

For detailed information about the program's benefits, activities, and perks, refer to the [Qdrant Stars Handbook](https://qdrant.github.io/qdrant-stars-handbook/). To connect with current Stars, ask questions, and stay updated on the latest news and events at Qdrant, [join our Discord community](http://discord.gg/qdrant).
qdrant-landing/content/blog/qdrant-supports-arm-architecture.md
--- draft: false title: Qdrant supports ARM architecture! slug: qdrant-supports-arm-architecture short_description: Qdrant announces ARM architecture support, expanding accessibility and performance for their advanced data indexing technology. description: Qdrant's support for ARM architecture marks a pivotal step in enhancing accessibility and performance. This development optimizes data indexing and retrieval. preview_image: /blog/from_cms/docker-preview.png date: 2022-09-21T09:49:53.352Z author: Kacper Łukawski featured: false tags: - Vector Search - Vector Search Engine - Embedding - Neural Networks - Database --- The processor architecture is a thing that the end-user typically does not care much about, as long as all the applications they use run smoothly. If you use a PC then chances are you have an x86-based device, while your smartphone rather runs on an ARM processor. In 2020 Apple introduced their ARM-based M1 chip which is used in modern Mac devices, including notebooks. The main differences between those two architectures are the set of supported instructions and energy consumption. ARM’s processors have a way better energy efficiency and are cheaper than their x86 counterparts. That’s why they became available as an affordable alternative in the hosting providers, including the cloud. ![](/blog/from_cms/1_seaglc6jih2qknoshqbf1q.webp "An image generated by Stable Diffusion with a query “two computer processors fightning against each other”") In order to make an application available for ARM users, it has to be compiled for that platform. Otherwise, it has to be emulated by the device, which gives an additional overhead and reduces its performance. We decided to provide the [Docker images](https://hub.docker.com/r/qdrant/qdrant/) targeted especially at ARM users. Of course, using a limited set of processor instructions may impact the performance of your vector search, and that’s why we decided to test both architectures using a similar setup. ## Test environments AWS offers ARM-based EC2 instances that are 20% cheaper than the x86 corresponding alternatives with a similar configuration. That estimate has been done for the eu-central-1 region (Frankfurt) and R6g/R6i instance families. For the purposes of this comparison, we used an r6i.large instance (Intel Xeon) and compared it to r6g.large one (AWS Graviton2). Both setups have 2 vCPUs and 16 GB of memory available and these were the smallest comparable instances available. ## The results For the purposes of this test, we created some random vectors which were compared with cosine distance. ### Vector search During our experiments, we performed 1000 search operations for both ARM64 and x86-based setups. We didn’t measure the network overhead, only the time measurements returned by the engine in the API response. The chart below shows the distribution of that time, separately for each architecture. ![](/blog/from_cms/1_zvuef4ri6ztqjzbsocqj_w.webp "The latency distribution of search requests: arm vs x86") It seems that ARM64 might be an interesting alternative if you are on a budget. It is 10% slower on average, and 20% slower on the median, but the performance is more consistent. It seems like it won’t be randomly 2 times slower than the average, unlike x86. That makes ARM64 a cost-effective way of setting up vector search with Qdrant, keeping in mind it’s 20% cheaper on AWS. You do get less for less, but surprisingly more than expected.
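If you want to reproduce a comparison like this, note that the linked Docker Hub repository serves both architectures from the same tag; a small sketch, assuming a reasonably recent Docker with multi-arch manifest support:

```bash
# Explicitly request the ARM64 build (on an ARM host such as r6g.large,
# a plain `docker pull qdrant/qdrant` resolves to it automatically)
docker pull --platform linux/arm64 qdrant/qdrant

# Confirm which architecture ended up on the machine
docker image inspect qdrant/qdrant --format '{{.Architecture}}'
```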
qdrant-landing/content/blog/qdrant-unstructured.md
--- title: Loading Unstructured.io Data into Qdrant from the Terminal slug: qdrant-unstructured short_description: Loading Unstructured Data into Qdrant from the Terminal description: Learn how to simplify the process of loading unstructured data into Qdrant using the Qdrant Unstructured destination. preview_image: /blog/qdrant-unstructured/preview.jpg date: 2024-01-09T00:41:38+05:30 author: Anush Shetty tags: - integrations - qdrant - unstructured --- Building powerful applications with Qdrant starts with loading vector representations into the system. Traditionally, this involves scraping or extracting data from sources, performing operations such as cleaning, chunking, and generating embeddings, and finally loading it into Qdrant. While this process can be complex, Unstructured.io includes Qdrant as an ingestion destination. In this blog post, we'll demonstrate how to load data into Qdrant from the channels of a Discord server. You can use a similar process for the [20+ vetted data sources](https://unstructured-io.github.io/unstructured/ingest/source_connectors.html) supported by Unstructured. ### Prerequisites - A running Qdrant instance. Refer to our [Quickstart guide](/documentation/quick-start/) to set up an instance. - A Discord bot token. Generate one [here](https://discord.com/developers/applications) after adding the bot to your server. - Unstructured CLI with the required extras. For more information, see the Discord [Getting Started guide](https://discord.com/developers/docs/getting-started). Install it with the following command: ```bash pip install unstructured[discord,local-inference,qdrant] ``` Once you have the prerequisites in place, let's begin the data ingestion. ### Retrieving Data from Discord To generate structured data from Discord using the Unstructured CLI, run the following command with the [channel IDs](https://www.pythondiscord.com/pages/guides/pydis-guides/contributing/obtaining-discord-ids/): ```bash unstructured-ingest \ discord \ --channels <CHANNEL_IDS> \ --token "<YOUR_BOT_TOKEN>" \ --output-dir "discord-output" ``` This command downloads and structures the data in the `"discord-output"` directory. For a complete list of options supported by this source, run: ```bash unstructured-ingest discord --help ``` ### Ingesting into Qdrant Before loading the data, set up a collection with the information you need for the following REST call. In this example we use a local Huggingface model generating 384-dimensional embeddings. You can create a Qdrant [API key](/documentation/cloud/authentication/#create-api-keys) and set names for your Qdrant [collections](/documentation/concepts/collections/). We set up the collection with the following command: ```bash curl -X PUT \ <QDRANT_URL>/collections/<COLLECTION_NAME> \ -H 'Content-Type: application/json' \ -H 'api-key: <QDRANT_API_KEY>' \ -d '{ "vectors": { "size": 384, "distance": "Cosine" } }' ``` You should receive a response similar to: ```console {"result":true,"status":"ok","time":0.196235768} ``` To ingest the Discord data into Qdrant, run: ```bash unstructured-ingest \ local \ --input-path "discord-output" \ --embedding-provider "langchain-huggingface" \ qdrant \ --collection-name "<COLLECTION_NAME>" \ --api-key "<QDRANT_API_KEY>" \ --location "<QDRANT_URL>" ``` This command loads structured Discord data into Qdrant with sensible defaults. You can configure the data fields for which embeddings are generated in the command options. 
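To sanity-check the ingestion, you can, for example, ask Qdrant for an exact count of the points that landed in the collection (same placeholders as above); the returned count should roughly correspond to the number of document elements that were embedded:

```bash
curl -X POST \
  <QDRANT_URL>/collections/<COLLECTION_NAME>/points/count \
  -H 'Content-Type: application/json' \
  -H 'api-key: <QDRANT_API_KEY>' \
  -d '{ "exact": true }'
```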
Qdrant ingestion also supports partitioning and chunking of your data, configurable directly from the CLI. Learn more about it in the [Unstructured documentation](https://unstructured-io.github.io/unstructured/core.html). To list all the supported options of the Qdrant ingestion destination, run: ```bash unstructured-ingest local qdrant --help ``` Unstructured can also be used programmatically or via the hosted API. Refer to the [Unstructured Reference Manual](https://unstructured-io.github.io/unstructured/introduction.html). For more information about the Qdrant ingest destination, review how Unstructured.io configures their [Qdrant](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html) interface.
qdrant-landing/content/blog/qdrant-updated-benchmarks-2024.md
---
title: "Qdrant Updated Benchmarks 2024"
draft: false
slug: qdrant-benchmarks-2024 # Change this slug to your page slug if needed
short_description: Qdrant Updated Benchmarks 2024 # Change this
description: We've compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis # Change this
preview_image: /benchmarks/social-preview.png # Change this
categories:
- News
# social_preview_image: /blog/Article-Image.png # Optional image used for link previews
# title_preview_image: /blog/Article-Image.png # Optional image used for blog post title
# small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts
date: 2024-01-15T09:29:33-03:00
author: Sabrina Aquino # Change this
featured: false # if true, this post will be featured on the blog page
tags: # Change this, related by tags posts will be shown on the blog page
- qdrant
- benchmarks
- performance
---

It's time for an update to Qdrant's benchmarks! We've compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis. Let's get into what's new and what remains the same in our approach.

### What's Changed?

#### All engines have improved

Since the last time we ran our benchmarks, we received a bunch of suggestions on how to run other engines more efficiently, and we applied them. This has resulted in significant improvements across all engines, with gains of nearly four times in certain cases. You can view the previous benchmark results [here](/benchmarks/single-node-speed-benchmark-2022/).

#### Introducing a New Dataset

To ensure our benchmark aligns with the requirements of serving RAG applications at scale, currently the most common use case of vector databases, we have introduced a new dataset consisting of 1 million OpenAI embeddings.

![rps vs precision benchmark - up and to the right is better](/blog/qdrant-updated-benchmarks-2024/rps-bench.png)

#### Separation of Latency vs RPS Cases

Different applications have distinct requirements when it comes to performance. To address this, we have made a clear separation between latency and requests-per-second (RPS) cases. For example, a self-driving car's object recognition system aims to process requests as quickly as possible, while a web server focuses on serving multiple clients simultaneously. By simulating both scenarios and allowing configurations for 1 or 100 parallel readers, our benchmark provides a more accurate evaluation of search engine performance.

![mean-time vs precision benchmark - down and to the right is better](/blog/qdrant-updated-benchmarks-2024/latency-bench.png)

### What Hasn't Changed?

#### Our Principles of Benchmarking

At Qdrant, all code stays open source. We ensure our benchmarks are accessible for everyone, allowing you to run them on your own hardware. Your input matters to us, and contributions and sharing of best practices are welcome!

Our benchmarks are strictly limited to open-source solutions, ensuring hardware parity and avoiding biases from external cloud components.

We deliberately don't include libraries or algorithm implementations in our comparisons because our focus is squarely on vector databases. Why? Because libraries like FAISS, while useful for experiments, don't fully address the complexities of real-world production environments.
They lack features like real-time updates, CRUD operations, high availability, scalability, and concurrent access – essentials in production scenarios. A vector search engine is more than its indexing algorithm; what matters is its overall performance in production.

We use the same benchmark datasets as the [ann-benchmarks](https://github.com/erikbern/ann-benchmarks/#data-sets) project so you can compare our performance and accuracy against it.

### Detailed Report and Access

For an in-depth look at our latest benchmark results, we invite you to read the [detailed report](/benchmarks/).

If you're interested in testing the benchmark yourself or want to contribute to its development, head over to our [benchmark repository](https://github.com/qdrant/vector-db-benchmark).

We appreciate your support and involvement in improving the performance of vector databases.
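If you would like to reproduce the numbers yourself, the benchmark repository drives everything from a single script. The engine and dataset patterns below are illustrative, not a definitive invocation; check the repository README for the exact identifiers and setup steps.

```bash
# Fetch the benchmark suite.
git clone https://github.com/qdrant/vector-db-benchmark.git
cd vector-db-benchmark

# Install dependencies as described in the repository README, then run a
# Qdrant configuration against a dataset. The engine and dataset patterns
# below are examples only.
python run.py --engines "qdrant-*" --datasets "dbpedia-openai-1M-*"
```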
qdrant-landing/content/blog/qdrant-v-0-6-0-engine-with-grpc-released.md
---
draft: true
title: Qdrant v0.6.0 engine with gRPC interface has been released
short_description: We’ve released a new engine, version 0.6.0.
description: We’ve released a new engine, version 0.6.0. The main feature of the release is the gRPC interface.
preview_image: /blog/qdrant-v-0-6-0-engine-with-grpc-released/upload_time.png
date: 2022-03-10T01:36:43+03:00
author: Alyona Kavyerina
author_link: https://medium.com/@alyona.kavyerina
featured: true
categories:
- News
tags:
- gRPC
- release
sitemapExclude: True
---

We’ve released a new engine, version 0.6.0. The main feature of the release is the gRPC interface. It is much faster than the REST API and ensures higher app performance thanks to the following features:

- connection re-use;
- binary protocol;
- separation of schema from data.

This results in 3 times faster data uploading in our benchmarks:

![REST API vs gRPC upload time, sec](/blog/qdrant-v-0-6-0-engine-with-grpc-released/upload_time.png)

Read more about the gRPC interface and whether you should use it at this [link](/documentation/quick_start/#grpc).

The v0.6.0 release also includes several bug fixes. More information is available in the [changelog](https://github.com/qdrant/qdrant/releases/tag/v0.6.0).

The new interface is provided in addition to the REST API, which we keep supporting because it is easy to debug.
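If you want to try the gRPC interface from Python, recent versions of the official client can prefer gRPC for data operations. A minimal sketch, assuming a local instance with the default ports (REST on 6333, gRPC on 6334):

```python
from qdrant_client import QdrantClient

# Prefer gRPC for data-heavy operations such as uploads, while keeping the
# REST port available for easy debugging.
client = QdrantClient(
    host="localhost",
    port=6333,       # REST
    grpc_port=6334,  # gRPC
    prefer_grpc=True,
)

print(client.get_collections())
```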
qdrant-landing/content/blog/qdrant-x-dust-how-vector-search-helps-make-work-work-better-stan-polu-vector-space-talk-010.md
--- draft: false title: "Qdrant x Dust: How Vector Search helps make work work better - Stan Polu | Vector Space Talks" slug: qdrant-x-dust-vector-search short_description: Stanislas shares insights from his experiences at Stripe and founding his own company, Dust, focusing on AI technology's product layer. description: Stanislas Polu shares insights on integrating SaaS platforms into workflows, reflects on his experiences at Stripe and OpenAI, and discusses his company Dust's focus on enhancing enterprise productivity through tailored AI assistants and their recent switch to Qdrant for database management. preview_image: /blog/from_cms/stan-polu-cropped.png date: 2024-01-26T16:22:37.487Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - OpenAI --- > *"We ultimately chose Qdrant due to its open-source nature, strong performance, being written in Rust, comprehensive documentation, and the feeling of control.”*\ -- Stanislas Polu > Stanislas Polu is the Co-Founder and an Engineer at Dust. He had previously sold a company to Stripe and spent 5 years there, seeing them grow from 80 to 3000 people. Then pivoted to research at OpenAI on large language models and mathematical reasoning capabilities. He started Dust 6 months ago to make work work better with LLMs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2YgcSFjP7mKE0YpDGmSiq5?si=6BhlAMveSty4Yt7umPeHjA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/1vKoiFAdorE).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/toIgkJuysQ4?si=uzlzQtOiSL5Kcpk5" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Qdrant-x-Dust-How-Vector-Search-Helps-Make-Work-Work-Better---Stan-Polu--Vector-Space-Talk-010-e2ep9u8/a-aasgqb8" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Curious about the interplay of SaaS platforms and AI in improving productivity? Stanislas Polu dives into the intricacies of enterprise data management, the selective use of SaaS tools, and the role of customized AI assistants in streamlining workflows, all while sharing insights from his experiences at Stripe, OpenAI, and his latest venture, Dust. Here are 5 golden nuggets you'll unearth from tuning in: 1. **The SaaS Universe**: Stan will give you the lowdown on why jumping between different SaaS galaxies like Salesforce and Slack is crucial for your business data's gravitational pull. 2. **API Expansions**: Learn how pushing the boundaries of APIs to include global payment methods can alter the orbit of your company's growth. 3. **A Bot for Every Star**: Discover how creating targeted assistants over general ones can skyrocket team productivity across various use cases. 4. **Behind the Tech Telescope**: Stan discusses the decision-making behind opting for Qdrant for their database cosmos, including what triggered their switch. 5. **Integrating AI Stardust**: They're not just talking about Gen AI; they're actively guiding companies on how to leverage it effectively, placing practicality over flashiness. > Fun Fact: Stanislas Polu co-founded a company that was acquired by Stripe, providing him with the opportunity to work with Greg Brockman at Stripe. 
> ## Show notes: 00:00 Interview about an exciting career in AI technology.\ 06:20 Most workflows involve multiple SaaS applications.\ 09:16 Inquiring about history with Stripe and AI.\ 10:32 Stripe works on expanding worldwide payment methods.\ 14:10 Document insertion supports hierarchy for user experience.\ 18:29 Competing, yet friends in the same field.\ 21:45 Workspace solutions, marketplace, templates, and user feedback.\ 25:24 Avoid giving false hope; be accountable.\ 26:06 Model calls, external API calls, structured data.\ 30:19 Complex knobs, but powerful once understood. Excellent support.\ 33:01 Companies hire someone to support teams and find use cases. ## More Quotes from Stan: *"You really want to narrow the data exactly where that information lies. And that's where we're really relying hard on Qdrant as well. So the kind of indexing capabilities on top of the vector search."*\ -- Stanislas Polu *"I think the benchmarking was really about quality of models, answers in the context of ritual augmented generation. So it's not as much as performance, but obviously, performance matters and that's why we love using Qdrant.”*\ -- Stanislas Polu *"The workspace assistant are like the admin vetted the assistant, and it's kind of pushed to everyone by default.”*\ -- Stanislas Polu ## Transcript: Demetrios: All right, so, my man, I think people are going to want to know all about you. This is a conversation that we have had planned for a while. I'm excited to chat about what you have been up to. You've had quite the run around when it comes to doing some really cool stuff. You spent a lot of time at Stripe in the early days and I imagine you were doing, doing lots of fun ML initiatives and then you started researching on llms at OpenAI. And recently you are doing the entrepreneurial thing and following the trend of starting a company and getting really cool stuff out the door with AI. I think we should just start with background on yourself. What did I miss in that quick introduction? Stanislas Polu: Okay, sounds good. Yeah, perfect. Now you didn't miss too much. Maybe the only point is that starting the current company, Dust, with Gabrielle, my co founder, with whom we started a Company together twelve years or maybe 14 years ago. Stanislas Polu: I'm very bad with years that eventually got acquired to stripe. So that's how we joined Stripe, the both of us, pretty early. Stripe was 80 people when we joined, all the way to 2500 people and got to meet with and walk with Greg Brockman there. And that's how I found my way to OpenAI after stripe when I started interested in myself, in research at OpenAI, even if I'm not a trained researcher. Stanislas Polu: I did research on fate, doing research. On larger good models, reasoning capabilities, and in particular larger models mathematical reasoning capabilities. And from there. 18 months ago, kind of decided to leave OpenAI with the motivation. That is pretty simple. It's that basically the hypothesis is that. It was pre chattivity, but basically those large language models, they're already extremely capable and yet they are completely under deployed compared to the potential they have. And so while research remains a very active subject and it's going to be. A tailwind for the whole ecosystem, there's. Stanislas Polu: Probably a lot of to be done at the product layer, and most of the locks between us and deploying that technology in the world is probably sitting. At the product layer as it is sitting at the research layer. 
And so that's kind of the hypothesis behind dust, is we try to explore at the product layer what it means to interface between models and humans, try to make them happier and augment them. With superpowers in their daily jobs. Demetrios: So you say product layer, can you go into what you mean by that a little bit more? Stanislas Polu: Well, basically we have a motto at dust, which is no gpu before PMF. And so the idea is that while it's extremely exciting to train models. It's extremely exciting to fine tune and align models. There is a ton to be done. Above the model, not only to use. Them as best as possible, but also to really find the interaction interfaces that make sense for humans to leverage that technology. And so we basically don't train any models ourselves today. There's many reasons to that. The first one is as an early startup. It's a fascinating subject and fascinating exercise. As an early startup, it's actually a very big investment to go into training. Models because even if the costs are. Not necessarily big in terms of compute. It'S still research and development and pretty. Hard research and development. It's basically research. We understand pretraining pretty well. We don't understand fine tuning that well. We believe it's a better idea to. Stanislas Polu: Really try to explore the product layer. The image I use generally is that training a model is very sexy and it's exciting, but really you're building a small rock that will get submerged by the waves of bigger models coming in the future. And iterating and positioning yourself at the interface between humans and those models at. The product layer is more akin to. Building a surfboard that you will be. Able to use to surf those same waves. Demetrios: I like that because I am a big surfer and I have a lot. Stanislas Polu: Of fun doing it. Demetrios: Now tell me about are you going after verticals? Are you going after different areas in a market, a certain subset of the market? Stanislas Polu: How do you look at that? Yeah. Basically the idea is to look at productivity within the enterprise. So we're first focusing on internal use. By teams, internal teams of that technology. We're not at all going after external use. So backing products that embed AI or having on projects maybe exposed through our users to actual end customers. So we really focused on the internal use case. So the first thing you want to. Do is obviously if you're interested in. Productivity within enterprise, you definitely want to have the enterprise data, right? Because otherwise there's a ton that can be done with Chat GPT as an example. But there is so much more that can be done when you have context. On the data that comes from the company you're in. That's pretty much kind of the use. Case we're focusing on, and we're making. A bet, which is a crazy bet to answer your question, that there's actually value in being quite horizontal for now. So that comes with a lot of risks because an horizontal product is hard. Stanislas Polu: To read and it's hard to figure. Out how to use it. But at the same time, the reality is that when you are somebody working in a team, even if you spend. A lot of time on one particular. Application, let's say Salesforce for sales, or GitHub for engineers, or intercom for customer support, the reality of most of your workflows do involve many SaaS, meaning that you spend a lot of time in Salesforce, but you also spend a lot of time in slack and notion. 
Maybe, or we all spend as engineers a lot of time in GitHub, but we also use notion and slack a ton or Google Drive or whatnot. Jira. Demetrios: Good old Jira. Everybody loves spending time in Jira. Stanislas Polu: Yeah. And so basically, following our users where. They are requires us to have access to those different SaaS, which requires us. To be somewhat horizontal. We had a bunch of signals that. Kind of confirms that position, and yet. We'Re still very conscious that it's a risky position. As an example, when we are benchmarked against other solutions that are purely verticalized, there is many instances where we actually do a better job because we have. Access to all the data that matters within the company. Demetrios: Now, there is something very difficult when you have access to all of the data, and that is the data leakage issue and the data access. Right. How are you trying to conquer that hard problem? Stanislas Polu: Yeah, so we're basically focusing to continue. Answering your questions through that other question. I think we're focusing on tech companies. That are less than 1000 people. And if you think about most recent tech companies, less than 1000 people. There's been a wave of openness within. Stanislas Polu: Companies in terms of data access, meaning that it's becoming rare to see people actually relying on complex ACL for the internal data. You basically generally have silos. You have the exec silo with remuneration and ladders and whatnot. And this one is definitely not the. Kind of data we're touching. And then for the rest, you generally have a lot of data that is. Accessible by every employee within your company. So that's not a perfect answer, but that's really kind of the approach we're taking today. We give a lot of control on. Stanislas Polu: Which data comes into dust, but once. It'S into dust, and that control is pretty granular, meaning that you can select. Specific slack channels, or you can select. Specific notion pages, or you can select specific Google Drive subfolders. But once you decide to put it in dust, every dust user has access to this. And so we're really taking the silo. Vision of the granular ACL story. Obviously, if we were to go higher enterprise, that would become a very big issue, because I think larger are the enterprise, the more they rely on complex ackles. Demetrios: And I have to ask about your history with stripe. Have you been focusing on specific financial pieces to this? First thing that comes to mind is what about all those e commerce companies that are living and breathing with stripe? Feels like they've got all kinds of use cases that they could leverage AI for, whether it is their supply chain or just getting better numbers, or getting answers that they have across all this disparate data. Have you looked at that at all? Is that informing any of your decisions that you're making these days? Stanislas Polu: No, not quite. Not really. At stripe, when we joined, it was. Very early, it was the quintessential curlb onechargers number 42. 42, 42. And that's pretty much what stripe was almost, I'm exaggerating, but not too much. So what I've been focusing at stripe. Was really driven by my and our. Perspective as european funders joining a quite. Us centric company, which is, no, there. Stanislas Polu: Is not credit card all over the world. Yes, there is also payment methods. And so most of my time spent at stripe was spent on trying to expand the API to not a couple us payment methods, but a variety of worldwide payment methods. 
So that requires kind of a change of paradigm from an API design, and that's where I spent most of my cycles What I want to try. Demetrios: Okay, the next question that I had is you talked about how benchmarking with the horizontal solution, surprisingly, has been more effective in certain use cases. I'm guessing that's why you got a little bit of love for Qdrant and what we're doing here. Stanislas Polu: Yeah I think the benchmarking was really about quality of models, answers in the. Context of ritual augmented generation. So it's not as much as performance, but obviously performance matters, and that's why we love using Qdrants. But I think the main idea of. Stanislas Polu: What I mentioned is that it's interesting because today the retrieval is noisy, because the embedders are not perfect, which is an interesting point. Sorry, I'm double clicking, but I'll come back. The embedded are really not perfect. Are really not perfect. So that's interesting. When Qdrant release kind of optimization for storage of vectors, they come with obviously warnings that you may have a loss. Of precision because of the compression, et cetera, et cetera. And that's funny, like in all kind of retrieval and mental generation world, it really doesn't matter. We take all the performance we can because the loss of precision coming from compression of those vectors at the vector DB level are completely negligible compared to. The holon fuckness of the embedders in. Stanislas Polu: Terms of capability to correctly embed text, because they're extremely powerful, but they're far from being perfect. And so that's an interesting thing where you can really go as far as you want in terms of performance, because your error is dominated completely by the. Quality of your embeddings. Going back up. I think what's interesting is that the. Retrieval is noisy, mostly because of the embedders, and the models are not perfect. And so the reality is that more. Data in a rack context is not. Necessarily better data because the retrievals become noisy. The model kind of gets confused and it starts hallucinating stuff, et cetera. And so the right trade off is that you want to access to as. Much data as possible, but you want To give the ability to our users. To select very narrowly the data required for a given task. Stanislas Polu: And so that's kind of what our product does, is the ability to create assistants that are specialized to a given task. And most of the specification of an assistant is obviously a prompt, but also. Saying, oh, I'm working on helping sales find interesting next leads. And you really want to narrow the data exactly where that information lies. And that's where there, we're really relying. Hard on Qdrants as well. So the kind of indexing capabilities on. Top of the vector search, where whenever. Stanislas Polu: We insert the documents, we kind of try to insert an array of parents that reproduces the hierarchy of whatever that document is coming from, which lets us create a very nice user experience where when you create an assistant, you can say, oh, I'm going down two levels within notion, and I select that page and all of those children will come together. And that's just one string in our specification, because then rely on those parents that have been injected in Qdrant, and then the Qdrant search really works well with a simple query like this thing has to be in parents. Stanislas Polu: And you filter by that and it. 
Demetrios: Feels like there's two levels to the evaluation that you can be doing with rags. One is the stuff you're retrieving and evaluating the retrieval, and then the other is the output that you're giving to the end user. How are you attacking both of those evaluation questions? Stanislas Polu: Yeah, so the truth in whole transparency. Is that we don't, we're just too early. Demetrios: Well, I'm glad you're honest with us, Alicia. Stanislas Polu: This is great, we should, but the rate is that we have so many other product priorities that I think evaluating the quality of retrievals, evaluating the quality. Of retrieval, augmented generation. Good sense but good sense is hard to define, because good sense with three. Years doing research in that domain is probably better sense. Better good sense than good sense with no clue on the domain. But basically with good sense I think. You can get very far and then. You'Ll be optimizing at the margin. And the reality is that if you. Get far enough with good sense, and that everything seems to work reasonably well, then your priority is not necessarily on pushing 5% performance, whatever is the metric. Stanislas Polu: But more like I have a million other products questions to solve. That is the kind of ten people answer to your question. And as we grow, we'll probably make a priority, of course, of benchmarking that better. In terms of benchmarking that better. Extremely interesting question as well, because the. Embedding benchmarks are what they are, and. I think they are not necessarily always a good representation of the use case you'll have in your products. And so that's something you want to be cautious of. And. It'S quite hard to benchmark your use case. The kind of solutions you have and the ones that seems more plausible, whether it's spending like full years on that. Stanislas Polu: Is probably to. Evaluate the retrieval with another model, right? It's like you take five different embedding models, you record a bunch of questions. That comes from your product, you use your product data and you run those retrievals against those five different embedders, and. Then you ask GPT four to raise. That would be something that seems sensible and probably will get you another step forward and is not perfect, but it's. Probably really strong enough to go quite far. Stanislas Polu: And then the second question is evaluating. The end to end pipeline, which includes. Both the retrieval and the generation. And to be honest, again, it's a. Known question today because GPT four is. Just so much above all the models. Stanislas Polu: That there's no point evaluating them. If you accept using GPD four, just use GP four. If you want to use open source models, then the questions is more important. But if you are okay with using GPD four for many reasons, then there. Is no questions at this stage. Demetrios: So my next question there, because sounds like you got a little bit of a french accent, you're somewhere in Europe. Are you in France? Stanislas Polu: Yes, we're based in France and billion team from Paris. Demetrios: So I was wondering if you were going to lean more towards the history of you working at OpenAI or the fraternity from your french group and go for your amiz in. Stanislas Polu: Mean, we are absolute BFF with Mistral. The fun story is that Guillaume Lamp is a friend, because we were working on exactly the same subjects while I was at OpenAI and he was at Meta. So we were basically frenemies. 
We're competing against the same metrics and same goals, but grew a friendship out of that. Our platform is quite model agnostic, so. We support Mistral there. Then we do decide to set the defaults for our users, and we obviously set the defaults to GP four today. I think it's the question of where. Today there's no question, but when the. Time comes where open source or non open source, it's not the question, but where Ozo models kind of start catching. Up with GPT four, that's going to. Stanislas Polu: Be an interesting product question, and hopefully. Mistral will get there. I think that's definitely their goal, to be within reach of GPT four this year. And so that's going to be extremely exciting. Yeah. Demetrios: So then you mentioned how you have a lot of other product considerations that you're looking at before you even think about evaluation. What are some of the other considerations? Stanislas Polu: Yeah, so as I mentioned a bit. The main hypothesis is we're going to do company productivity or team productivity. We need the company data. That was kind of hypothesis number zero. It's not even an hypothesis, almost an axiom. And then our first product was a conversational assistance, like chat. GPT, that is general, and has access. To everything, and realized that didn't work. Quite well enough on a bunch of use cases, was kind of good on some use cases, but not great on many others. And so that's where we made that. First strong product, the hypothesis, which is. So we want to have many assistants. Not one assistant, but many assistants, targeted to specific tasks. And that's what we've been exploring since the end of the summer. And that hypothesis has been very strongly confirmed with our users. And so an example of issue that. We have is, obviously, you want to. Activate your product, so you want to make sure that people are creating assistance. So one thing that is much more important than the quality of rag is. The ability of users to create personal assistance. Before, it was only workspace assistance, and so only the admin or the builder could build it. And now we've basically, as an example, worked on having anybody can create the assistant. The assistant is scoped to themselves, they can publish it afterwards, et cetera. That's the kind of product questions that. Are, to be honest, more important than rack rarity, at least for us. Demetrios: All right, real quick, publish it for a greater user base or publish it for the internal company to be able to. Stanislas Polu: Yeah, within the workspace. Okay. Demetrios: It's not like, oh, I could publish this for. Stanislas Polu: We'Re not going there yet. And there's plenty to do internally to each workspace. Before going there, though it's an interesting case because that's basically another big problem, is you have an horizontal platform, you can create an assistance, you're not an. Expert and you're like, okay, what should I do? And so that's the kind of white blank page issue. Stanislas Polu: And so there having templates, inspiration, you can sit that within workspace, but you also want to have solutions for the new workspace that gets created. And maybe a marketplace is a good idea. Or having templates, et cetera, are also product questions that are much more important than the rack performance. And finally, the users where dust works really well, one example is Alan in. France, there are 600, and dust is. Running there pretty healthily, and they've created. More than 200 assistants. 
And so another big product question is like, when you get traction within a company, people start getting flooded with assistance. And so how do they discover them? How did they, and do they know which one to use, et cetera? So that's kind of the kind of. Many examples of product questions that are very first order compared to other things. Demetrios: Because out of these 200 assistants, are you seeing a lot of people creating the same assistance? Stanislas Polu: That's a good question. So far it's been kind of driven by somebody internally that was responsible for trying to push gen AI within the company. And so I think there's not that. Much redundancy, which is interesting, but I. Think there's a long tail of stuff that are mostly explorations, but from our perspective, it's very hard to distinguish the two. Obviously, usage is a very strong signal. But yeah, displaying assistance by usage, pushing. The right assistance to the right user. This problem seems completely trivial compared to building an LLM, obviously. But still, when you add the product layer requires a ton of work, and as a startup, that's where a lot of our resources go, and I think. It'S the right thing to do. Demetrios: Yeah, I wonder if, and you probably have thought about this, but if it's almost like you can tag it with this product, or this assistant is in beta or alpha or this is in production, you can trust that this one is stable, that kind of thing. Stanislas Polu: Yeah. So we have the concept of shared. Assistant and the concept of workspace assistant. The workspace assistant are like the admin vetted the assistant, and it's kind of pushed to everyone by default. And then the published assistant is like, there's a gallery of assistant that you can visit, and there, the strongest signal is probably the usage metric. Right? Demetrios: Yeah. So when you're talking about assistance, just so that I'm clear, it's not autonomous agents, is it? Stanislas Polu: No. Stanislas Polu: Yeah. So it's a great question. We are really focusing on the one. Step, trying to solve very nicely the one step thing. I have one granular task to achieve. And I can get accelerated on that. Task and maybe save a few minutes or maybe save a few tens of minutes on one specific thing, because the identity version of that is obviously the future. But the reality is that current models, even GB four, are not that great at kind of chaining decisions of tool use in a way that is sustainable. Beyond the demo effect. So while we are very hopeful for the future, it's not our core focus, because I think there's a lot of risk that it creates more deception than anything else. But it's obviously something that we are. Targeting in the future as models get better. Demetrios: Yeah. And you don't want to burn people by making them think something's possible. And then they go and check up on it and they leave it in the agent's hands, and then next thing they know they're getting fired because they don't actually do the work that they said they were going to do. Stanislas Polu: Yeah. One thing that we don't do today. Is we have kind of different ways. To bring data into the assistant before it creates generation. And we're expanding that. One of the domain use case is the one based on Qdrant, which is. The kind of retrieval one. We also have kind of a workflow system where you can create an app. An LLM app, where you can make. Stanislas Polu: Multiple calls to a model, you can call external APIs and search. 
And another thing we're digging into our structured data use case, which this time doesn't use Qdrants, which the idea is that semantic search is great, but it's really atrociously bad for quantitative questions. Basically, the typical use case is you. Have a big CSV somewhere and it gets chunked and then you do retrieval. And you get kind of disordered partial. Chunks, all of that. And on top of that, the moles. Are really bad at counting stuff. And so you really get bullshit, you. Demetrios: Know better than anybody. Stanislas Polu: Yeah, exactly. Past life. And so garbage in, garbage out. Basically, we're looking into being able, whenever the data is structured, to actually store. It in a structured way and as needed. Just in time, generate an in memory SQL database so that the model can generate a SQL query to that data and get kind of a SQL. Answer and as a consequence hopefully be able to answer quantitative questions better. And finally, obviously the next step also is as we integrated with those platform notion, Google Drive, slack, et cetera, basically. There'S some actions that we can take there. We're not going to take the actions, but I think it's interesting to have. The model prepare an action, meaning that here is the email I prepared, send. It or iterate with me on it, or here is the slack message I prepare, or here is the edit to the notion doc that I prepared. Stanislas Polu: This is still not agentic, it's closer. To taking action, but we definitely want. To keep the human in the loop. But obviously some stuff that are on our roadmap. And another thing that we don't support, which is one type of action would. Be the first we will be working on is obviously code interpretation, which is I think is one of the things that all users ask because they use. It on Chat GPT. And so we'll be looking into that as well. Demetrios: What made you choose Qdrant? Stanislas Polu: So the decision was made, if I. Remember correctly, something like February or March last year. And so the alternatives I looked into. Were pine cone wavy eight, some click owls because Chroma was using click owls at the time. But Chroma was. 2000 lines of code. At the time as well. And so I was like, oh, Chroma, we're part of AI grant. And Chroma is as an example also part of AI grant. So I was like, oh well, let's look at Chroma. And however, what I'm describing is last. Year, but they were very early. And so it was definitely not something. That seemed like to make sense for us. So at the end it was between pine cone wavev eight and Qdrant wave v eight. You look at the doc, you're like, yeah, not possible. And then finally it's Qdrant and Pinecone. And I think we really appreciated obviously the open source nature of Qdrants.From. Playing with it, the very strong performance, the fact that it's written in rust, the sanity of the documentation, and basically the feeling that because it's an open source, we're using the osted Qdrant cloud solution. But it's not a question of paying. Or not paying, it's more a question. Of being able to feel like you have more control. And at the time, I think it was the moment where Pinecon had their massive fuck up, where they erased gazillion database from their users and so we've been on Qdrants and I think it's. Been a two step process, really. Stanislas Polu: It's very smooth to start, but also Qdrants at this stage comes with a. Lot of knobs to turns. And so as you start scaling, you at some point reach a point where. You need to start tweaking the knobs. 
Which I think is great because the knobs, there's a lot of knobs, so they are hard to understand, but once you understand them, you see the power of them. And the Qdrant team has been excellent there supporting us. And so I think we've reached that first level of scale where you have. To tweak the nodes, and we've reached. The second level of scale where we. Have to have multiple nodes. But so far it's been extremely smooth. And I think we've been able to. Do with Qdrant some stuff that really are possible only because of the very good performance of the database. As an example, we're not using your clustered setup. We have n number of independent nodes. And as we scale, we kind of. Reshuffle which users go on which nodes. As we need, trying to keep our largest users and most paying users on. Very well identified nodes. We have a kind of a garbage. Node for all the free users, as an example, migrating even a very big collection from one node. One capability that we build is say, oh, I have that collection over there. It's pretty big. I'm going to initiate on another node. I'm going to set up shadow writing on both, and I'm going to migrate live the data. And that has been incredibly easy to do with Qdrant because crawling is fast, writing is fucking fast. And so even a pretty large collection. You can migrate it in a minute. Stanislas Polu: And so it becomes really within the realm of being able to administrate your cluster with that in mind, which I. Think would have probably not been possible with the different systems. Demetrios: So it feels like when you are helping companies build out their assistants, are you going in there and giving them ideas on what they can do? Stanislas Polu: Yeah, we are at a stage where obviously we have to do that because. I think the product basically starts to. Have strong legs, but I think it's still very early and so there's still a lot to do on activation, as an example. And so we are in a mode today where we do what doesn't scale. Basically, and we do spend some time. Stanislas Polu: With companies, obviously, because there's nowhere around that. But what we've seen also is that the users where it works the best and being on dust or anything else. That is relative to having people adopt gen AI. Within the company are companies where they. Actually allocate resources to the problem, meaning that the companies where it works best. Are the companies where there's somebody. Their role is really to go around the company, find, use cases, support the teams, et cetera. And in the case of companies using dust, this is kind of type of interface that is perfect for us because we provide them full support and we help them build whatever they think is. Valuable for their team. Demetrios: Are you also having to be the bearer of bad news and tell them like, yeah, I know you saw that demo on Twitter, but that is not actually possible or reliably possible? Stanislas Polu: Yeah, that's an interesting question. That's a good question. Not that much, because I think one of the big learning is that you take any company, even a pretty techy. Company, pretty young company, and the reality. Is that most of the people, they're not necessarily in the ecosystem, they just want shit done. And so they're really glad to have some shit being done by a computer. But they don't really necessarily say, oh, I want the latest shiniest thingy that. I saw on Twitter. So we've been safe from that so far. Demetrios: Excellent. Well, man, this has been incredible. 
I really appreciate you coming on here and doing this. Thanks so much. And if anyone wants to check out dust, I encourage that they do. Stanislas Polu: It's dust. Demetrios: It's a bit of an interesting website. What is it? Stanislas Polu: Dust TT. Demetrios: That's it. That's what I was missing, dust. There you go. So if anybody wants to look into it, I encourage them to. And thanks so much for coming on here. Stanislas Polu: Yeah. Stanislas Polu: And Qdrant is the shit. Demetrios: There we go. Awesome, dude. Well, this has been great. Stanislas Polu: Yeah, thanks, Vintu. Have a good one.
qdrant-landing/content/blog/qdrant_and_jina_integration.md
---
draft: false
preview_image: /blog/from_cms/docarray.png
sitemapExclude: true
title: "Qdrant and Jina integration: storage backend support for DocArray"
slug: qdrant-and-jina-integration
short_description: "One more way to use Qdrant: Jina's DocArray now supports Qdrant as a storage backend."
description: We are happy to announce that Jina.AI integrates the Qdrant engine as a storage backend for their DocArray solution.
date: 2022-03-15T15:00:00+03:00
author: Alyona Kavyerina
featured: false
author_link: https://medium.com/@alyona.kavyerina
tags:
- jina integration
- docarray
categories:
- News
---

We are happy to announce that [Jina.AI](https://jina.ai/) integrates the Qdrant engine as a storage backend for their [DocArray](https://docarray.jina.ai/) solution. Now you can experience the convenience of a Pythonic API and Rust performance in a single workflow.

The DocArray library defines a structure for unstructured data and simplifies processing a collection of documents, including audio, video, text, and other data types. The Qdrant engine adds scalable vector search and storage on top of it.

Read more about the integration at this [link](/documentation/install/#docarray).
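As a rough sketch of what the integration looks like in code, you can point a `DocumentArray` at a running Qdrant instance as shown below. The config keys (`collection_name`, `host`, `port`, `n_dim`) are taken from the DocArray documentation of that release and should be treated as assumptions; they may differ in newer versions.

```python
import numpy as np
from docarray import Document, DocumentArray

# Use Qdrant as the storage backend for a DocumentArray.
da = DocumentArray(
    storage="qdrant",
    config={
        "collection_name": "docs",
        "host": "localhost",
        "port": 6333,
        "n_dim": 128,
    },
)

# Store a few documents with random embeddings.
da.extend(Document(embedding=np.random.rand(128).astype(np.float32)) for _ in range(100))

# Nearest-neighbour search over the stored embeddings, served by Qdrant.
results = da.find(np.random.rand(128).astype(np.float32), limit=5)
print(results)
```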
qdrant-landing/content/blog/qsoc24-interns-announcement.md
--- title: "QSoC 2024: Announcing Our Interns!" draft: false slug: qsoc24-interns-announcement # Change this slug to your page slug if needed short_description: We are pleased to announce the selection of interns for the inaugural Qdrant Summer of Code (QSoC) program. # Change this description: We are pleased to announce the selection of interns for the inaugural Qdrant Summer of Code (QSoC) program. # Change this preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Change this social_preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Optional image used for link previews title_preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-05-08T16:44:22-03:00 author: Sabrina Aquino # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - QSoC - Qdrant Summer of Code - Google Summer of Code - vector search --- We are excited to announce the interns selected for the inaugural Qdrant Summer of Code (QSoC) program! After receiving many impressive applications, we have chosen two talented individuals to work on the following projects: **[Jishan Bhattacharya](https://www.linkedin.com/in/j16n/): WASM-based Dimension Reduction Visualization** Jishan will be implementing a dimension reduction algorithm in Rust, compiling it to WebAssembly (WASM), and integrating it with the Qdrant Web UI. This project aims to provide a more efficient and smoother visualization experience, enabling the handling of more data points and higher dimensions efficiently. **[Celine Hoang](https://www.linkedin.com/in/celine-h-hoang/): ONNX Cross Encoders in Python** Celine Hoang will focus on porting advanced ranking models—specifically Sentence Transformers, ColBERT, and BGE—to the ONNX (Open Neural Network Exchange) format. This project will enhance Qdrant's model support, making it more versatile and efficient in handling complex ranking tasks that are critical for applications such as recommendation engines and search functionalities. We look forward to working with Jishan and Celine over the coming months and are excited to see their contributions to the Qdrant project. Stay tuned for more updates on the QSoC program and the progress of these projects!
qdrant-landing/content/blog/semantic-cache-ai-data-retrieval.md
---
title: "Semantic Cache: Accelerating AI with Lightning-Fast Data Retrieval"
draft: false
slug:
short_description: "Semantic Cache for Best Results and Optimization."
description: "Semantic cache is reshaping AI applications by enabling rapid data retrieval. Discover how its implementation benefits your RAG setup."
preview_image: /blog/semantic-cache-ai-data-retrieval/social_preview.png
social_preview_image: /blog/semantic-cache-ai-data-retrieval/social_preview.png
date: 2024-05-07T00:00:00-08:00
author: Daniel Romero, David Myriel
featured: false
tags:
- vector search
- vector database
- semantic cache
- gpt cache
- semantic cache llm
- AI applications
- data retrieval
- efficient data storage
---

## What is Semantic Cache?

**Semantic cache** is a method of retrieval optimization, where similar queries instantly retrieve the same appropriate response from a knowledge base.

Semantic cache differs from traditional caching methods. In computing, **cache** refers to high-speed memory that efficiently stores frequently accessed data. In the context of vector databases, a **semantic cache** improves AI application performance by storing previously retrieved results along with the conditions under which they were computed. This allows the application to reuse those results when the same or similar conditions occur again, rather than recomputing them from scratch.

> The term **"semantic"** implies that the cache takes into account the meaning or semantics of the data or computation being cached, rather than just its syntactic representation. This can lead to more efficient caching strategies that exploit the structure or relationships within the data or computation.

![semantic-cache-question](/blog/semantic-cache-ai-data-retrieval/semantic-cache-question.png)

Traditional caches operate on an exact match basis, while semantic caches search for the meaning of the key rather than an exact match. For example, **"What is the capital of Brazil?"** and **"Can you tell me the capital of Brazil?"** are semantically equivalent, but not exact matches. A semantic cache recognizes such semantic equivalence and provides the correct result.

In this blog and video, we will walk you through how to use Qdrant to implement a basic semantic cache system. You can also try the [notebook example](https://github.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) for this implementation.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb)

## Semantic Cache in RAG: the Key-Value Mechanism

Semantic cache is increasingly used in Retrieval-Augmented Generation (RAG) applications. In RAG, when a user asks a question, we embed it and search our vector database, either by using keyword, semantic, or hybrid search methods. The matched context is then passed to a Large Language Model (LLM) along with the prompt and user question for response generation. Qdrant is recommended for setting up semantic cache, as it evaluates the cached responses semantically rather than by exact match.

When semantic cache is implemented, we store common questions and their corresponding answers in a key-value cache. This way, when a user asks a question, we can retrieve the response from the cache if it already exists.

**Diagram:** Semantic cache improves RAG by returning stored answers directly to the user. **Follow along with the gif** and see how semantic cache stores and retrieves answers.
![Alt Text](/blog/semantic-cache-ai-data-retrieval/semantic-cache.gif) When using a key-value cache, it's important to consider that slight variations in question wording can lead to different hash values. The two questions convey the same query but differ in wording. A naive cache search might fail due to distinct hashed versions of the questions. Implementing a more nuanced approach is necessary to accommodate phrasing variations and ensure accurate responses. To address this challenge, a semantic cache can be employed instead of relying solely on exact matches. This entails storing questions, answers, and their embeddings in a key-value structure. When a user poses a question, a semantic search by Qdrant is conducted across all cached questions to identify the most similar one. If the similarity score surpasses a predefined threshold, the system assumes equivalence between the user's question and the matched one, providing the corresponding answer accordingly. ## Benefits of Semantic Cache for AI Applications Semantic cache contributes to scalability in AI applications by making it simpler to retrieve common queries from vast datasets. The retrieval process can be computationally intensive and implementing a cache component can reduce the load. For instance, if hundreds of users repeat the same question, the system can retrieve the precomputed answer from the cache rather than re-executing the entire process. This cache stores questions as keys and their corresponding answers as values, providing an efficient means to handle repeated queries. > There are **potential cost savings** associated with utilizing semantic cache. Using a semantic cache eliminates the need for repeated searches and generation processes for similar or duplicate questions, thus saving time and LLM API resources, especially when employing costly language model calls like OpenAI's. ## When to Use Semantic Cache? For applications like question-answering systems where facts are retrieved from documents, caching is beneficial due to the consistent nature of the queries. *However, for text generation tasks requiring varied responses, caching may not be ideal as it returns previous responses, potentially limiting variation.* Thus, the decision to use caching depends on the specific use case. Using a cache might not be ideal for applications where diverse responses are desired across multiple queries. However, in question-answering systems, caching is advantageous since variations are insignificant. It serves as an effective performance optimization tool for chatbots by storing frequently accessed data. One strategy involves creating ad-hoc patches for chatbot dialogues, where commonly asked questions are pre-mapped to prepared responses in the cache. This allows the chatbot to swiftly retrieve and deliver responses without relying on a Language Model (LLM) for each query. ## Implement Semantic Cache: A Step-by-Step Guide The first part of this video explains how caching works. In the second part, you can follow along with the code with our [notebook example](https://github.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb). 
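For orientation before the video, here is a minimal sketch of the lookup flow described above, using the Python client. The `embed` and `generate` callables, the collection name, the 384-dimensional vector size, and the 0.9 similarity threshold are all assumptions you would tune for your own setup; the notebook linked below contains the full walkthrough.

```python
import uuid

from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# One-time setup: a collection that stores question embeddings plus payloads.
client.create_collection(
    collection_name="semantic_cache",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

def cached_answer(question: str, embed, generate, threshold: float = 0.9) -> str:
    query_vector = embed(question)

    # Look for a semantically similar question that was already answered.
    hits = client.search(
        collection_name="semantic_cache",
        query_vector=query_vector,
        limit=1,
        score_threshold=threshold,
    )
    if hits:
        return hits[0].payload["answer"]

    # Cache miss: run the expensive RAG/LLM pipeline, then store the result.
    answer = generate(question)
    client.upsert(
        collection_name="semantic_cache",
        points=[
            models.PointStruct(
                id=str(uuid.uuid4()),
                vector=query_vector,
                payload={"question": question, "answer": answer},
            )
        ],
    )
    return answer
```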
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) <p align="center"><iframe src="https://www.youtube.com/embed/H53L_yHs9jE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> ## Embrace the Future of AI Data Retrieval [Qdrant](https://github.com/qdrant/qdrant) offers the most flexible way to implement vector search for your RAG and AI applications. You can test out semantic cache on your free Qdrant Cloud instance today! Simply sign up for or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and follow our [documentation](/documentation/cloud/). You can also deploy Qdrant locally and manage via our UI. To do this, check our [Hybrid Cloud](/blog/hybrid-cloud/)! [![hybrid-cloud-get-started](/blog/hybrid-cloud-launch-partners/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login)
qdrant-landing/content/blog/series-A-funding-round.md
--- draft: false title: "Announcing Qdrant's $28M Series A Funding Round" slug: series-A-funding-round short_description: description: preview_image: /blog/series-A-funding-round/series-A.png social_preview_image: /blog/series-A-funding-round/series-A.png date: 2024-01-23T09:00:00.000Z author: Andre Zayarni, CEO & Co-Founder featured: true tags: - Funding - Series-A - Announcement --- Today, we are excited to announce our $28M Series A funding round, which is led by Spark Capital with participation from our existing investors Unusual Ventures and 42CAP. We have seen incredible user growth and support from our open-source community in the past two years - recently exceeding 5M downloads. This is a testament to our mission to build the most efficient, scalable, high-performance vector database on the market. We are excited to further accelerate this trajectory with our new partner and investor, Spark Capital, and the continued support of Unusual Ventures and 42CAP. This partnership uniquely positions us to empower enterprises with cutting edge vector search technology to build truly differentiating, next-gen AI applications at scale. ## The Emergence and Relevance of Vector Databases A paradigm shift is underway in the field of data management and information retrieval. Today, our world is increasingly dominated by complex, unstructured data like images, audio, video, and text. Traditional ways of retrieving data based on keyword matching are no longer sufficient. Vector databases are designed to handle complex high-dimensional data, unlocking the foundation for pivotal AI applications. They represent a new frontier in data management, in which complexity is not a barrier but an opportunity for innovation. The rise of generative AI in the last few years has shone a spotlight on vector databases, prized for their ability to power retrieval-augmented generation (RAG) applications. What we are seeing now, both within AI and beyond, is only the beginning of the opportunity for vector databases. Within our Qdrant community, we already see a multitude of unique solutions and applications leveraging our technology for multimodal search, anomaly detection, recommendation systems, complex data analysis, and more. ## What sets Qdrant apart? To meet the needs of the next generation of AI applications, Qdrant has always been built with four keys in mind: efficiency, scalability, performance, and flexibility. Our goal is to give our users unmatched speed and reliability, even when they are building massive-scale AI applications requiring the handling of billions of vectors. We did so by building Qdrant on Rust for performance, memory safety, and scale. Additionally, [our custom HNSW search algorithm](/articles/filtrable-hnsw/) and unique [filtering](/documentation/concepts/filtering/) capabilities consistently lead to [highest RPS](/benchmarks/), minimal latency, and high control with accuracy when running large-scale, high-dimensional operations. Beyond performance, we provide our users with the most flexibility in cost savings and deployment options. A combination of cutting-edge efficiency features, like [built-in compression options](/documentation/guides/quantization/), [multitenancy](/documentation/guides/multiple-partitions/) and the ability to [offload data to disk](/documentation/concepts/storage/), dramatically reduce memory consumption. 
Committed to privacy and security, crucial for modern AI applications, Qdrant now also offers on-premise and hybrid SaaS solutions, meeting diverse enterprise needs in a data-sensitive world. This approach, coupled with our open-source foundation, builds trust and reliability with engineers and developers, making Qdrant a game-changer in the vector database domain. ## What's next? We are incredibly excited about our next chapter to power the new generation of enterprise-grade AI applications. The support of our open-source community has led us to this stage and we’re committed to continuing to build the most advanced vector database on the market, but ultimately it’s up to you to decide! We invite you to [test out](https://cloud.qdrant.io/) Qdrant for your AI applications today.
qdrant-landing/content/blog/soc2-type2-report.md
--- title: "Qdrant Attains SOC 2 Type II Audit Report" draft: false slug: qdrant-soc2-type2-audit # Change this slug to your page slug if needed short_description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, Processing Integrity, Confidentiality, and Privacy. description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, and Confidentiality. preview_image: /blog/soc2-type2-report/soc2-preview.jpeg # social_preview_image: /blog/soc2-type2-report/soc2-preview.jpeg date: 2024-05-23T20:26:20-03:00 author: Sabrina Aquino # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - soc2 - audit - security - confidenciality - data privacy - soc2 type 2 --- At Qdrant, we are happy to announce the successful completion our the SOC 2 Type II Audit. This achievement underscores our unwavering commitment to upholding the highest standards of security, availability, and confidentiality for our services and our customers’ data. ## SOC 2 Type II: What Is It? SOC 2 Type II certification is an examination of an organization's controls in reference to the American Institute of Certified Public Accountants [(AICPA) Trust Services criteria](https://www.aicpa-cima.com/content/dam/aicpa/interestareas/frc/assuranceadvisoryservices/downloadabledocuments/trust-services-criteria.pdf). It evaluates not only our written policies but also their practical implementation, ensuring alignment between our stated objectives and operational practices. Unlike Type I, which is a snapshot in time, Type II verifies over several months that the company has lived up to those controls. The report represents thorough auditing of our security procedures throughout this examination period: January 1, 2024 to April 7, 2024. ## Key Audit Findings The audit ensured with no exceptions noted the effectiveness of our systems and controls on the following Trust Service Criteria: * Security * Confidentiality * Availability These certifications are available today and automatically apply to your existing workloads. The full SOC 2 Type II report is available to customers and stakeholders upon request through the [Trust Center](https://app.drata.com/trust/9cbbb75b-0c38-11ee-865f-029d78a187d9). ## Future Compliance Going forward, Qdrant will maintain SOC 2 Type II compliance by conducting continuous, annual audits to ensure our security practices remain aligned with industry standards and evolving risks. Recognizing the critical importance of data security and the trust our clients place in us, achieving SOC 2 Type II compliance underscores our ongoing commitment to prioritize data protection with the utmost integrity and reliability. ## About Qdrant Qdrant is a vector database designed to handle large-scale, high-dimensional data efficiently. It allows for fast and accurate similarity searches in complex datasets. Qdrant strives to achieve seamless and scalable vector search capabilities for various applications. For more information about Qdrant and our security practices, please visit our [website](http://qdrant.tech) or [reach out to our team directly](https://qdrant.tech/contact-us/).
qdrant-landing/content/blog/storing-multiple-vectors-per-object-in-qdrant.md
---
draft: false
title: Storing multiple vectors per object in Qdrant
slug: storing-multiple-vectors-per-object-in-qdrant
short_description: Qdrant's approach to storing multiple vectors per object, unraveling new possibilities in data representation and retrieval.
description: Discover how Qdrant continues to push the boundaries of data indexing, providing insights into the practical applications and benefits of this novel vector storage strategy.
preview_image: /blog/from_cms/andrey.vasnetsov_a_space_station_with_multiple_attached_modules_853a27c7-05c4-45d2-aebc-700a6d1e79d0.png
date: 2022-10-05T10:05:43.329Z
author: Kacper Łukawski
featured: false
tags:
- Data Science
- Neural Networks
- Database
- Search
- Similarity Search
---

In a real-world scenario, a single object might be described in several different ways. If you run an e-commerce business, then your items will typically have a name, a longer textual description and also a bunch of photos. While cooking, you may care about the list of ingredients and the description of the taste, but also the recipe and the way your meal is going to look. Up till now, if you wanted to enable semantic search with multiple vectors per object, Qdrant would require you to create separate collections for each vector type, even though they could share some other attributes in a payload. However, since Qdrant 0.10 you are able to store all those vectors together in the same collection and share a single copy of the payload!

Running the new version of Qdrant is as simple as it always was. By running the following command, you are able to set up a single instance that will also expose the HTTP API:

```
docker run -p 6333:6333 qdrant/qdrant:v0.10.1
```

## Creating a collection

Adding new functionalities typically requires making some changes to the interfaces, so no surprise we had to do it to enable the multiple vectors support. Currently, if you want to create a collection, you need to define the configuration of all the vectors you want to store for each object. Each vector type has its own name and the distance function used to measure how far the points are.

```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import VectorParams, Distance

client = QdrantClient()
client.recreate_collection(
    collection_name="multiple_vectors",
    vectors_config={
        "title": VectorParams(
            size=100,
            distance=Distance.EUCLID,
        ),
        "image": VectorParams(
            size=786,
            distance=Distance.COSINE,
        ),
    }
)
```

If you want to keep a single vector per collection, you can still do that by passing the vector configuration directly, without a name.
```python
client.recreate_collection(
    collection_name="single_vector",
    vectors_config=VectorParams(
        size=100,
        distance=Distance.COSINE,
    )
)
```

All the search-related operations have slightly changed their interfaces as well, so you can choose which vector to use in a specific request. However, it might be easier to see all the changes by following an end-to-end Qdrant usage on a real-world example.

## Building service with multiple embeddings

Quite a common approach to building search engines is to combine semantic textual capabilities with image search as well. For that purpose, we need a dataset containing both images and their textual descriptions. There are several datasets available, with [MS_COCO_2017_URL_TEXT](https://huggingface.co/datasets/ChristophSchuhmann/MS_COCO_2017_URL_TEXT) being probably the simplest one. And because it’s available on HuggingFace, we can easily use it with their [datasets](https://huggingface.co/docs/datasets/index) library.

```python
from datasets import load_dataset

dataset = load_dataset("ChristophSchuhmann/MS_COCO_2017_URL_TEXT")
```

Right now, we have a dataset with a structure containing the image URL and its textual description in English. For simplicity, we can convert it to a DataFrame, as this structure might be quite convenient for future processing.

```python
import pandas as pd

dataset_df = pd.DataFrame(dataset["train"])
```

The dataset consists of two columns: *TEXT* and *URL*. Thus, each data sample is described by two separate pieces of information and each of them has to be encoded with a different model.

## Processing the data with pretrained models

Thanks to [embetter](https://github.com/koaning/embetter), we can reuse some existing pretrained models and use a convenient scikit-learn API, including pipelines. This library also provides some utilities to load the images, but it only supports the local filesystem, so we need to create our own class that will download the file, given its URL.

```python
from pathlib import Path
from urllib.request import urlretrieve
from embetter.base import EmbetterBase


class DownloadFile(EmbetterBase):

    def __init__(self, out_dir: Path):
        self.out_dir = out_dir

    def transform(self, X, y=None):
        # Download each URL into the output directory and return the local paths
        output_paths = []
        for x in X:
            output_file = self.out_dir / Path(x).name
            urlretrieve(x, output_file)
            output_paths.append(str(output_file))
        return output_paths
```

Now we’re ready to define the pipelines to process our images and texts using the *vit_base_patch16_224* and *all-MiniLM-L6-v2* models respectively. First of all, let’s start with the Qdrant configuration.

## Creating Qdrant collection

We’re going to put two vectors per object (one for the image and another one for the text), so we need to create a collection with a configuration allowing us to do so.

```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import VectorParams, Distance

client = QdrantClient(timeout=None)
client.recreate_collection(
    collection_name="ms-coco-2017",
    vectors_config={
        "text": VectorParams(
            size=384,
            distance=Distance.EUCLID,
        ),
        "image": VectorParams(
            size=1000,
            distance=Distance.COSINE,
        ),
    },
)
```

## Defining the pipelines

And since we have all the puzzle pieces in place, we can start processing to convert the raw data into the embeddings we need. The pretrained models come in handy.
```python
from sklearn.pipeline import make_pipeline

from embetter.grab import ColumnGrabber
from embetter.vision import ImageLoader, TimmEncoder
from embetter.text import SentenceEncoder

output_directory = Path("./images")

image_pipeline = make_pipeline(
    ColumnGrabber("URL"),
    DownloadFile(output_directory),
    ImageLoader(),
    TimmEncoder("vit_base_patch16_224"),
)

text_pipeline = make_pipeline(
    ColumnGrabber("TEXT"),
    SentenceEncoder("all-MiniLM-L6-v2"),
)
```

Thanks to the scikit-learn API, we can simply call each pipeline on the created DataFrame and put the created vectors into Qdrant to enable fast vector search. For convenience, we’re going to put the vectors as additional columns in our DataFrame.

```python
sample_df = dataset_df.sample(n=2000, random_state=643)
image_vectors = image_pipeline.transform(sample_df)
text_vectors = text_pipeline.transform(sample_df)
sample_df["image_vector"] = image_vectors.tolist()
sample_df["text_vector"] = text_vectors.tolist()
```

The created vectors might be easily put into Qdrant. For the sake of simplicity, we’re going to skip it here, but if you are interested in the details, please check out the [Jupyter notebook](https://gist.github.com/kacperlukawski/961aaa7946f55110abfcd37fbe869b8f) going step by step. A minimal sketch of this upload step is also included at the end of this post.

## Searching with multiple vectors

If you decide to describe each object with several neural embeddings, then for each search operation you need to provide the vector name along with the embedding, so the engine knows which one to use. The interface of the search operation is pretty straightforward and requires an instance of NamedVector.

```python
from qdrant_client.http.models import NamedVector

text_results = client.search(
    collection_name="ms-coco-2017",
    query_vector=NamedVector(
        name="text",
        vector=row["text_vector"],  # "row" is a single row taken from sample_df
    ),
    limit=5,
    with_vectors=False,
    with_payload=True,
)
```

If, on the other hand, we decided to search using the image embedding, then we just provide the vector name we have chosen while creating the collection, so instead of “text”, we would provide “image”, as this is how we configured it at the very beginning.

## The results: image vs text search

Since we have two different vectors describing each object, we can perform the search query using either of them. It shouldn't be surprising, then, that the results differ depending on the chosen embedding method. The images below present the results returned by Qdrant for the image/text on the left-hand side.

### Image search

If we query the system using the image embedding, then it returns the following results:

![](/blog/from_cms/0_5nqlmjznjkvdrjhj.webp "Image search results")

### Text search

However, if we use the textual description embedding, then the results are slightly different:

![](/blog/from_cms/0_3sdgctswb99xtexl.webp "Text search results")

It is not surprising that the method used for creating the neural encoding plays an important role in the search process and its quality. If your data points might be described using several vectors, then the latest release of Qdrant gives you the opportunity to store them together and reuse the payloads, instead of creating several collections and querying them separately. If you’d like to check out some other examples, please check out our [full notebook](https://gist.github.com/kacperlukawski/961aaa7946f55110abfcd37fbe869b8f) presenting the search results and the whole pipeline implementation.
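As promised above, here is a minimal sketch of the skipped upload step. It assumes the `client`, the `ms-coco-2017` collection, and the `sample_df` DataFrame created earlier; the payload keys are arbitrary choices, and the exact call may differ slightly between qdrant-client versions.

```python
from qdrant_client.http.models import PointStruct

# Build one point per row, with both named vectors and a small payload
points = [
    PointStruct(
        id=idx,
        vector={
            "text": row["text_vector"],    # 384-dimensional
            "image": row["image_vector"],  # 1000-dimensional
        },
        payload={"url": row["URL"], "caption": row["TEXT"]},
    )
    for idx, (_, row) in enumerate(sample_df.iterrows())
]

# Upload in small batches to keep each request lightweight
batch_size = 64
for start in range(0, len(points), batch_size):
    client.upsert(
        collection_name="ms-coco-2017",
        points=points[start:start + batch_size],
    )
```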
qdrant-landing/content/blog/superpower-your-semantic-search-using-vector-database-nicolas-mauti-vector-space-talk-007.md
--- draft: false title: Superpower your Semantic Search using Vector Database - Nicolas Mauti | Vector Space Talks slug: semantic-search-vector-database short_description: Nicolas Mauti and his team at Malt discusses how they revolutionize the way freelancers connect with projects. description: Nicolas Mauti discusses the improvements to Malt's semantic search capabilities to enhance freelancer and project matching, highlighting the transition to retriever-ranker architecture, implementation of a multilingual encoder model, and the deployment of Qdrant to significantly reduce latency. preview_image: /blog/from_cms/nicolas-mauti-cropped.png date: 2024-01-09T12:27:18.659Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Retriever-Ranker Architecture - Semantic Search --- > *"We found a trade off between performance and precision in Qdrant’s that were better for us than what we can found on Elasticsearch.”*\ > -- Nicolas Mauti > Want precision & performance in freelancer search? Malt's move to the Qdrant database is a masterstroke, offering geospatial filtering & seamless scaling. How did Nicolas Mauti and the team at Malt identify the need to transition to a retriever-ranker architecture for their freelancer matching app? Nicolas Mauti, a computer science graduate from INSA Lyon Engineering School, transitioned from software development to the data domain. Joining Malt in 2021 as a data scientist, he specialized in recommender systems and NLP models within a freelancers-and-companies marketplace. Evolving into an MLOps Engineer, Nicolas adeptly combines data science, development, and ops knowledge to enhance model development tools and processes at Malt. Additionally, he has served as a part-time teacher in a French engineering school since 2020. Notably, in 2023, Nicolas successfully deployed Qdrant at scale within Malt, contributing to the implementation of a new matching system. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/5aTPXqa7GMjekUfD8aAXWG?si=otJ_CpQNScqTK5cYq2zBow), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/OSZSingUYBM).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/OSZSingUYBM?si=1PHIRm5K5Q-HKIiS" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Superpower-Your-Semantic-Search-Using-Vector-Database---Nicolas-Mauti--Vector-Space-Talk-007-e2d9lrs/a-aaoae5a" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Dive into the intricacies of semantic search enhancement with Nicolas Mauti, MLOps Engineer at Malt. Discover how Nicolas and his team at Malt revolutionize the way freelancers connect with projects. In this episode, Nicolas delves into enhancing semantics search at Malt by implementing a retriever-ranker architecture with multilingual transformer-based models, improving freelancer-project matching through a transition to Qdrant that reduced latency from 10 seconds to 1 second and bolstering the platform's overall performance and scaling capabilities. 5 Keys to Learning from the Episode: 1. 
**Performance Enhancement Tactics**: Understand the technical challenges Malt faced due to increased latency brought about by their expansion to over half a million freelancers and the solutions they enacted. 2. **Advanced Matchmaking Architecture**: Learn about the retriever-ranker model adopted by Malt, which incorporates semantic searching alongside a KNN search for better efficacy in pairing projects with freelancers. 3. **Cutting-Edge Model Training**: Uncover the deployment of a multilingual transformer-based encoder that effectively creates high-fidelity embeddings to streamline the matchmaking process. 4. **Database Selection Process**: Mauti discusses the factors that shaped Malt's choice of database systems, facilitating a balance between high performance and accurate filtering capabilities. 5. **Operational Improvements**: Gain knowledge of the significant strides Malt made post-deployment, including a remarkable reduction in application latency and its positive effects on scalability and matching quality. > Fun Fact: Malt employs a multilingual transformer-based encoder model to generate 384-dimensional embeddings, which improved their semantic search capability. > ## Show Notes: 00:00 Matching app experiencing major performance issues.\ 04:56 Filtering freelancers and adopting retriever-ranker architecture.\ 09:20 Multilingual encoder model for adapting semantic space.\ 10:52 Review, retrain, categorize, and organize freelancers' responses.\ 16:30 Trouble with geospatial filtering databases\ 17:37 Benchmarking performance and precision of search algorithms.\ 21:11 Deployed in Kubernetes. Stored in Git repository, synchronized with Argo CD.\ 27:08 Improved latency quickly, validated architecture, aligned steps.\ 28:46 Invitation to discuss work using specific methods. ## More Quotes from Nicolas: *"And so GitHub's approach is basic idea that your git repository is your source of truth regarding what you must have in your Kubernetes clusters.”*\ -- Nicolas Mauti *"And so we can see that our space seems to be well organized, where the tech freelancer are close to each other and the graphic designer for example, are far from the tech family.”*\ -- Nicolas Mauti *"And also one thing that interested us is that it's multilingual. And as Malt is a European company, we have to have to model a multilingual model.”*\ -- Nicolas Mauti ## Transcript: Demetrios: We're live. We are live in the flesh. Nicholas, it's great to have you here, dude. And welcome to all those vector space explorers out there. We are back with another vector space talks. Today we're going to be talking all about how to superpower your semantics search with my man Nicholas, an ML ops engineer at Malt, in case you do not know what Malt is doing. They are pairing up, they're making a marketplace. They are connecting freelancers and companies. Demetrios: And Nicholas, you're doing a lot of stuff with recommender systems, right? Nicolas Mauti: Yeah, exactly. Demetrios: I love that. Well, as I mentioned, I am in an interesting spot because I'm trying to take in all the vitamin D I can while I'm listening to your talk. Everybody that is out there listening with us, get involved. Let us know where you're calling in from or watching from. And also feel free to drop questions in the chat as we go along. And if need be, I will jump in and stop Nicholas. But I know you got a little presentation for us, man you want to get into. Nicolas Mauti: Thanks for the, thanks for the introduction and hello, everyone. 
And thanks for the invitation to this talk, of course. So let's start. Let's do it. Demetrios: I love it. Superpowers. Nicolas Mauti: Yeah, we will have superpowers at the end of this presentation. So, yeah, hello, everyone. So I think the introduction was already done and perfectly done by Dimitrios. So I'm Nicola and yeah, I'm working as an Mlaps engineer at Malt. And also I'm a part time teacher in a french engineering school where I teach some mlaps course. So let's dig in today's subjects. So in fact, as Dimitrio said, malt is a marketplace and so our goal is to match on one side freelancers. And those freelancers have a lot of attributes, for example, a description, some skills and some awesome skills. Nicolas Mauti: And they also have some preferences and also some attributes that are not specifically semantics. And so it will be a key point of our topics today. And on other sides we have what we call projects that are submitted by companies. And this project also have a lot of attributes, for example, description, also some skills and need to find and also some preferences. And so our goal at the end is to perform a match between these two entities. And so for that we add a matching app in production already. And so in fact, we had a major issue with this application is performance of this application because the application becomes very slow. The p 50 latency was around 10 seconds. Nicolas Mauti: And what you have to keep from this is that if your latency, because became too high, you won't be able to perform certain scenarios. Sometimes you want some synchronous scenario where you fill your project and then you want to have directly your freelancers that match this project. And so if it takes too much time, you won't be able to have that. And so you will have to have some asynchronous scenario with email or stuff like that. And it's not very a good user experience. And also this problem were amplified by the exponential growth of the platform. Absolutely, we are growing. And so to give you some numbers, when I arrived two years ago, we had two time less freelancers. Nicolas Mauti: And today, and today we have around 600,000 freelancers in your base. So it's growing. And so with this grow, we had some, several issue. And something we have to keep in mind about this matching app. And so it's not only semantic app, is that we have two things in these apps that are not semantic. We have what we call art filters. And so art filters are art rules defined by the project team at Malt. And so these rules are hard and we have to respect them. Nicolas Mauti: For example, the question is hard rule at malt we have a local approach, and so we want to provide freelancers that are next to the project. And so for that we have to filter the freelancers and to have art filters for that and to be sure that we respect these rules. And on the other side, as you said, demetrius, we are talking about Rexis system here. And so in a rexy system, you also have to take into account some other parameters, for example, the preferences of the freelancers and also the activity on the platform of the freelancer, for example. And so in our system, we have to keep this in mind and to have this working. And so if we do a big picture of how our system worked, we had an API with some alphilter at the beginning, then ML model that was mainly semantic and then some rescoring function with other parameters. And so we decided to rework this architecture and to adopt a retriever ranker architecture. 
And so in this architecture, you will have your pool of freelancers. Nicolas Mauti: So here is your wall databases, so your 600,000 freelancers. And then you will have a first step that is called the retrieval, where we will constrict a subsets of your freelancers. And then you can apply your wrong kill algorithm. That is basically our current application. And so the first step will be, semantically, it will be fast, and it must be fast because you have to perform a quick selection of your more interesting freelancers and it's built for recall, because at this step you want to be sure that you have all your relevant freelancers selected and you don't want to exclude at this step some relevant freelancer because the ranking won't be able to take back these freelancers. And on the other side, the ranking can contain more features, not only semantics, it less conference in time. And if your retrieval part is always giving you a fixed size of freelancers, your ranking doesn't have to scale because you will always have the same number of freelancers in inputs. And this one is built for precision. Nicolas Mauti: At this point you don't want to keep non relevant freelancers and you have to be able to rank them and you have to be state of the art for this part. So let's focus on the first part. That's what will interesting us today. So for the first part, in fact, we have to build this semantic space where freelancers that are close regarding their skills or their jobs are closed in this space too. And so for that we will build this semantic space. And so then when we receive a project, we will have just to project this project in our space. And after that you will have just to do a search and a KNN search for knee arrest neighbor search. And in practice we are not doing a KNN search because it's too expensive, but inn search for approximate nearest neighbors. Nicolas Mauti: Keep this in mind, it will be interesting in our next slides. And so, to get this semantic space and to get this search, we need two things. The first one is a model, because we need a model to compute some vectors and to project our opportunity and our project and our freelancers in this space. And on another side, you will have to have a tool to operate this semantic step page. So to store the vector and also to perform the search. So for the first part, for the model, I will give you some quick info about how we build it. So for this part, it was more on the data scientist part. So the data scientist started from an e five model. Nicolas Mauti: And so the e five model will give you a common knowledge about the language. And also one thing that interested us is that it's multilingual. And as Malt is an european company, we have to have to model a multilingual model. And on top of that we built our own encoder model based on a transformer architecture. And so this model will be in charge to be adapted to Malchus case and to transform this very generic semantic space into a semantic space that is used for skills and jobs. And this model is also able to take into account the structure of a profile of a freelancer profile because you have a description and job, some skills, some experiences. And so this model is capable to take this into account. And regarding the training, we use some past interaction on the platform to train it. Nicolas Mauti: So when a freelancer receives a project, he can accept it or not. And so we use that to train this model. And so at the end we get some embeddings with 384 dimensions. 
Demetrios: One question from my side, sorry to stop you right now. Do you do any type of reviews or feedback and add that into the model? Nicolas Mauti: Yeah. In fact we continue to have some response about our freelancers. And so we also review them, sometimes manually because sometimes the response are not so good or we don't have exactly what we want or stuff like that, so we can review them. And also we are retraining the model regularly, so this way we can include new feedback from our freelancers. So now we have our model and if we want to see how it looks. So here I draw some ponds and color them by the category of our freelancer. So on the platform the freelancer can have category, for example tech or graphic or soon designer or this kind of category. And so we can see that our space seems to be well organized, where the tech freelancer are close to each other and the graphic designer for example, are far from the tech family. Nicolas Mauti: So it seems to be well organized. And so now we have a good model. So okay, now we have our model, we have to find a way to operate it, so to store this vector and to perform our search. And so for that, Vectordb seems to be the good candidate. But if you follow the news, you can see that vectordb is very trendy and there is plenty of actor on the market. And so it could be hard to find your loved one. And so I will try to give you the criteria we had and why we choose Qdrant at the end. So our first criteria were performances. Nicolas Mauti: So I think I already talked about this ponds, but yeah, we needed performances. The second ones was about inn quality. As I said before, we cannot do a KnN search, brute force search each time. And so we have to find a way to approximate but to be close enough and to be good enough on these points. And so otherwise we won't be leveraged the performance of our model. And the last one, and I didn't talk a lot about this before, is filtering. Filtering is a big problem for us because we have a lot of filters, of art filters, as I said before. And so if we think about my architecture, we can say, okay, so filtering is not a problem. Nicolas Mauti: You can just have a three step process and do filtering, semantic search and then ranking, or do semantic search, filtering and then ranking. But in both cases, you will have some troubles if you do that. The first one is if you want to apply prefiltering. So filtering, semantic search, ranking. If you do that, in fact, you will have, so we'll have this kind of architecture. And if you do that, you will have, in fact, to flag each freelancers before asking the vector database and performing a search, you will have to flag each freelancer whether there could be selected or not. And so with that, you will basically create a binary mask on your freelancers pool. And as the number of freelancers you have will grow, your binary namask will also grow. Nicolas Mauti: And so it's not very scalable. And regarding the performance, it will be degraded as your freelancer base grow. And also you will have another problem. A lot of vector database and Qdrants is one of them using hash NSW algorithm to do your inn search. And this kind of algorithm is based on graph. And so if you do that, you will deactivate some nodes in your graph, and so your graph will become disconnected and you won't be able to navigate in your graph. And so your quality of your matching will degrade. So it's definitely not a good idea to apply prefiltering. 
Nicolas Mauti: So, no, if we go to post filtering here, I think the issue is more clear. You will have this kind of architecture. And so, in fact, if you do that, you will have to retrieve a lot of freelancer for your vector database. If you apply some very aggressive filtering and you exclude a lot of freelancer with your filtering, you will have to ask for a lot of freelancer in your vector database and so your performances will be impacted. So filtering is a problem. So we cannot do pre filtering or post filtering. So we had to find a database that do filtering and matching and semantic matching and search at the same time. And so Qdrant is one of them, you have other one in the market. Nicolas Mauti: But in our case, we had one filter that caused us a lot of troubles. And this filter is the geospatial filtering and a few of databases under this filtering, and I think Qdrant is one of them that supports it. But there is not a lot of databases that support them. And we absolutely needed that because we have a local approach and we want to be sure that we recommend freelancer next to the project. And so now that I said all of that, we had three candidates that we tested and we benchmarked them. We had elasticsearch PG vector, that is an extension of PostgreSQL and Qdrants. And on this slide you can see Pycon for example, and Pycon was excluded because of the lack of geospatial filtering. And so we benchmark them regarding the qps. Nicolas Mauti: So query per second. So this one is for performance, and you can see that quadron was far from the others, and we also benchmark it regarding the precision, how we computed the precision, for the precision we used a corpus that it's called textmax, and Textmax corpus provide 1 million vectors and 1000 queries. And for each queries you have your grown truth of the closest vectors. They used brute force knn for that. And so we stored this vector in our databases, we run the query and we check how many vectors we found that were in the ground truth. And so they give you a measure of your precision of your inn algorithm. For this metric, you could see that elasticsearch was a little bit better than Qdrants, but in fact we were able to tune a little bit the parameter of the AsHNSW algorithm and indexes. And at the end we found a better trade off, and we found a trade off between performance and precision in Qdrants that were better for us than what we can found on elasticsearch. Nicolas Mauti: So at the end we decided to go with Qdrant. So we have, I think all know we have our model and we have our tool to operate them, to operate our model. So a final part of this presentation will be about the deployment. I will talk about it a little bit because I think it's interesting and it's also part of my job as a development engineer. So regarding the deployment, first we decided to deploy a Qdrant in a cluster configuration. We decided to start with three nodes and so we decided to get our collection. So collection are where all your vector are stored in Qdrant, it's like a table in SQL or an index in elasticsearch. And so we decided to split our collection between three nodes. Nicolas Mauti: So it's what we call shards. So you have a shard of a collection on each node, and then for each shard you have one replica. So the replica is basically a copy of a shard that is living on another node than the primary shard. So this way you have a copy on another node. 
And so this way if we operate normal conditions, your query will be split across your three nodes, and so you will have your response accordingly. But what is interesting is that if we lose one node, for example, this one, for example, because we are performing a rolling upgrade or because kubernetes always kill pods, we will be still able to operate because we have the replica to get our data. And so this configuration is very robust and so we are very happy with it. And regarding the deployment. Nicolas Mauti: So as I said, we deployed it in kubernetes. So we use the Qdrant M chart, the official M chart provided by Qdrants. In fact we subcharted it because we needed some additional components in your clusters and some custom configuration. So I didn't talk about this, but M chart are just a bunch of file of Yaml files that will describe the Kubernetes object you will need in your cluster to operate your databases in your case, and it's collection of file and templates to do that. And when you have that at malt we are using what we called a GitHub's approach. And so GitHub's approach is basic idea that your git repository is your groom truth regarding what you must have in your Kubernetes clusters. And so we store these files and these M charts in git, and then we have a tool that is called Argo CD that will pull our git repository at some time and it will check the differences between what we have in git and what we have in our cluster and what is living in our cluster. And it will then synchronize what we have in git directly in our cluster, either automatically or manually. Nicolas Mauti: So this is a very good approach to collaborate and to be sure that what we have in git is what you have in your cluster. And to be sure about what you have in your cluster by just looking at your git repository. And I think that's pretty all I have one last slide, I think that will interest you. It's about the outcome of the project, because we did that at malt. We built this architecture with our first phase with Qdrants that do the semantic matching and that apply all the filtering we have. And in the second part we keep our all drunking system. And so if we look at the latency of our apps, at the P 50 latency of our apps, so it's a wall app with the two steps and with the filters, the semantic matching and the ranking. As you can see, we started in a debate test in mid October. Nicolas Mauti: Before that it was around 10 seconds latency, as I said at the beginning of the talk. And so we already saw a huge drop in the application and we decided to go full in December and we can see another big drop. And so we were around 10 seconds and now we are around 1 second and alpha. So we divided the latency of more than five times. And so it's a very good news for us because first it's more scalable because the retriever is very scalable and with the cluster deployment of Qdrants, if we need, we can add more nodes and we will be able to scale this phase. And after that we have a fixed number of freelancers that go into the matching part. And so the matching part doesn't have to scale. No. Nicolas Mauti: And the other good news is that now that we are able to scale and we have a fixed size, after our first parts, we are able to build more complex and better matching model and we will be able to improve the quality of our matching because now we are able to scale and to be able to handle more freelancers. Demetrios: That's incredible. Nicolas Mauti: Yeah, sure. It was a very good news for us. 
And so that's all. And so maybe you have plenty of question and maybe we can go with that. Demetrios: All right, first off, I want to give a shout out in case there are freelancers that are watching this or looking at this, now is a great time to just join Malt, I think. It seems like it's getting better every day. So I know there's questions that will come through and trickle in, but we've already got one from Luis. What's happening, Luis? He's asking what library or service were you using for Ann before considering Qdrant, in fact. Nicolas Mauti: So before that we didn't add any library or service or we were not doing any inn search or semantic search in the way we are doing it right now. We just had one model when we passed the freelancers and the project at the same time in the model, and we got relevancy scoring at the end. And so that's why it was also so slow because you had to constrict each pair and send each pair to your model. And so right now we don't have to do that and so it's much better. Demetrios: Yeah, that makes sense. One question from my side is it took you, I think you said in October you started with the A B test and then in December you rolled it out. What was that last slide that you had? Nicolas Mauti: Yeah, that's exactly that. Demetrios: Why the hesitation? Why did it take you from October to December to go down? What was the part that you weren't sure about? Because it feels like you saw a huge drop right there and then why did you wait until December? Nicolas Mauti: Yeah, regarding the latency and regarding the drop of the latency, the result was very clear very quickly. I think maybe one week after that, we were convinced that the latency was better. First, our idea was to validate the architecture, but the second reason was to be sure that we didn't degrade the quality of the matching because we have a two step process. And the risk is that the two model doesn't agree with each other. And so if the intersection of your first step and the second step is not good enough, you will just have some empty result at the end because your first part will select a part of freelancer and the second step, you select another part and so your intersection is empty. And so our goal was to assess that the two steps were aligned and so that we didn't degrade the quality of the matching. And regarding the volume of projects we have, we had to wait for approximately two months. Demetrios: It makes complete sense. Well, man, I really appreciate this. And can you go back to the slide where you show how people can get in touch with you if they want to reach out and talk more? I encourage everyone to do that. And thanks so much, Nicholas. This is great, man. Nicolas Mauti: Thanks. Demetrios: All right, everyone. By the way, in case you want to join us and talk about what you're working on and how you're using Qdrant or what you're doing in the semantic space or semantic search or vector space, all that fun stuff, hit us up. We would love to have you on here. One last question for you, Nicola. Something came through. What indexing method do you use? Is it good for using OpenAI embeddings? Nicolas Mauti: So in our case, we have our own model to build the embeddings. Demetrios: Yeah, I remember you saying that at the beginning, actually. All right, cool. Well, man, thanks a lot and we will see everyone next week for another one of these vector space talks. Thank you all for joining and take care. Care. Thanks.
qdrant-landing/content/blog/talk-with-youtube-without-paying-a-cent-francesco-saverio-zuppichini-vector-space-talks.md
--- draft: false title: Talk with YouTube without paying a cent - Francesco Saverio Zuppichini | Vector Space Talks slug: youtube-without-paying-cent short_description: A sneak peek into the tech world as Francesco shares his ideas and processes on coding innovative solutions. description: Francesco Zuppichini outlines the process of converting YouTube video subtitles into searchable vector databases, leveraging tools like YouTube DL and Hugging Face, and addressing the challenges of coding without conventional frameworks in machine learning engineering. preview_image: /blog/from_cms/francesco-saverio-zuppichini-bp-cropped.png date: 2024-03-27T12:37:55.643Z author: Demetrios Brinkmann featured: false tags: - embeddings - LLMs - Retrieval Augmented Generation - Ollama --- > *"Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data.”*\ -- Francesco Saverio Zuppichini > Francesco Saverio Zuppichini is a Senior Full Stack Machine Learning Engineer at Zurich Insurance with experience in both large corporations and startups of various sizes. He is passionate about sharing knowledge, and building communities, and is known as a skilled practitioner in computer vision. He is proud of the community he built because of all the amazing people he got to know. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7kVd5a64sz2ib26IxyUikO?si=mrOoVP3ISQ22kXrSUdOmQA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/56mFleo06LI).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/56mFleo06LI?si=P4vF9jeQZEZzjb32" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Talk-with-YouTube-without-paying-a-cent---Francesco-Saverio-Zuppichini--Vector-Space-Talks-016-e2ggt6d/a-ab17u5q" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Curious about transforming YouTube content into searchable elements? Francesco Zuppichini unpacks the journey of coding a RAG by using subtitles as input, harnessing technologies like YouTube DL, Hugging Face, and Qdrant, while debating framework reliance and the fine art of selecting the right software tools. Here are some insights from this episode: 1. **Behind the Code**: Francesco unravels how to create a RAG using YouTube videos. Get ready to geek out on the nuts and bolts that make this magic happen. 2. **Vector Voodoo**: Ever wonder how embedding vectors carry out their similarity searches? Francesco's got you covered with his brilliant explanation of vector databases and the mind-bending distance method that seeks out those matches. 3. **Function over Class**: The debate is as old as stardust. Francesco shares why he prefers using functions over classes for better code organization and demonstrates how this approach solidifies when running language models with Ollama. 4. **Metadata Magic**: Find out how metadata isn't just a sidekick but plays a pivotal role in the realm of Qdrant and RAGs. Learn why Francesco values metadata as payload and the challenges it presents in developing domain-specific applications. 5. 
**Tool Selection Tips**: Deciding on the right software tool can feel like navigating an asteroid belt. Francesco shares his criteria—ease of installation, robust documentation, and a little help from friends—to ensure a safe landing. > Fun Fact: Francesco confessed that his code for chunking subtitles was "a little bit crappy" because of laziness—proving that even pros take shortcuts to the stars now and then. > ## Show notes: 00:00 Intro to Francesco\ 05:36 Create YouTube rack for data retrieval.\ 09:10 Local web dev showcase without frameworks effectively.\ 11:12 Qdrant: converting video text to vectors.\ 13:43 Connect to vectordb, specify config, keep it simple.\ 17:59 Recreate, compare vectors, filter for right matches.\ 21:36 Use functions and share states for simpler coding.\ 29:32 Gemini Pro generates task-based outputs effectively.\ 32:36 Good documentation shows pride in the product.\ 35:38 Organizing different data types in separate collections.\ 38:36 Proactive approach to understanding code and scalability.\ 42:22 User feedback and statistics evaluation is crucial.\ 44:09 Consider user needs for chatbot accuracy and relevance. ## More Quotes from Francesco: *"So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface.*”\ -- Francesco Saverio Zuppichini *"It's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. You just need to get the Qdrant client.”*\ -- Francesco Saverio Zuppichini *"So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team to actually make all the types very nice.”*\ -- Francesco Saverio Zuppichini ## Transcript: Demetrios: Folks, welcome to another vector space talks. I'm excited to be here and it is a special day because I've got a co host with me today. Sabrina, what's going on? How you doing? Sabrina Aquino: Let's go. Thank you so much, Demetrios, for having me here. I've always wanted to participate in vector space talks. Now it's finally my chance. So thank you so much. Demetrios: Your dream has come true and what a day for it to come true because we've got a special guest today. While we've got you here, Sabrina, I know you've been doing some excellent stuff on the Internet when it comes to other ways to engage with the Qdrant community. Can you break that down real fast before we jump into this? Sabrina Aquino: Absolutely. I think an announcement here is we're hosting our first discord office hours. We're going to be answering all your questions about Qdrant with Qdrant team members, where you can interact with us, with our community as well. And we're also going to be dropping a few insights on the next Qdrant release 1.8. So that's super exciting and also, we are. Sorry, I just have another thing going on here on the live. Demetrios: Music got in your ear. Sabrina Aquino: We're also having the vector voices on Twitter, the X Spaces roundtable, where we bring experts to talk about a topic with our team. And you can also jump in and ask questions on the AMA. So that's super exciting as well. And, yeah, see you guys there. 
And I'll drop a link of the discord in the comments so you guys can join our community and be a part of it. Demetrios: Exactly what I was about to say. So without further ado, let's bring on our guest of honor, Mr. Where are you at, dude? Francesco Zuppichini: Hi. Hello. How are you? Demetrios: I'm great. How are you doing? Francesco Zuppichini: Great. Demetrios: I've been seeing you all around the Internet and I am very excited to be able to chat with you today. I know you've got a bit of stuff planned for us. You've got a whole presentation, right? Francesco Zuppichini: Correct. Demetrios: But for those that do not know you, you're a full stack machine learning engineer at Zurich Insurance. I think you also are very vocal and you are fun to follow on LinkedIn is what I would say. And we're going to get to that at the end after you give your presentation. But once again, reminder for everybody, if you want to ask questions, hit us up with questions in the chat. As far as going through his presentation today, you're going to be talking to us all about some really cool stuff about rags. I'm going to let you get into it, man. And while you're sharing your screen, I'm going to tell people a little bit of a fun fact about you. That you put ketchup on your pizza, which I think is a little bit sacrilegious. Francesco Zuppichini: Yes. So that's 100% true. And I hope that the italian pizza police is not listening to this call or I can be in real trouble. Demetrios: I think we just lost a few viewers there, but it's all good. Sabrina Aquino: Italy viewers just dropped out. Demetrios: Yeah, the Italians just dropped, but it's all good. We will cut that part out in post production, my man. I'm going to share your screen and I'm going to let you get after it. I'll be hanging around in case any questions pop up with Sabrina in the background. And here you go, bro. Francesco Zuppichini: Wonderful. So you can see my screen, right? Demetrios: Yes, for sure. Francesco Zuppichini: That's perfect. Okay, so today we're going to talk about talk with YouTube without paying a cent, no framework bs. So the goal of today is to showcase how to code a RAG given as an input a YouTube video without using any framework like language, et cetera, et cetera. And I want to show you that it's straightforward, using a bunch of technologies and Qdrants as well. And you can do all of this without actually pay to any service. Right. So we are going to run our PEDro DB locally and also the language model. We are going to run our machines. Francesco Zuppichini: And yeah, it's going to be a technical talk, so I will kind of guide you through the code. Feel free to interrupt me at any time if you have questions, if you want to ask why I did that, et cetera, et cetera. So very quickly, before we get started, I just want you not to introduce myself. So yeah, senior full stack machine engineer. That's just a bunch of funny work to basically say that I do a little bit of everything. Start. So when I was working, I start as computer vision engineer, I work at PwC, then a bunch of startups, and now I sold my soul to insurance companies working at insurance. And before I was doing computer vision, now I'm doing due to Chat GPT, hyper language model, I'm doing more of that. Francesco Zuppichini: But I'm always involved in bringing the full product together. So from zero to something that is deployed and running. So I always be interested in web dev. I can also do website servers, a little bit of infrastructure as well. 
So now I'm just doing a little bit of everything. So this is why there is full stack there. Yeah. Okay, let's get started to something a little bit more interesting than myself. Francesco Zuppichini: So our goal is to create a full local YouTube rack. And if you don't want a rack, is, it's basically a system in which you take some data. In this case, we are going to take subtitles from YouTube videos and you're able to basically q a with your data. So you're able to use a language model, you ask questions, then we retrieve the relevant parts in the data that you provide, and hopefully you're going to get the right answer to your. So let's talk about the technologies that we're going to use. So to get the subtitles from a video, we're going to use YouTube DL and YouTube DL. It's a library that is available through Pip. So Python, I think at some point it was on GitHub and then I think it was removed because Google, they were a little bit beach about that. Francesco Zuppichini: So then they realized it on GitHub. And now I think it's on GitHub again, but you can just install it through Pip and it's very cool. Demetrios: One thing, man, are you sharing a slide? Because all I see is your. I think you shared a different screen. Francesco Zuppichini: Oh, boy. Demetrios: I just see the video of you. There we go. Francesco Zuppichini: Entire screen. Yeah. I'm sorry. Thank you so much. Demetrios: There we go. Francesco Zuppichini: Wonderful. Okay, so in order to get the embedding. So to translate from text to vectors, right, so we're going to use hugging face just an embedding model so we can actually get some vectors. Then as soon as we got our vectors, we need to store and search them. So we're going to use our beloved Qdrant to do so. We also need to keep a little bit of stage right because we need to know which video we have processed so we don't redo the old embeddings and the storing every time we see the same video. So for this part, I'm just going to use SQLite, which is just basically an SQL database in just a file. So very easy to use, very kind of lightweight, and it's only your computer, so it's safe to run the language model. Francesco Zuppichini: We're going to use Ollama. That is a very simple way and very well done way to just get a language model that is running on your computer. And you can also call it using the OpenAI Python library because they have implemented the same endpoint as. It's like, it's super convenient, super easy to use. If you already have some code that is calling OpenAI, you can just run a different language model using Ollama. And you just need to basically change two lines of code. So what we're going to do, basically, I'm going to take a video. So here it's a video from Fireship IO. Francesco Zuppichini: We're going to run our command line and we're going to ask some questions. Now, if you can still, in theory, you should be able to see my full screen. Yeah. So very quickly to showcase that to you, I already processed this video from the good sound YouTube channel and I have already here my command line. So I can already kind of see, you know, I can ask a question like what is the contact size of Germany? And we're going to get the reply. Yeah. And here we're going to get a reply. And now I want to walk you through how you can do something similar. Francesco Zuppichini: Now, the goal is not to create the best rack in the world. It's just to showcase like show zero to something that is actually working. 
How you can do that in a fully local way without using any framework so you can really understand what's going on under the hood. Because I think a lot of people, they try to copy, to just copy and paste stuff on Langchain and then they end up in a situation when they need to change something, but they don't really know where the stuff is. So this is why I just want to just show like Windfield zero to hero. So the first step will be I get a YouTube video and now I need to get the subtitle. So you could actually use a model to take the audio from the video and get the text. Like a whisper model from OpenAI, for example. Francesco Zuppichini: In this case, we are taking advantage that YouTube allow people to upload subtitles and YouTube will automatically generate the subtitles. So here using YouTube dial, I'm just going to get my video URL. I'm going to set up a bunch of options like the format they want, et cetera, et cetera. And then basically I'm going to download and get the subtitles. And they look something like this. Let me show you an example. Something similar to this one, right? We have the timestamps and we do have all text inside. Now the next step. Francesco Zuppichini: So we got our source of data, we have our text key. Next step is I need to translate my text to vectors. Now the easiest way to do so is just use sentence transformers for backing phase. So here I've installed it. I load in a model. In this case I'm using this model here. I have no idea what tat model is. I just default one tatted find and it seems to work fine. Francesco Zuppichini: And then in order to use it, I'm just providing a query and I'm getting back a list of vectors. So we have a way to take a video, take the text from the video, convert that to vectors with a semantic meaningful representation. And now we need to store them. Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data. So the way I'm running it is through Docker compose. So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface. Francesco Zuppichini: I'm going to show that to you because I think it's very cool. So here I've already some vectors inside here so I can just look in my collection, it's called embeddings, an original name. And we can see all the chunks that were embed with the metadata, in this case just the video id. A super cool thing, super useful to debug is go in the visualize part and see the embeddings, the projected embeddings. You can actually do a bounce of stuff. You can actually also go here and color them by some metadata. Like I can say I want to have a different color based on the video id. In this case I just have one video. Francesco Zuppichini: I will show that as soon as we add more videos. This is so cool, so useful. I will use this at work as well in which I have a lot of documents. And it's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. 
You just need to get the Qdrant client.

Francesco Zuppichini: So you have a connection with the vector DB, you create a collection, you specify a name, you specify some configuration stuff. In this case I just specify the vector size, because Qdrant needs to know how big the vectors are going to be, and the distance I want to use. So I'm going to use the cosine distance. In the Qdrant documentation there are a lot of parameters, you can do a lot of crazy stuff, but here I just keep it very simple. And yeah, another important thing is that since we are going to embed more videos, when I ask a question to a video, I need to know which embeddings are from that video. So we're going to create an index, so it's very efficient to filter my embeddings based on that index, an index on the video metadata, because when I store a chunk in Qdrant, I'm also going to include which video it's coming from. Very simple, very simple to set up.

Francesco Zuppichini: You just need to do this once. I was very lazy, so I just assumed that if this is going to fail, it means it's because I've already created the collection, so I'm just going to pass and call it a day. Okay, so this is basically all the preprocessing and setup you need to do to have your Qdrant ready to store and search vectors. Storing vectors is very straightforward as well. You just need, again, the client, so the connection to the database. Here I'm passing my embeddings, so the sentence transformer model, and I'm passing my chunks as a list of documents.

Francesco Zuppichini: So document in my code is just a type that will contain this metadata here. Very simple. It's similar to Langchain; here I just have a typed dict because it's lightweight. To store them we call the upload records function. We encode them here. There are a couple of bad variable names from my side, which I'm replacing here, so you shouldn't do that.

Francesco Zuppichini: Apologies about that. And you just send the records. Another very cool thing about Qdrant, so the second thing that I really like, is that they have types for what you send through the library. So this models.Record is a Qdrant type. So you use it and you know immediately what you need to put inside. So let me give you an example, right? So assuming that I'm programming, right, I'm going to say models.Record and, bang,

Francesco Zuppichini: I know immediately what I have to put inside, right? So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team for actually making all the types very nice. Another cool thing is that if you're using FastAPI to build a web server, if you are going to return a Qdrant models type, it's actually going to be serialized automatically through Pydantic. So you don't need to do weird stuff. It's all handled by the Qdrant APIs, by the Qdrant SDK. Super cool.

Francesco Zuppichini: Now we have a way to store our chunks and embed them. So this is how they look in the interface. I can see them, I can go to them, et cetera, et cetera. Very nice. Now the missing part, right? So: video, subtitles, I chunked the subtitles. I haven't shown you the chunking code.

Francesco Zuppichini: It's a little bit crappy because I was very lazy, so I'm just chunking by character count and a little bit of overlapping. We have a way to store and embed our chunks, and now we need a way to search. That's basically one of the missing steps. Now, search is straightforward as well.
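Before getting to search, here is a sketch of the collection setup and chunk upload described above, using the qdrant-client SDK and the assumed names from the earlier snippets (`chunks`, `vectors`, `video_id`). The collection name, payload key, and vector size mirror the talk but are assumptions; recent client versions also offer `upload_points` as the successor to `upload_records`.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)  # the instance started via Docker Compose

# One-time setup: a collection sized for the embedding model, plus a payload
# index on the video id so per-video filtering stays efficient.
client.create_collection(
    collection_name="embeddings",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)
client.create_payload_index(
    collection_name="embeddings",
    field_name="metadata.video_id",
    field_schema=models.PayloadSchemaType.KEYWORD,
)

# Storing chunks: one record per chunk, with the source video id in the payload.
records = [
    models.Record(
        id=idx,
        vector=vector.tolist(),
        payload={"text": chunk, "metadata": {"video_id": video_id}},
    )
    for idx, (chunk, vector) in enumerate(zip(chunks, vectors))
]
client.upload_records(collection_name="embeddings", records=records)
```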
This is also a good example, because I can show you how effective it is to create filters using Qdrant. So what do we need to search? Again, the vector client and the embeddings, because we have a query, right? We need to run the query through the same embedding model.

Francesco Zuppichini: We need to embed it into a vector, and then we need to compare it with the vectors in the vector DB using a distance metric, in this case cosine similarity, in order to get the right matches, right, the closest ones in our vector DB, in our vector search space. So I'm passing a query string, I'm passing a video id, and I'm passing a limit, so how many hits I want to get back from the vector DB. Now, to create a filter, again you're going to use the models package from the Qdrant client. So here I'm just creating a Filter class from the models, and I'm saying, okay, this filter must match this key, right, so metadata video id, with this video id. So when we search, before we do the similarity search, we are going to filter away all the vectors that are not from that video. Wonderful. Now, super easy as well.

Francesco Zuppichini: We just call the DB search, right? We pass our collection name, here it's hard-coded, apologies about that, I think I forgot to put in the right global variable, so it's hard-coded. We create a query, we set the limit, we pass the query filter, we get the hits back as a dictionary in the payload field of each hit, and we recreate our document, a dictionary. I have types, right? So I know what this function is going to return. Now, if you were to use a framework, right, this part would be basically the same thing. If I were to use Langchain and I want to specify a filter, I would have to write the same amount of code. So most of the time you don't really need to use a framework. One thing that is nice about not using a framework here is that I have control over the indexes.

Francesco Zuppichini: Langchain, for instance, will create the indexes only when you call a class method like from_documents. And that is kind of cumbersome, because sometimes I was chasing bugs in which I was not understanding why one index was created before or after, et cetera, et cetera. So yes, just try to keep things simple and not always rely on frameworks. Wonderful. Now I have a way to ask a query and get back the relevant parts from that video. Now we need to translate this list of chunks into something that we can read as humans. Before we do that, I was almost going to forget, we need to keep state. Now, one of the last missing parts is something in which I can store data.

Francesco Zuppichini: Here I just have a setup function in which I'm going to create a SQLite database, create a table called videos in which I have an id and a title. So later I can check, hey, is this video already in my database? Yes? Then I don't need to process it, I can just start immediately to Q&A on that video. If not, I'm going to do the chunking and embeddings. I've got a couple of functions here, to get a video from the DB and to save a video to the DB. So notice that I only use functions. I'm not using classes here.

Francesco Zuppichini: I'm not a fan of object-oriented programming, because it's very easy to kind of reach inheritance hell, in which we have like ten levels of inheritance. And here, if a function needs to have state, and here we do need to have state because we need a connection, I will just have a function that initializes that state and returns it to me, and me as the caller, I'm just going to call it and pass my state around.
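Going back a step, here is a sketch of the filtered search just described, reusing the assumed names from the snippets above: embed the query, restrict the search to one video with a payload filter, and rebuild lightweight documents from the returned payloads.

```python
def search(
    client: QdrantClient,
    embeddings: SentenceTransformer,
    query: str,
    video_id: str,
    limit: int = 5,
) -> list[dict]:
    query_vector = embeddings.encode(query).tolist()
    # Only consider vectors whose payload says they come from this video.
    query_filter = models.Filter(
        must=[
            models.FieldCondition(
                key="metadata.video_id",
                match=models.MatchValue(value=video_id),
            )
        ]
    )
    hits = client.search(
        collection_name="embeddings",
        query_vector=query_vector,
        query_filter=query_filter,
        limit=limit,
    )
    # Each hit carries its stored payload, so the documents can be rebuilt from it.
    return [hit.payload for hit in hits]
```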
These very simple tips really allow you to divide your code properly. You don't need to think about, is my class too coupled with another class, et cetera, et cetera. Very simple, very effective. So what I suggest when you're coding: just start with functions and share state across them, just pass down state.

Francesco Zuppichini: And when you realize that you can cluster a lot of functions together with a common behavior, you can go ahead and put the state in a class and have those functions as methods. So try not to start by trying to understand which classes I need and how I connect them, because in my opinion it's just a waste of time. So just start with functions and then try to cluster them together if you need to. Okay, last part, the juicy part as well: language models. So we need the language model. Why do we need the language model? Because I'm going to ask a question, right, I'm going to get a bunch of relevant chunks from a video, and the language model...

Francesco Zuppichini: It needs to answer that for me. So it needs to get information from the chunks and reply to me using that information as context. To run the language model, the easiest way in my opinion is using Ollama. There are a lot of models that are available, I put a link here, and you can also bring your own model. There are a lot of videos and tutorials on how to do that. You run this command as soon as you install it. On Linux, it's a one-liner to install Ollama.

Francesco Zuppichini: You run this command here, it's going to download Mistral 7B, a very good model, and run it on your GPU if you have one, or your CPU if you don't have a GPU. I run it on GPU. Here you can see it, it's around 6GB. So even with a low-tier GPU, you should be able to run a seven-billion-parameter model on your GPU. Okay, so this is the prompt, just to also show you how easy this is. This prompt was, me being very lazy, copied and pasted from the Langchain source code: use the following pieces of context to answer the question at the end, blah blah blah, a variable to inject the context inside, a question variable to get the question, and then we're going to get an answer. How do we call it? Easy. I have a function here called get_answer, passing a bunch of stuff: passing the model client from the OpenAI Python package, passing a question, passing the vector DB, my DB client, my embeddings, reading my prompt, getting my matching documents, calling the search function we have just seen before, creating my context.

Francesco Zuppichini: So just joining the text in the chunks on new lines, and calling the format function in Python. As simple as that. Just calling the format function in Python, because the format function will look at a string and it will inject the variables that match inside these braces. Passing context, passing question, using the OpenAI model client APIs and getting a reply back. Super easy. And here I'm returning the reply from the language model and also the list of documents. So this should be documents. I think I made a mistake

Francesco Zuppichini: when I copied and pasted this to get this image. And we are done, right? We have a way to get some answers from a video by putting everything together. This can seem scary because there are no comments here, but I can show you the actual code. I think it's easier so I can highlight stuff. I'm creating my embeddings, I'm getting my database, I'm getting my vector DB, logging some stuff, I'm getting my model client, I'm getting my video. So here I'm defining the state that I need.
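Here is a sketch of that answer step. Ollama exposes an OpenAI-compatible endpoint on localhost, so the official openai package can call the local Mistral model directly; the prompt wording, model tag, and helper names are assumptions for illustration.

```python
from openai import OpenAI  # pip install openai

PROMPT = (
    "Use the following pieces of context to answer the question at the end. "
    "If you don't know the answer, just say that you don't know.\n\n"
    "{context}\n\nQuestion: {question}\nHelpful Answer:"
)

# Ollama ignores the API key, but the client requires one to be set.
llm = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")


def get_answer(question: str, documents: list[dict]) -> tuple[str, list[dict]]:
    context = "\n".join(doc["text"] for doc in documents)       # join the retrieved chunks
    prompt = PROMPT.format(context=context, question=question)  # plain str.format, as in the talk
    reply = llm.chat.completions.create(
        model="mistral",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content, documents
```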
You don't need comments because it's straightforward. Like here, I'm getting the vector DB, good function name.

Francesco Zuppichini: Then if I don't have the vector DB, sorry, if I don't have the video id in the database, I'm going to get some information about the video, I'm going to download the subtitles, split the subtitles, I'm going to do the embeddings, and in the end I'm going to save it to the vector DB. Finally, I'm going to get my video back, print something, and start a while loop in which you can get answers. So this is the full pipeline. Very simple, all functions.

Francesco Zuppichini: Also, with functions it's very simple to divide things around. Here I have a file called rag, and here I just do all the RAG stuff, right? It's all here. Similarly, I have my file called crud. Here I'm doing everything I need to do with my database, et cetera, et cetera. Also a file called youtube. So just try to split things based on what they do instead of what they are.

Francesco Zuppichini: I think it's then easier to code. Yeah. So I can actually show you a demo in which we kind of embed a video from scratch. So let me kill this bad boy here. Let's get a juicy YouTube video from Sam. We can go with Gemma. We can go with Gemma. I think I haven't embedded that yet.

Francesco Zuppichini: I'm sorry. My ad blocker is doing weird stuff over here. Okay, let me put this here.

Demetrios: This is the moment that we need to all pray to the demo gods that this will work.

Francesco Zuppichini: Oh yeah. I'm so sorry. I'm so sorry. I think it was already processed. So let me... I don't know, this one. Also, I noticed I'm seeing this very weird thing which I just did not see yesterday. So that's going to be interesting.

Francesco Zuppichini: I think my poor Linux computer is giving up on running language models. Okay. Downloading, you can see the logs, embeddings, and we have it. Now, before I forget, because I think that you guys spent some time doing this, let's go on the visualize page and let's actually do the color by, and let's do metadata video id. Video id. Let's run it. Metadata, metadata, video... meta... oh my God.

Francesco Zuppichini: Metadata video id. Why don't I see the other one? I don't know. This is the beauty of live sessions.

Demetrios: This is how we know it's real.

Francesco Zuppichini: Yeah, I mean, this is working, right? This is the Gemini Pro one, that video. Yeah, I don't know about that. I don't know about that. It was working before, I can vouch for that. So probably I'm doing something wrong, probably. Later, let's try that.

Francesco Zuppichini: Let's see. I must be doing something wrong, so don't worry about that. But we are ready to ask questions, so maybe I can just say, I don't know, what is Gemini Pro? So let's see. Mistral running on GPU is kind of fast, it doesn't take too much time. And here we can see we are at 6GB, 1GB is for the embedding model, so 4 or 5GB running the language model. Here it says Gemini Pro is a generalized tool that can generate output based on given tasks. Blah, blah, blah, blah, blah, blah. Yeah, it seems to work.

Francesco Zuppichini: Here you have it. Thanks. Of course. And I don't know if there are any questions about it.

Demetrios: So many questions. There's a question that came through the chat that is a simple one that we can answer right away, which is: can we access this code anywhere?

Francesco Zuppichini: Yeah, so it's on my GitHub. Can I share a link with you in the chat, maybe? So that should be the one called youtube. Can I put it here, maybe?

Demetrios: Yes, most definitely can.
And we'll drop that into all of the spots so that we have it. Now, next question from my side, while people are also asking, and you've got some fans in the chat right now, so...

Francesco Zuppichini: Nice to everyone, by the way.

Demetrios: So from my side, I'm wondering, do you have any specific design decision criteria that you use when you are building out your stack? Like, you chose Mistral, you chose Ollama, you chose Qdrant. It sounds like with Qdrant you did some testing and you appreciated the capabilities. With Qdrant, was it similar with Ollama and Mistral?

Francesco Zuppichini: So my test is how long it's going to take to install that tool. If it's taking too much time and it's hard to install because the documentation is bad, that's a red flag, right? Because if it's hard to install and the documentation is bad for the installation, that's the first thing people are going to read, so probably it's not going to be great for something down the road. To use Ollama, it took me two minutes, it took me two minutes, it was incredible. I just installed it, ran it, and it was done. Same thing with Qdrant as well, and same thing with the Hugging Face library. So to me, usually, as soon as I see that something is easy to install, that usually means that it's good. And the same if the documentation to install it is good.

Francesco Zuppichini: It means that people thought about it and they care about writing good documentation, because they want people to use their tools. A lot of times for enterprise tools, like cloud enterprise services, documentation is terrible, because they know you're going to pay because you're an enterprise, and some manager decided five years ago to use that cloud provider and not the other. So I think, you know, if you see good documentation, that means that the people, the company, startup or enterprise behind it want you to use their software because they know it and they're proud of it. Like, they know that it's good. So usually this is my way of going. And then, of course, I watch a lot of YouTube videos, so I see people talking about different tech, et cetera. And if some YouTuber which I trust says, like, I tried this and it seems to work well, I will note it down.

Francesco Zuppichini: So then in the future I know, hey, for these things I think I'll use A, B, C, and this has already been tested by someone, so I'm going to use it. Another important thing is to reach out to your friends and networks and say, hey guys, I need to do this. Do you know, do you have a good stack that you've already tried and have experience with?

Demetrios: Yeah. With respect to the enterprise software type of tools, there was something that I saw that was hilarious. It was something along the lines of: customer and user is not the same thing. Customer is the one who pays, user is the one who suffers.

Francesco Zuppichini: That's really true for enterprise software, I need to tell you. So that's true.

Demetrios: Yeah, we've all been through it. So there's another question coming through in the chat about: would there be a collection for each embedded video, based on your unique video id?

Francesco Zuppichini: No. What you want to do, I mean, you could do that of course, but a collection should encapsulate the project that you're doing, more or less, in my mind. So in this case I just call it embeddings. Maybe I should have called it videos. So they are just going to be inside the same collection, they're just going to have different metadata.
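That design is easy to see in code: with everything in one collection, the scope of a question is just a payload filter. A small sketch with made-up video ids follows; `MatchAny` is the qdrant-client way to match one of several values.

```python
# Ask one video, a few videos, or the whole collection by swapping the filter.
one_video = models.Filter(
    must=[models.FieldCondition(key="metadata.video_id",
                                match=models.MatchValue(value="video_a"))]
)
a_few_videos = models.Filter(
    must=[models.FieldCondition(key="metadata.video_id",
                                match=models.MatchAny(any=["video_a", "video_b"]))]
)
all_videos = None  # passing no filter searches every video in the collection
```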
I think, and you need to correct me if I'm wrong from your side, from the Qdrant code, that searching things in the same collection is probably more effective to some degree. And imagine that if you have 1000 videos, you would need to create 1000 collections. And then I think, concept-wise, collections are meant to have data coming from the same source, with the same semantic value.

Francesco Zuppichini: So in my case, I have all videos. If I were to have different data, maybe from PDFs, probably I would just create another collection, right, if I don't want them to be in the same place and searched together. And one cool thing of having all the videos in the same collection is that I can just ask a question to all the videos at the same time if I want to, or I can change my filter and ask questions to two or three videos specifically. You cannot do that if you have one collection per video, right? Like, for instance, at work I was embedding PDFs and using Qdrant, and sometimes you need to talk with two PDFs at the same time, or three, or just one, or maybe all the PDFs in that folder. So I was just changing the filter, right? And that can only be done if they're all in the same collection.

Sabrina Aquino: Yeah, that's a great explanation of collections. And I do love your approach of having everything locally and having everything in a structured way so that you can really understand what you're doing. And I know you mentioned sometimes frameworks are not necessary. And I wonder, also from your side, when do you think a framework would be necessary, and does it have to do with scaling? What do you think?

Francesco Zuppichini: So that's a great question. So what frameworks in theory should give you is good interfaces, right? So a good interface means that if I'm following that interface, I know that I can always call something that implements that interface in the same way. Like, for instance, in Langchain, if I call a vector DB, I can just swap the vector DB and I can call it in the same way. If the interfaces are good, the framework is useful if you know that you are going to change stuff. In my case, I know from the beginning that I'm going to use Qdrant, I'm going to use Ollama, and I'm going to use SQLite. So why should I go through the hell of reading the framework documentation, installing libraries, and then you need to install a bunch of packages from the framework that you don't even know why you need them? Maybe you have a package conflict, et cetera, et cetera.

Francesco Zuppichini: If you already know what you want to do, then just code it and call it a day. Like, in this case, I know I'm not going to change the vector DB. If you think that you're going to change something, even with a simple approach it's fairly simple to change stuff. Like, I will say that if you know that you want to change your vector DB provider, either you define your own interface or you use a framework with an already defined interface. But be careful, because relying too much on a framework will... first of all, basically you don't know what's going on under the hood. For Langchain, and kudos to them, they were the first ones, they are very smart people, et cetera, et cetera.

Francesco Zuppichini: But they have inheritance hell in that code. And in order to understand how to do certain stuff, I had to look at the source code, right, and try to figure it out. So which class does this inherit from? And going straight up the chain in order to understand what behavior that class was supposed to have.
What if I pass this parameter? And sometimes defining an interface is straightforward: maybe you just want to define a couple of functions in a class, you call it, you just need to define the inputs and the outputs, and if you want to scale, you can just implement a new class that follows that interface. Yeah, that is at least my take. I first try to do stuff, and then if I need to scale, at least I already have something working and I can scale it, instead of kind of trying to do the perfect thing from the beginning.

Francesco Zuppichini: Also because I hate reading documentation, so I try to avoid doing that in general.

Sabrina Aquino: Yeah, I totally love this. It's about knowing, like, what's your end project? Do you actually need what you're going to build, and understanding what you're building behind the scenes? I think it's super nice. We're also having another question, which is: I haven't used Qdrant yet. Is the metadata also part of the embedding, i.e. prepended to the chunk? So basically he's asking if the metadata is also embedded. The answer for that... go ahead.

Francesco Zuppichini: I think you have a good article about a kind of search in which you also embed the title. Yeah, I remember you have a good article in which you showcase having chunks with the title from, I think, the section, right? And you first do a search, find the right title, and then you do a search inside, so all the chunks from that paragraph, I think from that section, if I'm not mistaken. It really depends on the use case, though. If you have a document full of information, split into a lot of paragraphs, very long ones, and you need to be very precise in what you want to fetch, you need to take advantage of the structure of the document, right?

Sabrina Aquino: Yeah, absolutely. The metadata goes as payload in Qdrant. So basically it's like a JSON type of information attached to your data that's not embedded. We also have documentation on it. I will answer in the comments as well. I think another question I have for you, Fran, is about the sort of evaluation, and how would you perform a little evaluation on this RAG that you created?

Francesco Zuppichini: Okay, so that is an interesting question, because everybody talks about metrics and evaluation. Most of the time you don't really have that, right? So you have benchmarks, right, and everybody can use a benchmark to evaluate their pipeline. But when you have domain-specific documents, like at work, for example, I'm doing RAG on insurance documents, now, how do I create a data set from that in order to evaluate my RAG? It's going to be very time consuming. So what we are trying to do is get a bunch of people who know these documents, take some paragraphs, try to ask a question that has the reply in there, and have basically a ground truth from their side. A lot of the time the reply has to be composed from different parts of the document. So, yeah, it's very hard.

Francesco Zuppichini: It's very hard. So what I would kind of suggest is, even with no benchmark, you empirically try it. If you're building a RAG that users are going to use, always include a way to collect feedback and collect statistics. So collect the conversations, if that is okay with your privacy rules. Because in my opinion, it's always better to put something in production than to wait too much time because you need to run all your metrics, et cetera, et cetera.
And as soon as people start using that, you kind of see if it is good enough. Maybe for the language model itself, that's a different task, because you need to be sure that it doesn't say weird stuff to the users. I don't really have the source-of-truth answer here. It's very hard to evaluate them.

Francesco Zuppichini: So what I know people also try to do is, they get some paragraphs or some chunks, they ask GPT-4 to generate a question and the answer based on the paragraph, and they use that as an auto-labeling way to create a data set to evaluate your RAG. That can also be effective, I guess. 100%, yeah.

Demetrios: And depending on your use case, you probably need more rigorous evaluation or less. Like in this case, what you're doing, it might not need that rigor.

Francesco Zuppichini: You can see, actually, I think it was Air Canada, right?

Demetrios: Yeah.

Francesco Zuppichini: If you have something that is facing paying users, then think a hundred times before doing that. In my case at work, I have something that is used by internal users and we communicate with them. So if my chatbot is saying something wrong, they will tell me, and the worst thing that can happen is that they need to manually look for the answer. But as soon as your chatbot needs to do something that can harm people that are going to pay, or medical stuff, you need to understand that for some use cases you need to apply certain rules, and for others you can be kind of more relaxed, I would say, based on the harm that your chatbot is going to generate.

Demetrios: Yeah, I think that's all the questions we've got for now. Appreciate you coming on here and chatting with us. And I also appreciate everybody listening in. Anyone who is not following Fran, go give him a follow, at least for the laughs, the chuckles. And huge thanks to you, Sabrina, for joining us, too. It was a pleasure having you here. I look forward to doing many more of these.

Sabrina Aquino: The pleasure is all mine, Demetrios, and it was a total pleasure. Fran, I learned a lot from your session today.

Francesco Zuppichini: Thank you so much. Thank you so much. And also go ahead and follow Qdrant on LinkedIn. They post a lot of cool stuff, and read the Qdrant blogs. They're very good. They're very good.

Demetrios: That's it. The team is going to love to hear that, I'm sure. So if you are doing anything cool with good old Qdrant, give us a ring so we can feature you in the Vector Space Talks. Until next time, don't get lost in vector space. We will see you all later. Have a good one, y'all.
qdrant-landing/content/blog/teaching-vector-databases-at-scale-alfredo-deza-vector-space-talks-019-2.md
--- draft: false title: Teaching Vector Databases at Scale - Alfredo Deza | Vector Space Talks slug: teaching-vector-db-at-scale short_description: Alfredo Deza tackles AI teaching, the intersection of technology and academia, and the value of consistent learning. description: Alfredo Deza discusses the practicality of machine learning operations, highlighting how personal interest in topics like wine datasets enhances engagement, while reflecting on the synergies between his professional sportsman discipline and the persistent, straightforward approach required for effectively educating on vector databases and large language models. preview_image: /blog/from_cms/alfredo-deza-bp-cropped.png date: 2024-04-09T03:06:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - Vector Space Talks - Coursera --- > *"So usually I get asked, why are you using Qdrant? What's the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There's one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it's easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well.”*\ — Alfredo Deza > Alfredo is a software engineer, speaker, author, and former Olympic athlete working in Developer Relations at Microsoft. He has written several books about programming languages and artificial intelligence and has created online courses about the cloud and machine learning. He currently is an Adjunct Professor at Duke University, and as part of his role, works closely with universities around the world like Georgia Tech, Duke University, Carnegie Mellon, and Oxford University where he often gives guest lectures about technology. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4HFSrTJWxl7IgQj8j6kwXN?si=99H-p0fKQ0WuVEBJI9ugUw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/3l6F6A_It0Q?feature=shared).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/3l6F6A_It0Q?si=cFZGAh7995iHilcY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Teaching-Vector-Databases-at-Scale---Alfredo-Deza--Vector-Space-Talks-019-e2hhjlo/a-ab3qp7u" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** How does a former athlete such as Alfredo Deza end up in this AI and Machine Learning industry? That’s what we’ll find out in this episode of Vector Space Talks. Let’s understand how his background as an olympian offers a unique perspective on consistency and discipline that's a real game-changer in this industry. Here are some things you’ll discover from this episode: 1. **The Intersection of Teaching and Tech:** Alfredo discusses on how to effectively bridge the gap between technical concepts and student understanding, especially when dealing with complex topics like vector databases. 2. 
**Simplified Learning:** Dive into Alfredo's advocacy for simplicity in teaching methods, mirroring his approach with Qdrant and the potential for a Rust in-memory implementation aimed at enhancing learning experiences. 3. **Beyond the Titanic Dataset:** Discover why Alfredo prefers to teach with a wine dataset he developed himself, underscoring the importance of using engaging subject matter in education. 4. **AI Learning Acceleration:** Alfredo discusses the struggle universities face to keep pace with AI advancements and how online platforms can offer a more up-to-date curriculum. 5. **Consistency is Key:** Alfredo draws parallels between the discipline required in high-level athletics and the ongoing learning journey in AI, zeroing in on his mantra, “There is no secret” to staying consistent. > Fun Fact: Alfredo tells the story of athlete Dick Fosbury's invention of the Fosbury Flop to highlight the significance of teaching simplicity. > ## Show notes: 00:00 Teaching machine learning, Python to graduate students.\ 06:03 Azure AI search service simplifies teaching, Qdrant facilitates learning.\ 10:49 Controversy over high jump style.\ 13:18 Embracing past for inspiration, emphasizing consistency.\ 15:43 Consistent learning and practice lead to success.\ 20:26 Teaching SQL uses SQLite, Rust has limitations.\ 25:21 Online platforms improve and speed up education.\ 29:24 Duke and Coursera offer specialized language courses.\ 31:21 Passion for wines, creating diverse dataset.\ 35:00 Encouragement for vector db discussion, wrap up.\ ## More Quotes from Alfredo: *"Qdrant makes it straightforward. We use it in-memory for my classes and I would love to see something similar setup in Rust to make teaching even easier.”*\ — Alfredo Deza *"Retrieval augmented generation is kind of like having an open book test. So the large language model is the student, and they have an open book so they can see the answers and then repackage that into their own words and provide an answer.”*\ — Alfredo Deza *"With Qdrant, I appreciate that the use of the Python API is so simple. It avoids the complexity that comes from having a back-end system like in Rust where you need an actual instance of the database running.”*\ — Alfredo Deza ## Transcript: Demetrios: What is happening? Everyone, welcome back to another vector space talks. I am Demetrios, and I am joined today by good old Sabrina. Where you at, Sabrina? Hello? Sabrina Aquino: Hello, Demetrios. I'm from Brazil. I'm in Brazil right now. I know that you are traveling currently. Demetrios: Where are you? At Kubecon in Paris. And it has been magnificent. But I could not wait to join the session today because we've got Alfredo coming at us. Alfredo Deza: What's up, dude? Hi. How are you? Demetrios: I'm good, man. It's been a while. I think the last time that we chatted was two years ago, maybe right before your book came out. When did the book come out? Alfredo Deza: Yeah, something like that. I would say a couple of years ago. Yeah. I wrote, co authored practical machine learning operations with no gift. And it was published on O'Reilly. Demetrios: Yeah. And that was, I think, two years ago. So you've been doing a lot of stuff since then. Let's be honest, you are maybe one of the most active men on the Internet. I always love seeing what you're doing. You're bringing immense value to everything that you touch. I'm really excited to be able to chat with you for this next 30 minutes. Alfredo Deza: Yeah, of course. Demetrios: Maybe just, we'll start it off. 
We're going to get into it when it comes to what you're doing and really what the space looks like right now. Right. But I would love to hear a little bit of what you've been up to since, for the last two years, because I haven't talked to you. Alfredo Deza: Yeah, that's right. Well, several different things, actually. Right after we chatted last time, I joined Microsoft to work in developer relations. Microsoft has a big group of folks working in developer relations. And basically, for me, it signaled my shift away from regular software engineering. I was primarily doing software engineering and thought that perhaps with the books and some of the courses that I had published, it was time for me to get into more teaching and providing useful content, which is really something very rewarding. And in developer relations, in advocacy in general, it's kind of like a way of teaching. We demonstrate technology, how it works from a technical point of view. Alfredo Deza: So aside from that, started working really closely with several different universities. I work with Georgia Tech, Oxford University, Carnegie Mellon University, and Duke University, where I've been working as an adjunct professor for a couple of years as well. So at Duke, what I do is I teach a couple of classes a year. One is on machine learning. Last year was machine learning operations, and this year it's going to, I think, hopefully I'm not messing anything up. I think we're going to shift a little bit to doing operations with large language models. And in the fall I teach a programming class for graduate students that want to join one of the graduate programs and they want to get a primer on Python. So I teach a little bit of that. Alfredo Deza: And in the meantime, also in partnership with Duke, getting a lot of courses out on Coursera, and from large language models to doing stuff with Azure, to machine learning operations, to rust, I've been doing a lot of rust lately, which I really like. So, yeah, so a lot of different things, but I think the core pillar for me remains being able to teach and spread the knowledge. Demetrios: Love it, man. And I know you've been diving into vector databases. Can you tell us more? Alfredo Deza: Yeah, well, the thing is that when you're trying to teach, and yes, one of the courses that we had out for large language models was applying retrieval augmented generation, which is the basis for vector databases, to see how it works. This is how it works. These are the components that you need. Let's create an application from scratch and see how it works. And for those that don't know, retrieval augmented generation is kind of like having. The other day I saw a description about this, which I really like, which is a way of, it's kind of like having an open book test. So the large language model is the student, and they have an open book so they can see the answers and then repackage that into their own words and provide an answer, which is kind of like what we do with vector databases in the retrieval augmented generation pattern. We've been putting a lot of examples on how to do these, and in the case of Azure, you're enabling certain services. Alfredo Deza: There's the Azure AI search service, which is really good. But sometimes when you're trying to teach specifically, it is useful to have a very straightforward way to do this and applying or creating a retrieval augmented generation pattern, it's kind of tricky, I think. We're not there yet to do it in a nice, straightforward way. 
So there are several different options, Qdrant being one of them. So usually I get asked, why are you using Qdrant? What's the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There's one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it's easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well. If something is very complex, if the list of requirements is very long, you're not going to be very happy, you're going to spend all this time trying to figure, and when you have, similar to what happens with automation, when you have a list of 20 different things that you need to, in order to, say, deploy a website, you're going to get things out of order, you're going to forget one thing, you're going to have a typo, you're going to mess it up, you're going to have to start from scratch, and you're going to get into a situation where you can't get out of it. And Qdrant does provide a very straightforward way to run the database, and that one is the in memory implementation with Python. Alfredo Deza: So you can actually write a little bit of python once you install the libraries and say, I want to instantiate a vector database and I wanted to run it in memory. So for teaching, this is great. It's like, hey, of course it's not for production, but just write these couple of lines and let's get right into it. Let's just start populating these and see how it works. And it works. It's great. You don't need to have all of these, like, wow, let's launch Kubernetes over here and let's have all of these dynamic. No, why? I mean, sure, you want to create a business model and you want to launch to production eventually, and you want to have all that running perfect. Alfredo Deza: But for this setup, like for understanding how it works, for trying baby steps into understanding vector databases, this is perfect. My one requirement, or my one wish list item is to have that in memory thing for rust. That would be pretty sweet, because I think it'll make teaching rust and retrieval augmented generation with rust much easier. I wouldn't have to worry about bringing up containers or external services. So that's the deal with rust. And I'll tell you one last story about why I think specifically making it easy to get started with so that I can teach it, so that others can learn from it, is crucial. I would say almost 50 years ago, maybe a little bit more, my dad went to Italy to have a course on athletics. My dad was involved in sports and he was going through this, I think it was like a six month specialization on athletics. Alfredo Deza: And he was in class and it had been recent that the high jump had transitioned from one style to the other. The previous style, the old style right now is the old style. It's kind of like, it was kind of like over the bar. It was kind of like a weird style. And it had recently transitioned to a thing called the Fosbury flop. This person, his last name is Dick Fosbury, invented the Fosbury flop. He said, no, I'm just going to go straight at it, then do a little curve and then jump over it. And then he did, and then he started winning everything. Alfredo Deza: And everybody's like, what this guy? 
Well, first they thought he was crazy, and they thought that dismissive of what he was trying to do. And there were people that sticklers that wanted to stay with the older style, but then he started beating records and winning medals, and so people were like, well, is this a good thing? Let's try it out. So there was a whole. They were casting doubt. It's like, is this really the thing? Is this really what we should be doing? So one of the questions that my dad had to answer in this specialization he did in Italy was like, which style is better, it's the old style or the new style? And so my dad said, it's the new style. And they asked him, why is the new style better? And he didn't choose the path of answering the, well, because this guy just won the Olympics or he just did a record over here that at the end is meaningless. What he said was, it is the better style because it's easier to teach and it is 100% correct. When you're teaching high jump, it is much easier to teach the Fosbury flop than the other style. Alfredo Deza: It is super hard. So you start seeing this parallel in teaching and learning where, but with this one, you have all of these world records and things are going great. Well, great. But is anybody going to try, are you going to have more people looking into it or are you going to have less? What is it that we're trying to do here? Right. Demetrios: Not going to lie, I did not see how you were going to land the plane on coming from the high jump into the vector database space, but you did it gracefully. That was well done. So, basically, the easier it is to teach, the more people are going to be able to jump on board and the more people are going to be able to get value out of it. Sabrina Aquino: I absolutely love it, by the way. It's a pleasure to meet you, Alfredo. And I was actually about to ask you. I love your background as an olympic athlete. Right. And I was wondering, do you make any connections or how do we interact this background with your current teaching and AI? And do you see any similarities or something coming from that approach into what you've applied? Alfredo Deza: Well, you're bringing a great point. It's taken me a very long time to feel comfortable talking about my professional sports past. I don't want to feel like I'm overwhelming anyone or trying to be like a show off. So I usually try not to mention, although I'm feeling more comfortable mentioning my professional past. But the only situations where I think it's good to talk about it is when I feel like there's a small chance that I might get someone thinking about the possibilities of what they can actually do and what they can try. And things that are seemingly complex might be achievable. So you mentioned similarities, but I think there are a couple of things that happen when you're an athlete in any sport, really, that you're trying to or you're operating at the very highest level and there's several things that happen there. You have to be consistent. Alfredo Deza: And it's something that I teach my kids as well. I have one of my kids, he's like, I did really a lot of exercise today and then for a week he doesn't do anything else. And he's like, now I'm going to do exercise again. And she's going to do 4 hours. And it's like, wait a second, wait a second. It's okay. You want to do it. This is great. Alfredo Deza: But no intensity. You need to be consistent. Oh, dad, you don't let me work out and it's like, no work out. 
Good, I support you, but you have to be consistent and slowly start ramping up and slowly start getting better. And it happens a lot with learning. We are in an era that concepts and things are advancing so fast that things are getting obsolete even faster. So you're always in this motion of trying to learn. So what I would say is the similarities are in the consistency. Alfredo Deza: You have to keep learning, you have to keep applying yourself. But it can be like, oh, today I'm going to read this whole book from start to end and you're just going to learn everything about, I don't know, rust. It's like, well, no, try applying rust a little bit every day and feel comfortable with it. And at the very end you will do better. Like, you can't go with high intensity because you're going to get burned out, you're going to overwhelmed and it's not going to work out. You don't go to the Olympics by working out for like a few months. Actually, a very long time ago, a reporter asked me, how many months have you been working out preparing for the Olympics? It's like, what do you mean with how many months? I've been training my whole life for this. What are we talking about? Demetrios: We're not talking in months or years. We're talking in lifetimes, right? Alfredo Deza: So you have to take it easy. You can't do that. And beyond that, consistency. Consistency goes hand in hand with discipline. I came to the US in 2006. I don't live like I was born in Peru and I came to the US with no degree. I didn't go to college. Well, I went to college for a few months and then I dropped out and I didn't have a career, I didn't have experience. Alfredo Deza: I was just recently married. I have never worked in my life because I used to be a professional athlete. And the only thing that I decided to do was to do amazing work, apply myself and try to keep learning and never stop learning. In the back of my mind, it's like, oh, I have a tremendous knowledge gap that I need to fulfill by learning. And actually, I have tremendous respect and I'm incredibly grateful by all of the people that opened doors for me and gave me an opportunity, one of them being Noah Giff, which I co authored a few books with him and some of the courses. And he actually taught me to write Python. I didn't know how to program. And he said, you know what? I think you should learn to write some python. Alfredo Deza: And I was like, python? Why would I ever need to do that? And I did. He's like, let's just find something to automate. I mean, what a concept. Find something to apply automation. And every week on Fridays, we'll just take a look at it and that's it. And we did that for a while. And then he said, you know what? You should apply for speaking at Python. How can I be speaking at a conference when I just started learning? It's like your perspective is different. Alfredo Deza: You just started learning these. You're going to do it in an interesting way. So I think those are concepts that are very important to me. Stay disciplined, stay consistent, and keep at it. The secret is that there's no secret. That's the bottom line. You have to keep consistent. Otherwise things are always making excuses. Alfredo Deza: Is very simple. Demetrios: The secret is there is no secret. That is beautiful. So you did kind of sprinkle this idea of, oh, I wish there was more stuff happening with Qdrant and rust. Can you talk a little bit more to that? Because one piece of Qdrant that people tend to love is that it's built in rust. Right. 
But also, I know that you mentioned before, could we get a little bit of this action so that I don't have to deal with any. What was it you were saying? The containers. Alfredo Deza: Yeah. Right. Now, if you want to have a proof of concept, and I always go for like, what's the easiest, the most straightforward, the less annoying things I need to do, the better. And with Python, the Python API for Qdrant, you can just write a few lines and say, I want to create an instance in memory and then that's it. The database is created for you. This is very similar, or I would say actually almost identical to how you run SQLite. Sqlite is the embedded database you can create in memory. And it's actually how I teach SQL as well. Alfredo Deza: When I have to teach SQl, I use sqlite. I think it's perfect. But in rust, like you said, Qdrant's backend is built on rust. There is no in memory implementation. So you are required to have an actual instance of the Qdrant database running. So you have a couple of options, but one of them probably means you'll have to bring up a container with Qdrant running and then you'll have to connect to that instance. So when you're teaching, the development environments are kind of constrained. Either you are in a lab somewhere like Crusader has labs, but those are self contained. Alfredo Deza: It's kind of tricky to get them running 100%. You can run multiple containers at the same time. So things start becoming more complex. Not only more complex for the learner, but also in this case, like the teacher, me who wants to figure out how to make this all run in a very constrained environment. And that makes it tricky. And I fasted the team, by the way, and I was told that maybe at some point they can do some magic and put the in memory implementation on the rust side of things, which I think it would be tremendous. Sabrina Aquino: We're going to advocate for that on our side. We're also going to be asking for it. And I think this is really good too. It really makes it easier. Me as a student not long ago, I do see what you mean. It's quite hard to get it all working very fast in the time of a class that you don't have a lot of time and students can get. I don't know, it's quite complex. I do get what you mean. Sabrina Aquino: And you also are working both on the tech industry and on academia, which I think is super interesting. And I always kind of feel like those two are a bit disconnected sometimes. And I was wondering what you think that how important is the collaboration of these two areas considering how fast the AI space is going to right now? And what are your thoughts? Alfredo Deza: Well, I don't like generalizing, but I'm going to generalize right now. I would say most universities are several steps behind, and there's a lot of complexities involved in higher education specifically. Most importantly, these institutions tend to be fairly large, and with fairly large institutions, what do you get? Oh, you get the magical bureaucracy for anything you want to do. Something like, oh, well, you need to talk to that department that needs to authorize something, that needs to go to some other department, and it's like, I'm going to change the curriculum. It's like, no, you can't. What does that mean? I have actually had conversations with faculty in universities where they say, listen, curricula. Yeah, we get that. We need to update it, but we change curricula every five years. Alfredo Deza: And so. See you in a while. It's been three years. We have two more years to go. 
See you in a couple of years. And that's detrimental to students now. I get it. Building curricula, it's very hard. Alfredo Deza: It takes a lot of work for the faculty to put something together. So it is something that, from a faculty perspective, it's like they're not going to get paid more if they update the curriculum. Demetrios: Right. Alfredo Deza: And it's a massive amount of work now that, of course, comes to the detriment of the learner. The student will be under service because they will have to go through curricula that is fairly dated. Now, there are situations and there are programs where this doesn't happen. And Duke, I've worked with several. They're teaching Llama file, which was built by Mozilla. And when did Llama file came out? It was just like a few months ago. And I think it's incredible. And I think those skills that are the ones that students need today in order to not only learn these things, but also be able to apply them when they're looking for a job or trying to professionally even apply them into their day to day, now that's one side of things. Alfredo Deza: But there's the other aspect. In the case of Duke, as well as other universities out there, they're using these online platforms so that they can put courses out there faster. Do you really need to go through a four year program to understand how retrieval augmented generation works? Or how to implement it? I would argue no, but would you be better out, like, taking a course that will take you perhaps a couple of weeks to go through and be fairly proficient? I would say yes, 100%. And you see several institutions putting courses out there that are meaningful, that are useful, that they can cope with the speed at which things are needed. I think it's kind of good. And I think that sometimes we tend to think about knowledge and learning things, kind of like in a bubble, especially here in the US. I think there's this college is this magical place where all of the amazing things happen. And if you don't go to college, things are going to go very bad for you. Alfredo Deza: And I don't think that's true. I think if you like college, if you like university, by all means take advantage of it. You want to experience it. That sounds great. I think there's tons of opportunity to do it outside of the university or the college setting and taking online courses from validated instructors. They have a good profile. Not someone that just dumped something on genetic AI and started. Demetrios: Someone like you. Alfredo Deza: Well, if you want to. Yeah, sure, why not? I mean, there's students that really like my teaching style. I think that's great. If you don't like my teaching style. Sometimes I tend to go a little bit slower because I don't want to overwhelm anyone. That's all good. But there is opportunity. And when I mention these things, people are like, oh, really? I'm not advertising for Coursera or anything else, but some of these platforms, if you pay a monthly fee, I think it's between $40 and $60. Alfredo Deza: I think on the expensive side, you can take advantage of all of these courses and as much as you can take them. Sometimes even companies say, hey, you have a paid subscription, go take it all. And I've met people like that. It's like, this is incredible. I'm learning so much. Perfect. I think there's a mix of things. I don't think there's like a binary answer, like, oh, you need to do this, or, no, don't do that, and everything's going to be well again. Demetrios: Yeah. 
Can you talk a little bit more about your course? And if I wanted to go on Coursera, what can I expect from. Alfredo Deza: You know, and again, I don't think as much as I like talking about my courses and the things that I do, I want to emphasize, like, if someone is watching this video or listening into what we're talking about, find something that is interesting to you and find a course that kind of delivers that thing, that sliver of interesting stuff, and then try it out. I think that's the best way. Don't get overwhelmed by. It's like, is this the right vector database that I should be learning? Is this instructor? It's like, no, try it out. What's going to happen? You don't like it when you're watching a bad video series or docuseries on Netflix or any streaming platform? Do you just like, I pay my $10 a month, so I'm going to muster through this whole 20 more episodes of this thing that I don't like. It's meaningless. It doesn't matter. Just move on. Alfredo Deza: So having said that, on Coursera specifically with Duke University, we tend to put courses out there that are going to be used in our programs in the things that I teach. For example, we just released the large language models. Specialization and specialization is a grouping of between four and six courses. So in there we have doing large language models with Azure, for example, introduction to generative AI, having a very simple rag pattern with Qdrant. I also have examples on how to do it with Azure AI search, which I think is pretty cool as well. How to do it locally with Llama file, which I think is great. You can have all of these large language models running locally, and then you have a little bit of Qdrant sprinkle over there, and then you have rack pattern. Now, I tend to teach with things that I really like, and I'll give you a quick example. Alfredo Deza: I think there's three data sets that are one of the top three most used data sets in all of machine learning and data science. Those are the Boston housing market, the diabetes data set in the US, and the other one is the Titanic. And everybody uses those. And I don't really understand why. I mean, perhaps I do understand why. It's because they're easy, they're clean, they're ready to go. Nothing's ever wrong with these, and everybody has used them to boredom. But for the life of me, you wouldn't be able to convince me to use any of those, because these are not topics that I really care about and they don't resonate with me. Alfredo Deza: The Titanic specifically is just horrid. Well, if I was 37 and I'm on first class and I'm male, would I survive? It's like, what are we trying to do here? How is this useful to anyone? So I tend to use things that I like, and I'm really passionate about wine. So I built my own data set, which is a collection of wines from all over the world, they have the ratings, they have the region, they have the type of grape and the notes and the name of the wine. So when I'm teaching them, like, look at this, this is amazing. It's wines from all over the world. So let's do a little bit of things here. So, for rag, what I was able to do is actually in the courses as well. I do, ah, I really know wines from Argentina, but these wines, it would be amazing if you can find me not a Malbec, but perhaps a cabernet franc. Alfredo Deza: That is amazing. From, it goes through Qdrant, goes back to llama file using some large language model or even small language model, like the Phi 2 from Microsoft, I think is really good. And he goes, it tells. 
Yeah, sure. I get that you want to have some good wines. Here's some good stuff that I can give you. And so it's great, right? I think it's great. So I think those kinds of things that are interesting to the person that is teaching or presenting, I think that's the key, because whenever you're talking about things that are very boring, that you do not care about, things are not going to go well for you. Alfredo Deza: I mean, if I didn't like teaching, if I didn't like vector databases, you would tell right away. It's like, well, yes, I've been doing stuff with the vector databases. They're good. Yeah, Qdrant, very good. You would tell right away. I can't lie. Very good. Demetrios: You can't fool anybody. Alfredo Deza: No. Demetrios: Well, dude, this is awesome. We will drop a link to the chat. We will drop a link to the course in the chat so that in case anybody does want to go on this wine tasting journey with you, they can. And I'm sure there's all kinds of things that will spark the creativity of the students as they go through it, because when you were talking about that, I was like, oh, it would be really cool to make that same type of thing, but with ski resorts there, you go around the world. And if I want this type of ski resort, I'm going to just ask my chat bot. So I'm excited to see what people create with it. I also really appreciate you coming on here, giving us your time and talking through all this. It's been a pleasure, as always, Alfredo. Demetrios: Thank you so much. Alfredo Deza: Yeah, thank you. Thank you for having me. Always happy to chat with you. I think Qdrant is doing a very solid product. Hopefully, my wish list item of in memory in rust comes to fruition, but I get it. Sometimes there are other priorities. It's all good. Yeah. Alfredo Deza: If anyone wants to connect with me, I'm always active on LinkedIn primarily. Always happy to connect with folks and talk about learning and improving and always being a better person. Demetrios: Excellent. Well, we will sign off, and if anyone else out there wants to come on here and talk to us about vector databases, we're always happy to have you. Feel free to reach out. And remember, don't get lost in vector space, folks. We will see you on the next one. Sabrina Aquino: Good night. Thank you so much.
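For readers who want to try the wine-recommendation flow Alfredo describes, here is a minimal sketch of that RAG loop: embed the question, retrieve similar wines from Qdrant, then let a locally served model (for example via Llamafile's OpenAI-compatible endpoint) phrase the answer. This is not the course's code; the `wines` collection, its payload fields, the embedding model, and the local endpoint details are all assumptions.

```python
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import FieldCondition, Filter, MatchValue
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # embedding model is an assumption
qdrant = QdrantClient(url="http://localhost:6333")
llm = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # Llamafile's local server

question = "Find me a great Cabernet Franc from Argentina, not a Malbec."

# 1. Retrieve candidate wines from a hypothetical `wines` collection,
#    filtering on a hypothetical `country` payload field.
hits = qdrant.search(
    collection_name="wines",
    query_vector=encoder.encode(question).tolist(),
    query_filter=Filter(
        must=[FieldCondition(key="country", match=MatchValue(value="Argentina"))]
    ),
    limit=5,
)
context = "\n".join(str(hit.payload) for hit in hits)

# 2. Let the locally served model answer, grounded in the retrieved wines.
answer = llm.chat.completions.create(
    model="local-model",  # Llamafile typically ignores or accepts arbitrary model names
    messages=[
        {"role": "system", "content": "Recommend wines using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```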
qdrant-landing/content/blog/the-bitter-lesson-of-retrieval-in-generative-language-model-workflows-mikko-lehtimäki-vector-space-talks.md
--- draft: false title: The Bitter Lesson of Retrieval in Generative Language Model Workflows - Mikko Lehtimäki | Vector Space Talks slug: bitter-lesson-generative-language-model short_description: Mikko Lehtimäki discusses the challenges and techniques in implementing retrieval augmented generation for Yokot AI description: Mikko Lehtimäki delves into the intricate world of retrieval-augmented generation, discussing how Yokot AI manages vast diverse data inputs and how focusing on re-ranking can massively improve LLM workflows and output quality. preview_image: /blog/from_cms/mikko-lehtimäki-cropped.png date: 2024-01-29T16:31:02.511Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - generative language model - Retrieval Augmented Generation - Softlandia --- > *"If you haven't heard of the bitter lesson, it's actually a theorem. It's based on a blog post by Ricard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans.”*\ -- Mikko Lehtimäki > Dr. Mikko Lehtimäki is a data scientist, researcher and software engineer. He has delivered a range of data-driven solutions, from machine vision for robotics in circular economy to generative AI in journalism. Mikko is a co-founder of Softlandia, an innovative AI solutions provider. There, he leads the development of YOKOTAI, an LLM-based productivity booster that connects to enterprise data. Recently, Mikko has contributed software to Llama-index and Guardrails-AI, two leading open-source initiatives in the LLM space. He completed his PhD in the intersection of computational neuroscience and machine learning, which gives him a unique perspective on the design and implementation of AI systems. With Softlandia, Mikko also hosts chill hybrid-format data science meetups where everyone is welcome to participate. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/5hAnDq7MH9qjjtYVjmsGrD?si=zByq7XXGSjOdLbXZDXTzoA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/D8lOvz5xp5c).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/D8lOvz5xp5c?si=k9tIcDf31xqjqiv1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/The-Bitter-Lesson-of-Retrieval-in-Generative-Language-Model-Workflows---Mikko-Lehtimki--Vector-Space-Talk-011-e2evek4/a-aat2k24" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Aren’t you curious about what the bitter lesson is and how it plays out in generative language model workflows? Check it out as Mikko delves into the intricate world of retrieval-augmented generation, discussing how Yokot AI manages vast diverse data inputs and how focusing on re-ranking can massively improve LLM workflows and output quality. 5 key takeaways you’ll get from this episode: 1. **The Development of Yokot AI:** Mikko detangles the complex web of how Softlandia's in-house stack is changing the game for language model applications. 2. 
**Unpacking Retrieval-Augmented Generation:** Learn the rocket science behind uploading documents and scraping the web for that nugget of insight, all through the prowess of Yokot AI's LLMs. 3. **The "Bitter Lesson" Theory:** Dive into the theorem that's shaking the foundations of AI, suggesting the supremacy of data and computing over human design. 4. **High-Quality Content Generation:** Understand how the system's handling of massive data inputs is propelling content quality to stratospheric heights. 5. **Future Proofing with Re-Ranking:** Discover why improving the re-ranking component might be akin to discovering a new universe within our AI landscapes. > Fun Fact: Yokot AI incorporates a retrieval augmented generation mechanism to facilitate the retrieval of relevant information, which allows users to upload and leverage their own documents or scrape data from the web. > ## Show notes: 00:00 Talk on retrieval for language models and Yokot AI platform.\ 06:24 Data flexibility in various languages leads progress.\ 10:45 User inputs document, system converts to vectors.\ 13:40 Enhance data quality, reduce duplicates, streamline processing.\ 19:20 Reducing complexity by focusing on re-ranker.\ 21:13 Retrieval process enhances efficiency of language model.\ 24:25 Information retrieval methods evolving, leveraging data, computing.\ 28:11 Optimal to run lightning on local hardware. ## More Quotes from Mikko: "*We used to build image analysis on this type of features that we designed manually... Whereas now we can just feed a bunch of images to a transformer, and we'll get beautiful bounding boxes and semantic segmentation outputs without building rules into the system.*”\ -- Mikko Lehtimäki *"We cannot just leave it out and hope that someday soon we will have a language model that doesn't require us fetching the data for it in such a sophisticated manner. The reranker is a component that can leverage data and compute quite efficiently, and it doesn't require that much manual craftmanship either.”*\ -- Mikko Lehtimäki *"We can augment the data we store, for example, by using multiple chunking strategies or generating question answer pairs from the user's documents, and then we'll embed those and look them up when the queries come in.”*\ -- Mikko Lehtimäki in improving data quality in rack stack ## Transcript: Demetrios: What is happening? Everyone, it is great to have you here with us for yet another vector space talks. I have the pleasure of being joined by Mikko today, who is the co founder of Softlandia, and he's also lead data scientist. He's done all kinds of great software engineering and data science in his career, and currently he leads the development of Yokot AI, which I just learned the pronunciation of, and he's going to tell us all about it. But I'll give you the TLDR. It's an LLM based productivity booster that can connect to your data. What's going on, Mikko? How you doing, bro? Mikko Lehtimäki: Hey, thanks. Cool to be here. Yes. Demetrios: So, I have to say, I said it before we hit record or before we started going live, but I got to say it again. The talk title is spot on. Your talk title is the bitter lessons of retrieval in generative language model workflows. Mikko Lehtimäki: Exactly. Demetrios: So I'm guessing you've got a lot of hardship that you've been through, and you're going to hopefully tell us all about it so that we do not have to make the same mistakes as you did. We can be wise and learn from your mistakes before we have to make them ourselves, right? 
All right. That's a great segue into you getting into it, man. I know you got to talk. I know you got some slides to share, so feel free to start throwing those up on the screen. And for everyone that is here joining, feel free to add some questions in the chat. I'll be monitoring it so that in case you have any questions, I can jump in and make sure that Mikko answers them before he moves on to the next slide. All right, Mikko, I see your screen, bro. Demetrios: This is good stuff. Mikko Lehtimäki: Cool. So, shall we get into? Yeah. My name is Mikko. I'm the chief data scientist here at Softlandia. I finished my phd last summer and have been doing the Softlandia for two years now. I'm also a contributor to some open source AI LLM libraries like Llama index and cartrails AI. So if you haven't checked those out ever, please do. Here at Softlandia, we are primarily an AI consultancy that focuses on end to end AI solutions, but we've also developed our in house stack for large language model applications, which I'll be discussing today. Mikko Lehtimäki: So the topic of the talk is a bit provocative. Maybe it's a bitter lesson of retrieval for large language models, and it really stems from our experience in building production ready retrieval augmented generation solutions. I just want to say it's not really a lecture, so I'm going to tell you to do this or do that. I'll just try to walk you through the thought process that we've kind of adapted when we develop rack solutions, and we'll see if that resonates with you or not. So our LLM solution is called Yokot AI. It's really like a platform where enterprises can upload their own documents and get language model based insights from them. The typical example is question answering from your documents, but we're doing a bit more than that. For example, users can generate long form documents, leveraging their own data, and worrying about the token limitations that you typically run in when you ask an LLM to output something. Mikko Lehtimäki: Here you see just a snapshot of the data management view that we have built. So users can bring their own documents or scrape the web, and then access the data with LLMS right away. This is the document generation output. It's longer than you typically see, and each section can be based on different data sources. We've got different generative flows, like we call them, so you can take your documents and change the style using llms. And of course, the typical chat view, which is really like the entry point, to also do these workflows. And you can see the sources that the language model is using when you're asking questions from your data. And this is all made possible with retrieval augmented generation. Mikko Lehtimäki: That happens behind the scenes. So when we ask the LLM to do a task, we're first fetching data from what was uploaded, and then everything goes from there. So we decide which data to pull, how to use it, how to generate the output, and how to present it to the user so that they can keep on conversing with the data or export it to their desired format, whatnot. But the primary challenge with this kind of system is that it is very open ended. So we don't really set restrictions on what kind of data the users can upload or what language the data is in. So, for example, we're based in Finland. Most of our customers are here in the Nordics. They talk, speak Finnish, Swedish. Mikko Lehtimäki: Most of their data is in English, because why not? 
And they can just use whatever language they feel with the system. So we don't want to restrict any of that. The other thing is the chat view as an interface, it really doesn't set much limits. So the users have the freedom to do the task that they choose with the system. So the possibilities are really broad that we have to prepare for. So that's what we are building. Now, if you haven't heard of the bitter lesson, it's actually a theorem. It's based on a blog post by Ricard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans. Mikko Lehtimäki: So for example, I have an illustration here showing how this has manifested in image analysis. So on the left hand side, you see the output from an operation that extracts gradients from images. We used to build image analysis on this type of features that we designed manually. We would run some kind of edge extraction, we would count corners, we would compute the edge distances and design the features by hand in order to work with image data. Whereas now we can just feed a bunch of images to a transformer, and we'll get beautiful bounding boxes and semantic segmentation outputs without building rules into the system. So that's a prime example of the bitter lesson in action. Now, if we take this to the context of rack or retrieval augmented generation, let's have a look first at the simple rack architecture. Why do we do this in the first place? Well, it's because the language models themselves, they don't have up to date data because they've been trained a while ago. Mikko Lehtimäki: You don't really even know when. So we need to give them access to more recent data, and we need a method for doing that. And the other thing is problems like hallucinations. We found that if you just ask the model a question that is in the training data, you won't get always reliable results. But if you can crown the model's answers with data, you will get more factual results. So this is what can be done with the rack as well. And the final thing is that we just cannot give a book, for example, in one go the language model, because even if theoretically it could read the input in one go, the result quality that you get from the language model is going to suffer if you feed it too much data at once. So this is why we have designed retrieval augmented generation architectures. Mikko Lehtimäki: And if we look at this system on the bottom, you see the typical data ingestion. So the user gives a document, we slice it to small chunks, and we compute a numerical representation with vector embeddings and store those in a vector database. Why a vector database? Because it's really efficient to retrieve vectors from it when we get users query. So that is also embedded and it's used to look up relevant sources from the data that was previously uploaded efficiently directly on the database, and then we can fit the resulting text, the language model, to synthesize an answer. And this is how the RHe works in very basic form. Now you can see that if you have only a single document that you work with, it's nice if the problem set that you want to solve is very constrained, but the more data you can bring to your system, the more workflows you can build on that data. 
So if you have, for example, access to a complete book or many books, it's easy to see you can also generate higher quality content from that data. So this architecture really must be such that it can also make use of those larger amounts of data. Mikko Lehtimäki: Anyway, once you implement this for the first time, it really feels like magic. It tends to work quite nicely, but soon you'll notice that it's not suitable for all kinds of tasks. Like you will see sometimes that, for example, the lists. If you retrieve lists, they may be broken. If you ask questions that are document comparisons, you may not get complete results. If you run summarization tasks without thinking about it anymore, then that will most likely lead to super results. So we'll have to extend the architecture quite a bit to take into account all the use cases that we want to enable with bigger amounts of data that the users upload. And this is what it may look like once you've gone through a few design iterations. Mikko Lehtimäki: So let's see, what steps can we add to our rack stack in order to make it deliver better quality results? If we start from the bottom again, we can see that we try to enhance the quality of the data that we upload by adding steps to the data ingestion pipeline. We can augment the data we store, for example, by using multiple chunking strategies or generating question answer pairs from the user's documents, and then we'll embed those and look them up when the queries come in. At the same time, we can reduce the data we upload, so we want to make sure there are no duplicates. We want to clean low quality things like HTML stuff, and we also may want to add some metadata so that certain data, for example references, can be excluded from the search results if they're not needed to run the tasks that we like to do. We've modeled this as a stream processing pipeline, by the way. So we're using Bytewax, which is another really nice open source framework. Just a tiny advertisement we're going to have a workshop with Bytewax about rack on February 16, so keep your eyes open for that. At the center I have added different databases and different retrieval methods. Mikko Lehtimäki: We may, for example, add keyword based retrieval and metadata filters. The nice thing is that you can do all of this with quattron if you like. So that can be like a one stop shop for your document data. But some users may want to experiment with different databases, like graph databases or NoSQL databases and just ordinary SQL databases as well. They can enable different kinds of use cases really. So it's up to your service which one is really useful for you. If we look more to the left, we have a component called query planner and some query routers. And this really determines the response strategy. Mikko Lehtimäki: So when you get the query from the user, for example, you want to take different steps in order to answer it. For example, you may want to decompose the query to small questions that you answer individually, and each individual question may take a different path. So you may want to do a query based on metadata, for example pages five and six from a document. Or you may want to look up based on keywords full each page or chunk with a specific word. And there's really like a massive amount of choices how this can go. Another example is generating hypothetical documents based on the query and embedding those rather than the query itself. That will in some cases lead to higher quality retrieval results. 
But now all this leads into the right side of the query path. Mikko Lehtimäki: So here we have a re ranker. So if we implement all of this, we end up really retrieving a lot of data. We typically will retrieve more than it makes sense to give to the language model in a single call. So we can add a re ranker step here and it will firstly filter out low quality retrieved content and secondly, it will put the higher quality content on the top of the retrieved documents. And now when you pass this reranked content to the language model, it should be able to pay better attention to the details that actually matter given the query. And this should lead to you better managing the amount of data that you have to handle with your final response generator, LLM. And it should also make the response generator a bit faster because you will be feeding slightly less data in one go. The simplest way to build a re ranker is probably just asking a large language model to re rank or summarize the content that you've retrieved before you feed it to the language model. Mikko Lehtimäki: That's one way to do it. So yeah, that's a lot of complexity and honestly, we're not doing all of this right now with Yokot AI, either. We've tried all of it in different scopes, but really it's a lot of logic to maintain. And to me this just like screams the bitter lesson, because we're building so many steps, so much logic, so many rules into the system, when really all of this is done just because the language model can't be trusted, or it can't be with the current architectures trained reliably, or cannot be trained in real time with the current approaches that we have. So there's one thing in this picture, in my opinion, that is more promising than the others for leveraging data and compute, which should dominate the quality of the solution in the long term. And if we focus only on that, or not only, but if we focus heavily on that part of the process, we should be able to eliminate some complexity elsewhere. So if you're watching the recording, you can pause and think what this component may be. But in my opinion, it is the re ranker at the end. Mikko Lehtimäki: And why is that? Well, of course you could argue that the language model itself is one, but with the current architectures that we have, I think we need the retrieval process. We cannot just leave it out and hope that someday soon we will have a language model that doesn't require us fetching the data for it in such a sophisticated manner. The reranker is a component that can leverage data and compute quite efficiently, and it doesn't require that much manual craftmanship either. It's a stakes in samples and outputs samples, and it plays together really well with efficient vector search that we have available now. Like quatrant being a prime example of that. The vector search is an initial filtering step, and then the re ranker is the secondary step that makes sure that we get the highest possible quality data to the final LLM. And the efficiency of the re ranker really comes from the fact that it doesn't have to be a full blown generative language model so often it is a language model, but it doesn't have to have the ability to generate GPT four level content. It just needs to understand, and in some, maybe even a very fixed way, communicate the importance of the inputs that you give it. Mikko Lehtimäki: So typically the inputs are the user's query and the data that was retrieved. 
Like I mentioned earlier, the easiest way to use a read ranker is probably asking a large language model to rerank your chunks or sentences that you retrieved. But there are also models that have been trained specifically for this, the Colbert model being a primary example of that and we also have to remember that the rerankers have been around for a long time. They've been used in traditional search engines for a good while. We just now require a bit higher quality from them because there's no user checking the search results and deciding which of them is relevant. After the fact that the re ranking has already been run, we need to trust that the output of the re ranker is high quality and can be given to the language model. So you can probably get plenty of ideas from the literature as well. But the easiest way is definitely to use LLM behind a simple API. Mikko Lehtimäki: And that's not to say that you should ignore the rest like the query planner is of course a useful component, and the different methods of retrieval are still relevant for different types of user queries. So yeah, that's how I think the bitter lesson is realizing in these rack architectures I've collected here some methods that are recent or interesting in my opinion. But like I said, there's a lot of existing information from information retrieval research that is probably going to be rediscovered in the near future. So if we summarize the bitter lesson which we have or are experiencing firsthand, states that the methods that leverage data and compute will outperform the handcrafted approaches. And if we focus on the re ranking component in the RHE, we'll be able to eliminate some complexity elsewhere in the process. And it's good to keep in mind that we're of course all the time waiting for advances in the large language model technology. But those advances will very likely benefit the re ranker component as well. So keep that in mind when you find new, interesting research. Mikko Lehtimäki: Cool. That's pretty much my argument finally there. I hope somebody finds it interesting. Demetrios: Very cool. It was bitter like a black cup of coffee, or bitter like dark chocolate. I really like these lessons that you've learned, and I appreciate you sharing them with us. I know the re ranking and just the retrieval evaluation aspect is something on a lot of people's minds right now, and I know a few people at Qdrant are actively thinking about that too, and how to make it easier. So it's cool that you've been through it, you've felt the pain, and you also are able to share what has helped you. And so I appreciate that. In case anyone has any questions, now would be the time to ask them. Otherwise we will take it offline and we'll let everyone reach out to you on LinkedIn, and I can share your LinkedIn profile in the chat to make it real easy for people to reach out if they want to, because this was cool, man. Demetrios: This was very cool, and I appreciate it. Mikko Lehtimäki: Thanks. I hope it's useful to someone. Demetrios: Excellent. Well, if that is all, I guess I've got one question for you. Even though we are kind of running up on time, so it'll be like a lightning question. You mentioned how you showed the really descriptive diagram where you have everything on there, and it's kind of like the dream state or the dream outcome you're going for. What is next? What are you going to create out of that diagram that you don't have yet? 
Mikko Lehtimäki: You want the lightning answer would be really good to put this run on a local hardware completely. I know that's not maybe the algorithmic thing or not necessarily in the scope of Yoko AI, but if we could run this on a physical device in that form, that would be super. Demetrios: I like it. I like it. All right. Well, Mikko, thanks for everything and everyone that is out there. All you vector space astronauts. Have a great day. Morning, night, wherever you are at in the world or in space. And we will see you later. Demetrios: Thanks. Mikko Lehtimäki: See you.
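Mikko's core recommendation, retrieve broadly and then let a re-ranker decide what actually reaches the generator, is straightforward to prototype. The sketch below only illustrates that two-stage pattern and is not Yokot AI's implementation; the collection name, the `text` payload field, and both model choices are assumptions.

```python
from qdrant_client import QdrantClient
from sentence_transformers import CrossEncoder, SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")               # bi-encoder for first-stage retrieval
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # cross-encoder re-ranker (assumed choice)
qdrant = QdrantClient(url="http://localhost:6333")

query = "How do we handle multilingual documents?"

# Stage 1: cheap, broad vector search. Retrieve more than the LLM should see.
hits = qdrant.search(
    collection_name="docs",                      # hypothetical collection
    query_vector=encoder.encode(query).tolist(),
    limit=50,
)
chunks = [hit.payload["text"] for hit in hits]   # assumes a `text` payload field

# Stage 2: score (query, chunk) pairs with the cross-encoder and keep the best few.
scores = reranker.predict([(query, chunk) for chunk in chunks])
top_chunks = [chunk for _, chunk in sorted(zip(scores, chunks), reverse=True)[:5]]

# `top_chunks` is what you would actually pass to the response-generating LLM.
print(top_chunks)
```

The design point is the one Mikko makes in the talk: the first stage leans on the vector index for recall, while the second stage spends compute only on the small candidate set, so the generator receives less but better-ordered context.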
qdrant-landing/content/blog/using-qdrant-and-langchain.md
---
draft: false
title: "Integrating Qdrant and LangChain for Advanced Vector Similarity Search"
short_description: Discover how Qdrant and LangChain can be integrated to enhance AI applications.
description: Discover how Qdrant and LangChain can be integrated to enhance AI applications with advanced vector similarity search technology.
preview_image: /blog/using-qdrant-and-langchain/qdrant-langchain.png
date: 2024-03-12T09:00:00Z
author: David Myriel
featured: true
tags:
  - Qdrant
  - LangChain
  - LangChain integration
  - Vector similarity search
  - AI LLM (large language models)
  - LangChain agents
  - Large Language Models
---

> *"Building AI applications doesn't have to be complicated. You can leverage pre-trained models and support complex pipelines with a few lines of code. LangChain provides a unified interface, so that you can avoid writing boilerplate code and focus on the value you want to bring."* Kacper Lukawski, Developer Advocate, Qdrant

## Long-Term Memory for Your GenAI App

Qdrant's vector database quickly grew due to its ability to make Generative AI more effective. On its own, an LLM can be used to build a process-altering invention. With Qdrant, you can turn this invention into a production-level app that brings real business value.

The use of vector search in GenAI now has a name: **Retrieval Augmented Generation (RAG)**. [In our previous article](/articles/rag-is-dead/), we argued why RAG is an essential component of AI setups, and why large-scale AI can't operate without it. Numerous case studies explain that AI applications are simply too costly and resource-intensive to run using only LLMs.

> Going forward, the solution is to leverage composite systems that use models and vector databases.

**What is RAG?** Essentially, a RAG setup turns Qdrant into long-term memory storage for LLMs. As a vector database, Qdrant manages the efficient storage and retrieval of user data. Adding relevant context to LLMs can vastly improve user experience, leading to better retrieval accuracy, faster query speed and lower use of compute.

Augmenting your AI application with vector search reduces hallucinations, a situation where AI models produce legitimate-sounding but made-up responses. Qdrant streamlines this process of retrieval augmentation, making it faster, more efficient and easier to scale. When you are accessing vast amounts of data (hundreds or thousands of documents), vector search helps you sort through relevant context. **This makes RAG a primary candidate for enterprise-scale use cases.**

## Why LangChain?

Retrieval Augmented Generation is not without its challenges and limitations. One of the main setbacks for app developers is managing the entire setup. Integrating a retriever and a generator into a single system raises the level of complexity and increases the computational resources required.

[LangChain](https://www.langchain.com/) is a framework that makes developing RAG-based applications much easier. It unifies interfaces to different libraries, including major embedding providers like OpenAI or Cohere and vector stores like Qdrant. With LangChain, you can focus on creating tangible GenAI applications instead of writing your logic from the ground up.

> Qdrant is one of the **top supported vector stores** on LangChain, with [extensive documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) and [examples](https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query).
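Before walking through the flow, here is what that integration can look like in a few lines of Python. This is a minimal sketch, assuming the `langchain-community` and `langchain-openai` packages and an in-memory Qdrant instance; treat it as an illustration rather than the article's canonical setup.

```python
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings

texts = [
    "Qdrant is a vector database and vector similarity search engine.",
    "LangChain is a framework for developing applications powered by LLMs.",
    "RAG combines retrieval from a vector store with LLM generation.",
]

# Embed the texts and load them into an in-memory Qdrant collection.
vector_store = Qdrant.from_texts(
    texts,
    OpenAIEmbeddings(),      # requires OPENAI_API_KEY in the environment
    location=":memory:",     # swap for url="http://localhost:6333" with a running instance
    collection_name="demo",
)

# Turn the store into a retriever and fetch the most relevant documents for a query.
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
for doc in retriever.invoke("What does Qdrant do?"):
    print(doc.page_content)
```

Swap `location=":memory:"` for the URL of a running Qdrant instance or a Qdrant Cloud cluster when you move beyond prototyping.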
**How it Works:** LangChain receives a query and computes the query vector with an embedding model. Then, it dispatches the vector to a vector database, retrieving relevant documents. Finally, both the query and the retrieved documents are sent to the large language model to generate an answer.

![qdrant-langchain-rag](/blog/using-qdrant-and-langchain/flow-diagram.png)

When supported by LangChain, Qdrant can help you set up effective question-answering systems, detection systems and chatbots that leverage RAG to its full potential. When it comes to long-term memory storage, developers can use LangChain to easily add relevant documents, chat history memory & rich user data to LLM app prompts via Qdrant.

## Common Use Cases

Integrating Qdrant and LangChain can revolutionize your AI applications. Let's take a look at what this integration can do for you:

*Enhance Natural Language Processing (NLP):* LangChain is great for developing question-answering **chatbots**, where Qdrant is used to contextualize and retrieve results for the LLM. We cover this in [our article](/articles/langchain-integration/), and in OpenAI's [cookbook examples](https://cookbook.openai.com/examples/vector_databases/qdrant/qa_with_langchain_qdrant_and_openai) that use LangChain and GPT to process natural language.

*Improve Recommendation Systems:* Food delivery services thrive on indecisive customers. Businesses need to accommodate a multi-aim search process, where customers seek recommendations through semantic search. With LangChain you can build systems for **e-commerce, content sharing, or even dating apps**.

*Advance Data Analysis and Insights:* Sometimes you just want to browse results that are not necessarily the closest, but still relevant. Semantic search helps users discover products in **online stores**. Customers don't know exactly what they are looking for, but need a constrained space in which the search is performed.

*Offer Content Similarity Analysis:* Ever been stuck seeing the same recommendations on your **local news portal**? You may be trapped in a similarity bubble! As inputs get more complex, diversity becomes scarce, and it becomes harder to force the system to show something different. LangChain developers can use semantic search to develop further context.

## Building a Chatbot with LangChain

_Now that you know how Qdrant and LangChain work together - it's time to build something!_

Follow Daniel Romero's video and create a RAG chatbot completely from scratch. You will only use OpenAI, Qdrant and LangChain. Here is what this basic tutorial will teach you:

**1. How to set up a chatbot using Qdrant and LangChain:** You will use LangChain to create a RAG pipeline that retrieves information from a dataset and generates output. This will demonstrate the difference between using an LLM by itself and leveraging a vector database like Qdrant for memory retrieval.

**2. Preprocess and format data for use by the chatbot:** First, you will download a sample dataset based on some academic journals. Then, you will process this data into embeddings and store it as vectors inside of Qdrant.

**3. Implement vector similarity search algorithms:** Second, you will create and test a chatbot that only uses the LLM. Then, you will enable the memory component offered by Qdrant. This will allow your chatbot to be modified and updated, giving it long-term memory.

**4. Optimize the chatbot's performance:** In the last step, you will query the chatbot in two ways.
The first query will retrieve parametric data from the LLM, while the second one will get contextual data via Qdrant.

The goal of this exercise is to show that RAG is simple to implement via LangChain and yields much better results than using an LLM by itself.

<iframe width="560" height="315" src="https://www.youtube.com/embed/O60-KuZZeQA?si=jkDsyJ52qA4ivXUy" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Scaling Qdrant and LangChain

If you are looking to scale up and keep the same level of performance, Qdrant and LangChain are a rock-solid combination. Getting started with both is a breeze and the [documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) covers a broad range of cases. However, the main strength of Qdrant is that it can consistently support the user well past the prototyping and launch phases.

> *"We are all-in on performance and reliability. Every release we make Qdrant faster, more stable and cost-effective for the user. When others focus on prototyping, we are already ready for production. Very soon, our users will build successful products and go to market. At this point, I anticipate a great need for a reliable vector store. Qdrant will be there for LangChain and the entire community."*

Whether you are building a bank fraud-detection system, RAG for e-commerce, or services for the federal government - you will need to leverage a scalable architecture for your product. Qdrant offers different features to help you considerably increase your application’s performance and lower your hosting costs.

> Read more about how we foster [best practices for large-scale deployments](/articles/multitenancy/).

## Next Steps

Now that you know how Qdrant and LangChain can elevate your setup - it's time to try us out.

- Qdrant is open source and you can [quickstart locally](/documentation/quick-start/), [install it via Docker](/documentation/quick-start/), or [deploy it to Kubernetes](https://github.com/qdrant/qdrant-helm/).
- We also offer [a free tier of Qdrant Cloud](https://cloud.qdrant.io/) for prototyping and testing.
- For the best integration with LangChain, read the [official LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant/).
- For all other cases, the [Qdrant documentation](/documentation/integrations/langchain/) is the best place to get started.

> We offer additional support tailored to your business needs. [Contact us](https://qdrant.to/contact-us) to learn more about implementation strategies and integrations that suit your company.
qdrant-landing/content/blog/v0-8-0-update-of-the-qdrant-engine-was-released.md
---
draft: true
title: v0.8.0 update of the Qdrant engine was released
slug: qdrant-0-8-0-released
short_description: "The new version of our engine - v0.8.0, went live."
description: "The new version of our engine - v0.8.0, went live."
preview_image: /blog/from_cms/v0.8.0.jpg
date: 2022-06-09T10:03:29.376Z
author: Alyona Kavyerina
author_link: https://www.linkedin.com/in/alyona-kavyerina/
categories:
  - News
  - Release update
tags:
  - Corporate news
  - Release
sitemapExclude: True
---
The new version of our engine, v0.8.0, went live. Let's go through its new features:

* On-disk payload storage allows you to store more data with less RAM usage.
* Distributed deployment support is available, and we continue improving it, so stay tuned for new updates.
* Payloads can now be indexed in place, without rebuilding the segment.
* Advanced filtering support now includes filtering by similarity score.

It also ships with a faster payload index, better error reporting, HNSW speed improvements, and more. Check out the [changelog](https://github.com/qdrant/qdrant/releases/tag/v0.8.0) for details.
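If you want to try the two headline features of this release, the sketch below shows on-disk payload storage and payload indexing from Python. Note that it uses the current `qdrant-client` API rather than the exact 0.8.0-era interface, and the collection and field names are placeholders, so treat it as an approximation of the workflow.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PayloadSchemaType, VectorParams

client = QdrantClient(url="http://localhost:6333")

# On-disk payload storage: keep payloads on disk instead of holding them all in RAM.
client.create_collection(
    collection_name="articles",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    on_disk_payload=True,
)

# Payload indexing: index a field so filtered searches stay fast,
# applied to an existing collection without rebuilding its segments.
client.create_payload_index(
    collection_name="articles",
    field_name="category",
    field_schema=PayloadSchemaType.KEYWORD,
)
```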
qdrant-landing/content/blog/v0-9-0-update-of-the-qdrant-engine-went-live.md
---
draft: true
title: v0.9.0 update of the Qdrant engine went live
slug: qdrant-v090-release
short_description: We've released the new version of the Qdrant engine - v0.9.0.
description: We've released the new version of the Qdrant engine - v0.9.0. It features dynamic cluster scaling capabilities, making Qdrant more flexible with cluster deployment by allowing you to move shards between nodes and remove nodes from the cluster.
preview_image: /blog/qdrant-v.0.9.0-release-update.png
date: 2022-08-08T14:54:45.476Z
author: Alyona Kavyerina
author_link: https://www.linkedin.com/in/alyona-kavyerina/
featured: true
categories:
  - release-update
  - news
tags:
  - corporate news
  - release
sitemapExclude: true
---
We've released the new version of the Qdrant engine - v0.9.0. It features dynamic cluster scaling capabilities. Qdrant is now more flexible with cluster deployment, allowing you to move shards between nodes and remove nodes from the cluster.

v0.9.0 also brings various improvements, such as removing temporary snapshot files during full snapshot creation, disabling the default mmap threshold, and more.

You can read the detailed [release notes on GitHub](https://github.com/qdrant/qdrant/releases/tag/v0.9.0).

We keep improving Qdrant and working on frequently requested functionality for the next release. Stay tuned!
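As a rough illustration of the new cluster operations, the snippet below inspects a collection's cluster state and then moves a shard between two nodes over the HTTP API. The endpoint shape follows Qdrant's distributed-deployment documentation for later versions rather than the 0.9.0 release itself, and the collection name and peer IDs are placeholders, so check the documentation for your version before relying on it.

```python
import requests

QDRANT_URL = "http://localhost:6333"
COLLECTION = "articles"  # placeholder collection name

# Inspect the collection's cluster state to find shard and peer IDs.
info = requests.get(f"{QDRANT_URL}/collections/{COLLECTION}/cluster").json()
print(info)

# Move shard 0 from one node to another (peer IDs below are placeholders).
resp = requests.post(
    f"{QDRANT_URL}/collections/{COLLECTION}/cluster",
    json={
        "move_shard": {
            "shard_id": 0,
            "from_peer_id": 381894127,
            "to_peer_id": 467122995,
        }
    },
)
print(resp.json())
```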
qdrant-landing/content/blog/vector-image-search-rag-vector-space-talk-008.md
--- draft: false title: "Vector Search Complexities: Insights from Projects in Image Search and RAG - Noé Achache | Vector Space Talks" slug: vector-image-search-rag short_description: Noé Achache discusses their projects in image search and RAG and its complexities. description: Noé Achache shares insights on vector search complexities, discussing projects on image matching, document retrieval, and handling sensitive medical data with practical solutions and industry challenges. preview_image: /blog/from_cms/noé-achache-cropped.png date: 2024-01-09T13:51:26.168Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Image Search - Retrieval Augmented Generation --- > *"I really think it's something the technology is ready for and would really help this kind of embedding model jumping onto the text search projects.”*\ -- Noé Achache on the future of image embedding > Exploring the depths of vector search? Want an analysis of its application in image search and document retrieval? Noé got you covered. Noé Achache is a Lead Data Scientist at Sicara, where he worked on a wide range of projects mostly related to computer vision, prediction with structured data, and more recently LLMs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2YgcSFjP7mKE0YpDGmSiq5?si=6BhlAMveSty4Yt7umPeHjA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/1vKoiFAdorE).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/1vKoiFAdorE?si=wupcX2v8vHNnR_QB" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Navigating-the-Complexities-of-Vector-Search-Practical-Insights-from-Diverse-Projects-in-Image-Search-and-RAG---No-Achache--Vector-Space-Talk-008-e2diivl/a-aap4q5d" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Discover the efficacy of Dino V2 in image representation and the complexities of deploying vector databases, while navigating the challenges of fine-tuning and data safety in sensitive fields. In this episode, Noe, shares insights on vector search from image search to retrieval augmented generation, emphasizing practical application in complex projects. 5 key insights you’ll learn: 1. Cutting-edge Image Search: Learn about the advanced model Dino V2 and its efficacy in image representation, surpassing traditional feature transform methods. 2. Data Deduplication Strategies: Gain knowledge on the sophisticated process of deduplicating real estate listings, a vital task in managing extensive data collections. 3. Document Retrieval Techniques: Understand the challenges and solutions in retrieval augmented generation for document searches, including the use of multi-language embedding models. 4. Protection of Sensitive Medical Data: Delve into strategies for handling confidential medical information and the importance of data safety in health-related applications. 5. The Path Forward in Model Development: Hear Noe discuss the pressing need for new types of models to address the evolving needs within the industry. > Fun Fact: The best-performing model Noé mentions for image representation in his image search project is Dino V2, which interestingly didn't require fine-tuning to understand objects and patterns. 
> ## Show Notes: 00:00 Relevant experience in vector DB projects and talks.\ 05:57 Match image features, not resilient to changes.\ 07:06 Compute crop vectors, and train to converge.\ 11:37 Simple training task, improve with hard examples.\ 15:25 Improving text embeddings using hard examples.\ 22:29 Future of image embedding for document search.\ 27:28 Efficient storage and retrieval process feature.\ 29:01 Models handle varied data; sparse vectors now possible.\ 35:59 Use memory, avoid disk for CI integration.\ 37:43 Challenging metadata filtering for vector databases and new models ## More Quotes from Noé: *"So basically what was great is that Dino manages to understand all objects and close patterns without fine tuning. So you can get an off the shelf model and get started very quickly and start bringing value very quickly without having to go through all the fine tuning processes.”*\ -- Noé Achache *"And at the end, the embeddings was not learning any very complex features, so it was not really improving it.”*\ -- Noé Achache *"When using an API model, it's much faster to use it in asynchronous mode like the embedding equation went something like ten times or 100 times faster. So it was definitely, it changed a lot of things.”*\ -- Noé Achache ## Transcript: Demetrios: Noe. Great to have you here everyone. We are back for another vector space talks and today we are joined by my man Noe, who is the lead data scientist at Sicara, and if you do not know, he is working on a wide range of projects, mostly related to computer vision. Vision. And today we are talking about navigating the complexities of vector search. We're going to get some practical insights from diverse projects in image search and everyone's favorite topic these days, retrieval augmented generation, aka rags. So noe, I think you got something for us. You got something planned for us here? Noe Acache: Yeah, I do. I can share them. Demetrios: All right, well, I'm very happy to have you on here, man. I appreciate you doing this. And let's get you sharing your screen so we can start rocking, rolling. Noe Acache: Okay. Can you see my screen? Demetrios: Yeah. Awesome. Noe Acache: Great. Thank you, Demetrius, for the great introduction. I just completed quickly. So as you may have guessed, I'm french. I'm a lead data scientist at Sicara. So Secura is a service company helping its clients in data engineering and data science, so building projects for them. Before being there, I worked at realtics on optical character recognition, and I'm now working mostly on, as you said, computer vision and also Gen AI. So I'm leading the geni side and I've been there for more than three years. Noe Acache: So some relevant experience on vector DB is why I'm here today, because I did four projects, four vector soft projects, and I also wrote an article on how to choose your database in 2023, your vector database. And I did some related talks in other conferences like Pydata, DVC, all the geni meetups of London and Paris. So what are we going to talk about today? First, an overview of the vector search projects. Just to give you an idea of the kind of projects we can do with vector search. Then we will dive into the specificities of the image search project and then into the specificities of the text search project. So here are the four projects. So two in image search, two in text search. The first one is about matching objects in videos to sell them afterwards. Noe Acache: So basically you have a video. We first detect the object. 
So like it can be a lamp, it can be a piece of clothes, anything, we classify it and then we compare it to a large selection of similar objects to retrieve the most similar one to a large collection of sellable objects. The second one is about deduplicating real estate adverts. So when agencies want to sell a property, like sometimes you have several agencies coming to take pictures of the same good. So you have different pictures of the same good. And the idea of this project was to match the different pictures of the same good, the same profile. Demetrios: I've seen that dude. I have been a victim of that. When I did a little house shopping back like five years ago, it would be the same house in many different ones, and sometimes you wouldn't know because it was different photos. So I love that you were thinking about it that way. Sorry to interrupt. Noe Acache: Yeah, so to be fair, it was the idea of my client. So basically I talk about it a bit later with aggregating all the adverts and trying to deduplicate them. And then the last two projects are about drugs retrieval, augmented generation. So the idea to be able to ask questions to your documentation. The first one was for my company's documentation and the second one was for a medical company. So different kind of complexities. So now we know all about this project, let's dive into them. So regarding the image search project, to compute representations of the images, the best performing model from the benchmark, and also from my experience, is currently Dino V two. Noe Acache: So a model developed by meta that you may have seen, which is using visual transformer. And what's amazing about it is that using the attention map, you can actually segment what's important in the picture, although you haven't told it specifically what's important. And as a human, it will learn to focus on the dog, on this picture and do not take into consideration the noisy background. So when I say best performing model, I'm talking about comparing to other architecture like Resnet efficient nets models, an approach I haven't tried, which also seems interesting. If anyone tried it for similar project, please reach out afterwards. I'll be happy to talk about it. Is sift for feature transform something about feature transform. It's basically a more traditional method without learned features through machine learning, as in you don't train the model, but it's more traditional methods. Noe Acache: And you basically detect the different features in an image and then try to find the same features in an image which is supposed to post to be the same. All the blue line trying to match the different features. Of course it's made to match image with exactly the same content, so it wouldn't really work. Probably not work in the first use case, because we are trying to match similar clothes, but which are not exactly the same one. And also it's known to be not very resilient with the changes of angles when it changes too much, et cetera. So it may not be very good as well for the second use case, but again, I haven't tried it, so just leaving it here on the side. Just a quick word about how Dino works in case you're interested. So it's a vision transformer and it's trade in an unsupervised way, as in you don't have any labels provided, so you just take pictures and you first extract small crops and large crops and you augment them. Noe Acache: And then you're going to use the model to compute vectors, representations of each of these crops. 
And since they all represent the same image, they should all be the same. So then you can compute a loss to see how they diverge and to basically train them to become the same. So this is how it works and how it works. And the difference between the second version is just that they use more data sets and the distillation method to have a very performant model, which is also very fast to run regarding the first use case. So, matching objects in videos to sellable items for people who use Google lengths before, it's quite similar, where in Google lens you can take a picture of something and then it will try to find similar objects to buy. So again, you have a video and then you detect one of the objects in the video, put it and compare it to a vector database which contains a lot of objects which are similar for the representation. And then it will output the most similar lamp here. Noe Acache: Now we're going to try to analyze how this project went regarding the positive outcomes and the changes we faced. So basically what was great is that Dino manages to understand all objects and close patterns without fine tuning. So you can get an off the shelf model and get started very quickly and start bringing value very quickly without having to go through all the fine tuning processes. And it also manages to focus on the object without segmentation. What I mean here is that we're going to get a box of the object, and in this box there will be a very noisy background which may disturb the matching process. And since Dino really manages to focus on the object, that's important on the image. It doesn't really matter that we don't segmentate perfectly the image. Regarding the vector database, this project started a while ago, and I think we chose the vector database something like a year and a half ago. Noe Acache: And so it was before all the vector database hype. And at the time, the most famous one was Milvos, the only famous one actually. And we went for an on premise development deployment. And actually our main learning is that the DevOps team really struggled to deploy it, because basically it's made of a lot of pods. And the documentations about how these pods are supposed to interact together is not really perfect. And it was really buggy at this time. So the clients lost a lot of time and money in this deployment. The challenges, other challenges we faced is that we noticed that the matching wasn't very resilient to large distortions. Noe Acache: So for furnitures like lamps, it's fine. But let's say you have a trouser and a person walking. So the trouser won't exactly have the same shape. And since you haven't trained your model to specifically know, it shouldn't focus on the movements. It will encode this movement. And then in the matching, instead of matching trouser, which looks similar, it will just match trouser where in the product picture the person will be working as well, which is not really what we want. And the other challenges we faced is that we tried to fine tune the model, but our first fine tuning wasn't very good because we tried to take an open source model and, and get the labels it had, like on different furnitures, clothes, et cetera, to basically train a model to classify the different classes and then remove the classification layer to just keep the embedding parts. The thing is that the labels were not specific enough. Noe Acache: So the training task was quite simple. 
And at the end, the embeddings was not learning any very complex features, so it was not really improving it. So jumping onto the areas of improvement, knowing all of that, the first thing I would do if I had to do it again will be to use the managed milboss for a better fine tuning, it would be to labyd hard examples, hard pairs. So, for instance, you know that when you have a matching pair where the similarity score is not too high or not too low, you know, it's where the model kind of struggles and you will find some good matching and also some mistakes. So it's where it kind of is interesting to level to then be able to fine tune your model and make it learn more complex things according to your tasks. Another possibility for fine tuning will be some sort of multilabel classification. So for instance, if you consider tab close, you could say, all right, those disclose contain buttons. It have a color, it have stripes. Noe Acache: And for all of these categories, you'll get a score between zero and one. And concatenating all these scores together, you can get an embedding which you can put in a vector database for your vector search. It's kind of hard to scale because you need to do a specific model and labeling for each type of object. And I really wonder how Google lens does because their algorithm work very well. So are they working more like with this kind of functioning or this kind of functioning? So if anyone had any thought on that or any idea, again, I'd be happy to talk about it afterwards. And finally, I feel like we made a lot of advancements in multimodal training, trying to combine text inputs with image. We've made input to build some kind of complex embeddings. And how great would it be to have an image embeding you could guide with text. Noe Acache: So you could just like when creating an embedding of your image, just say, all right, here, I don't care about the movements, I only care about the features on the object, for instance. And then it will learn an embedding according to your task without any fine tuning. I really feel like with the current state of the arts we are able to do this. I mean, we need to do it, but the technology is ready. Demetrios: Can I ask a few questions before you jump into the second use case? Noe Acache: Yes. Demetrios: What other models were you looking at besides the dyno one? Noe Acache: I said here, compared to Resnet, efficient nets and these kind of architectures. Demetrios: Maybe this was too early, or maybe it's not actually valuable. Was that like segment anything? Did that come into the play? Noe Acache: So segment anything? I don't think they redo embeddings. It's really about segmentation. So here I was just showing the segmentation part because it's a cool outcome of the model and it shows that the model works well here we are really here to build a representation of the image we cannot really play with segment anything for the matching, to my knowledge, at least. Demetrios: And then on the next slide where you talked about things you would do differently, or the last slide, I guess the areas of improvement you mentioned label hard examples for fine tuning. And I feel like, yeah, there's one way of doing it, which is you hand picking the different embeddings that you think are going to be hard. And then there's another one where I think there's tools out there now that can kind of show you where there are different embeddings that aren't doing so well or that are more edge cases. Noe Acache: Which tools are you talking about? 
Demetrios: I don't remember the names, but I definitely have seen demos online about how it'll give you a 3D space and you can kind of explore the different embeddings and explore what's going on. Noe Acache: I know exactly what you're talking about. TensorBoard embeddings is a good tool for that. I could actually demo it afterwards. Demetrios: Yeah, I don't want to get you off track. That's something that came to mind. Noe Acache: If you're talking about the same tool, TensorBoard embeddings: basically you have an embedding of, like, 1000 dimensions and it reduces it to three dimensions. And so you can visualize it in a 3D space and you can see how close your embeddings are to each other. Demetrios: Yeah, exactly. Noe Acache: But it's really for visualization purposes, not really for training purposes. Demetrios: Yeah, okay, I see. Noe Acache: We're talking about the same thing. Demetrios: Yeah, I think that sounds like what I'm talking about. So good to know on both of these. And you're shooting me straight on it. Mike is asking a question in here, like text embedding, would that allow you to include an image with alternate text? Noe Acache: An image with alternate text? I'm not sure I get the question. Demetrios: So it sounds like a way to meet regulatory accessibility requirements. I think it was probably around where you were talking about the multimodal part and text to guide the embeddings, and potentially, would having that allow you to include an image with alternate text? Noe Acache: I feel like the question is about inserting text within the image, that's what I understand. My idea was just that if you could create an embedding that combines a text input and the image input, it would be trained in such a way that the text is basically used as a guide for the image, to only encode the parts of the image which are required for your task and not be disturbed by the noisy parts. Demetrios: Okay. Yeah. All right, Mike, let us know if that answers the question or if you have more. Yes. He's saying, yeah, inserting text with image for people who can't see. Noe Acache: Okay, cool. Demetrios: Yeah, right on. So I'll let you keep cruising and I'll try not to derail it again. But that was great. It was just so pertinent, I wanted to stop you and ask some questions. Noe Acache: All right, let's just move on. So the second use case is about deduplicating real estate adverts. So as I was saying, you have two agencies coming to take different pictures of the same property. And the thing is that they may not put exactly the same price or the same surface or the same location. So you cannot just match them with metadata. What our client was doing beforehand was, he kind of built a huge if-machine, which is like, all right, if the location is not too far and if the surface is not too far, and the price, and it was just very complex rules. And at the end there were a lot of edge cases. Noe Acache: It was very hard to maintain. So it was like, let's just do a simpler solution just based on images. So the task was basically to match images of the same properties. Again, on the positive outcomes, DINO really managed to understand the patterns of the properties without any fine-tuning. And it was resilient to different angles of the same room. So like on the pictures I just showed, the model was quite good at identifying that it was the same property. Here we used Qdrant; this project was a bit more recent.
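For reference, here is a minimal sketch of the TensorBoard embedding projector mentioned above, assuming PyTorch's SummaryWriter; the tensors and labels are random placeholders rather than real image embeddings.

```python
# Minimal sketch: visualize high-dimensional embeddings in TensorBoard's 3D projector.
# Run `tensorboard --logdir runs` afterwards and open the "Projector" tab.
import torch
from torch.utils.tensorboard import SummaryWriter

embeddings = torch.randn(500, 1000)            # placeholder: 500 vectors of 1000 dims
labels = [f"item_{i}" for i in range(500)]     # placeholder metadata, one label per vector

writer = SummaryWriter(log_dir="runs/embedding_demo")
writer.add_embedding(embeddings, metadata=labels, tag="image_embeddings")
writer.close()
```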
Noe Acache: We leveraged the metadata filtering a lot, because of course we can still use the metadata even if it's not perfect, just to say, all right, only search vectors where the price is within more or less 10% of this price, the surface is within more or less 10% of this surface, et cetera. And indexing of this metadata, otherwise the search is really slowed down. We had 15 million vectors, and without this indexing the search could take up to 20 or 30 seconds, and with indexing it was like a split second. So it was a killer feature for us. And we used quantization as well to save costs, because the task was not too hard. Noe Acache: Since, using the metadata, we managed every time to reduce the task down to a search over about a thousand vectors, it wasn't a problem to quantize the vectors. And at the end, for 15 million vectors, it was only $275 per month with the managed version, which is very decent. The challenges we faced were really about bathrooms and empty rooms, because all bathrooms kind of look similar, they have very similar features, and the same for empty rooms, since there is kind of nothing in them, just windows. The model would often put high similarity scores between two bathrooms of different properties, and the same for the empty rooms. So again, the method to overcome this would be to label hard pairs. So examples where the model thinks two images are similar, to actually tell the model, no, they are not similar, to allow it to improve its performance. Noe Acache: And again, the same thing on the future of image embeddings. I really think it's something the technology is ready for, and it would really help this kind of embedding model. Jumping onto the text search projects. So the principle of retrieval augmented generation, for those of you who are not familiar with it, is just: you take some documents, you have an embedding model, here an embedding model trained on text and not on images, which will output representations of these documents; you put them in a vector database, and then when a user asks a question over the documentation, it will create an embedding of the request and retrieve the most similar documents. And afterwards we usually pass them to an LLM, which will generate an answer. But here in this talk, we won't focus on the overall product, but really on the vector search part. So the two projects were: one, as I told you, a RAG over our Notion documentation, with around a few hundred thousand pages, and the second one was for a medical company, so for doctors. So it was really about the documentation search rather than the LLM, because you cannot output any mistake. The model we used was OpenAI's Ada 2. Noe Acache: Why? Mostly because for the first use case it's multilingual and it was off the shelf, very easy to use, so we did not spend a lot of time on this project. Using an API model made it just much faster. Also it was multilingual, approved by the community, et cetera. For the second use case, we're still working on it. Since we use GPT-4 afterwards, because it's currently the best LLM, it was also easier to use Ada 2 to start with, but we may use a better one afterwards, because, as I'm saying, it's not the best one if you refer to the MTEB, the Massive Text Embedding Benchmark made by Hugging Face, which basically gathers a lot of embedding benchmarks, such as retrieval for instance, and ranks the different models on these benchmarks. The MTEB is not perfect because it's not taking into account cross-language capabilities.
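A minimal sketch of the setup described above, combining payload indexing, metadata filtering and scalar quantization in Qdrant; the collection name, field names, thresholds and vector size are hypothetical.

```python
# Minimal sketch: payload-indexed metadata filtering plus scalar quantization,
# in the spirit of the real-estate deduplication setup described in the talk.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="property_images",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(type=models.ScalarType.INT8)
    ),
)

# Index the metadata fields BEFORE inserting vectors, as mentioned in the talk,
# so filtered searches stay fast and retrieve the right points.
client.create_payload_index(
    "property_images", field_name="price", field_schema=models.PayloadSchemaType.FLOAT
)
client.create_payload_index(
    "property_images", field_name="surface", field_schema=models.PayloadSchemaType.FLOAT
)

price, surface = 250_000, 80.0
hits = client.search(
    collection_name="property_images",
    query_vector=[0.0] * 384,  # placeholder for a real image embedding
    query_filter=models.Filter(must=[
        models.FieldCondition(key="price", range=models.Range(gte=price * 0.9, lte=price * 1.1)),
        models.FieldCondition(key="surface", range=models.Range(gte=surface * 0.9, lte=surface * 1.1)),
    ]),
    limit=10,
)
```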
All the benchmarks are just for one language, and it's also not taking into account most languages; it's only considering English, Polish and Chinese. Noe Acache: And also it's probably biased toward models trained on closed source datasets. Most of the best performing models are currently closed source APIs, and hence closed source datasets, so we don't know how they've been trained. They probably trained themselves on these benchmark datasets. At least, if I were them, it's what I would do. So I assume they did it to gain some points on these benchmarks. Demetrios: So both of these RAGs are mainly with documents that are in French? Noe Acache: Yes. So this one is French and English, and this one is French only. Demetrios: Okay. Yeah, that's why the multilingual is super important for these use cases. Noe Acache: Exactly. Again, for this one there are models for French working much better than Ada 2, so we may change it afterwards, but right now the performance we have is decent. Since both projects are very similar, I'll jump into the conclusion for both of them together. So Ada 2 is good for understanding diverse contexts, a wide range of documentation, medical content, technical content, et cetera, without any fine-tuning. The cross-language search works quite well, so we can ask questions in English and retrieve documents in French and the other way around. And also, a quick note, because I did not do it from the start: when using an API model, it's much faster to use it in asynchronous mode. The embedding creation went something like ten or a hundred times faster, so it definitely changed a lot of things. Again, here we used Qdrant, mostly to leverage the free tier, so they have a free version. Noe Acache: So you can spin it up in a second, get the free version, and using the feature which allows you to put the vectors on disk instead of storing them in RAM, which makes it a bit slower, you can easily support a few hundred thousand vectors with a very decent response time. The challenge we faced, mostly for the Notion use case, is that in Notion we have a lot of pages which are just a title, because they are empty, et cetera. And when pages have just a title, the content is so small that it will actually be very similar to a question. So often the documents retrieved were documents with very little content, which was a bit frustrating. Chunking appropriately was also tough. Basically, if you want your retrieval process to work well, you have to divide your documents the right way to create the embeddings. You can use some rules, but basically you need to divide your documents into content which semantically makes sense, and it's not always trivial. And also for the RAG for the medical company, sometimes we are asking questions about a specific drug and the search is just not retrieving the right documents, which is very frustrating because a basic keyword search would. Noe Acache: So to handle these challenges, a good option would be to use models handling questions and documents differently, like BGE or Cohere. Basically they use the same model, but trained differently on long documents and on questions, which allows them to map them differently in the space. And my guess is that, using such models, documents which are only a title, et cetera, will not be as close to the question as they are right now, because they will be encoded differently. So I hope it will help with this problem. Again, it's just a guess, maybe I'm wrong.
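To illustrate the asynchronous embedding point above, here is a minimal sketch using the OpenAI Python client's async interface; the batch size and the "page one/two text" inputs are placeholders.

```python
# Minimal sketch: embed document chunks concurrently instead of one request at a time.
# Assumes the openai>=1.x Python client and an OPENAI_API_KEY in the environment.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def embed_batch(batch: list[str]) -> list[list[float]]:
    resp = await client.embeddings.create(model="text-embedding-ada-002", input=batch)
    return [item.embedding for item in resp.data]

async def embed_all(chunks: list[str], batch_size: int = 64) -> list[list[float]]:
    batches = [chunks[i:i + batch_size] for i in range(0, len(chunks), batch_size)]
    results = await asyncio.gather(*(embed_batch(b) for b in batches))
    return [vec for batch in results for vec in batch]

if __name__ == "__main__":
    vectors = asyncio.run(embed_all(["page one text", "page two text"]))
    print(len(vectors), len(vectors[0]))
```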
Hybrid search. So for the keyword problem I was mentioning: in a recent release, Qdrant enabled sparse vectors, which actually make TF-IDF vectors possible. TF-IDF vectors are vectors based on keywords; basically there is one number per possible word in the dataset, and a lot of zeros, so storing them as a normal dense vector would make the vector search very expensive. But as a sparse vector it's much better. Noe Acache: And so you can build a hybrid search combining the TF-IDF search for keywords and the dense search for semantics, to get the best of both worlds and overcome this issue. And finally, I'm actually quite surprised that with all the work that is going on in generative AI and RAG, nobody has started working on a model to help with chunking. It's one of the biggest challenges, and I feel like it's quite doable to have a model, or some kind of algorithm, which will understand the structure of your documentation and where it semantically makes sense to chunk your documents. Demetrios: Dude, so good. I've got questions coming up. Don't go anywhere. Actually, it's not just me. Tom's also got some questions, so I'm going to just blame it on Tom, throw him under the bus. RAG with a medical company seems like a dangerous use case. You can work to eliminate hallucinations and other security and safety concerns, but you can't make sure that they're completely eliminated, right? So how did you go about handling these concerns? Noe Acache: This is a very good question. This is why I mentioned this project is mostly about the document search. Basically what we do is we use Chainlit, which is a very good tool for chatting, and then you can put a React front end in front of it to make it very custom. And so when the user asks a question, we provide the LLM answer more as a second thought, like something the doctor could consider as a second opinion. But what's most important is that, instead of just citing the sources, we directly put the HTML of the pages the answer is based on, and what brings the most value is really these HTML pages. So we know the answer may have some problems. Since it's based on the documents, hallucinations are almost eliminated; we don't notice any hallucinations, but of course they can happen. Noe Acache: So it's really a product problem rather than an algorithmic problem. The value is in the documents retrieved rather than the LLM answer. Demetrios: Yeah, makes sense. My question around it is that a lot of times in the medical space, the data that is being thrown around is super sensitive, right? And you have a lot of PII. How do you navigate that? Are you just not touching that? Noe Acache: So basically we work with a provider in France which has public documentation. So it's public documentation, there is no PII. Demetrios: Okay, cool. So it's not like some of it... Noe Acache: Some of it is private, but still there is no PII in the documents. Demetrios: Yeah, because I think that's another incredibly hard problem: like, oh yeah, we're just sending all this sensitive information over to the Ada model to create embeddings with it. And then we also pass it through ChatGPT before we get it back. And next thing you know, that is the data that was used to train GPT-5, and you can say things like, create an unlimited poem, and get that data out of it. So it's super sketchy, right?
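Before the Q&A continues below, here is a minimal sketch of the sparse-plus-dense setup Noe describes, assuming Qdrant's named dense and sparse vectors; the TF-IDF indices and values are placeholders you would compute from your own vocabulary, and the fusion of the two result lists is left to the caller.

```python
# Minimal sketch: one collection with a dense vector for semantics and a sparse
# vector for TF-IDF keywords; the application merges the two result lists.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="docs_hybrid",
    vectors_config={"dense": models.VectorParams(size=1536, distance=models.Distance.COSINE)},
    sparse_vectors_config={"tfidf": models.SparseVectorParams()},
)

client.upsert(
    collection_name="docs_hybrid",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "dense": [0.0] * 1536,  # placeholder for the text embedding
                "tfidf": models.SparseVector(indices=[102, 4051], values=[0.7, 0.3]),
            },
            payload={"title": "Drug X dosage"},
        )
    ],
)

# Keyword-style query on the sparse vector; a dense query can be run alongside
# and the two rankings combined (e.g. with reciprocal rank fusion).
keyword_hits = client.search(
    collection_name="docs_hybrid",
    query_vector=models.NamedSparseVector(
        name="tfidf", vector=models.SparseVector(indices=[102], values=[1.0])
    ),
    limit=5,
)
```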
Noe Acache: Yeah, of course. One way to overcome that, for instance for the Notion project, which is our private documentation: we use Ada on Azure, which guarantees data safety. So it's quite a good workaround. And when you have to work with different levels of security, if you deal with PII, a good way is to play with metadata. Depending on the security level of the person asking the question, you play with the metadata to output only certain kinds of documents, using the database metadata. Demetrios: Excellent. Well, don't let me stop you. I know you had some concluding thoughts there. Noe Acache: No, sorry, I was about to conclude anyway. So just to wrap it up: we got some good results with off-the-shelf models without any fine-tuning, and we tried to overcome the limitations we still faced. For image search, fine-tuning is required at the moment; there isn't really any other way to overcome it. While for text search, fine-tuning is not really necessary; it's more tricks which are required, like using hybrid search, using better models, et cetera. So two kinds of approaches. Qdrant really made a lot of things easy. For instance, I love the feature where you can use the database as a disk file. Noe Acache: You can even use it in memory for CI integration and stuff. For all my experimentations, et cetera, I just use it as a disk file because it's much easier to play with. I just like this feature. And then it allows you to use the same tool for your experiments and in production. When I was playing with Milvus, I had to use different tools for experimentation and for the database in production, which was making the technical stack a bit more complex. Sparse vectors for TF-IDF, as I was mentioning, which allow searching based on keywords to make your retrieval much better. Managed deployment: again, the DevOps team really struggled with the deployment of Milvus, and I feel like in most cases, except if you have some security requirements, it will be much cheaper to use the managed deployment rather than paying dev costs. Noe Acache: And also with the free cloud tier and on-disk vectors, you can really do a lot, or at least start a lot of projects. And finally, the metadata filtering and indexing. By the way, we fell into a small trap with indexing: it's recommended to index your metadata before adding your vectors, otherwise your performance may be impacted and you may not retrieve the vectors that you need. So it's an interesting thing to take into consideration. Noe Acache: I know that metadata filtering is something quite hard to do for a vector database, so I don't really know how it works internally, but I assume there is a good reason for that. And finally, as I was mentioning before, in my view, new types of models are needed to answer industrial needs. So the models we were talking about: text guidance to make better image embeddings, and automatic chunking, like some kind of algorithm or model which will automatically chunk your documents appropriately. So thank you very much. If you still have questions, I'm happy to answer them. Here are my social media, if you want to reach out to me afterwards, and all my writing and talks are gathered here if you're interested. Demetrios: Oh, I like how you did that. There is one question from Tom again, asking whether you did anything to handle images and tables within the documentation when you were doing those RAGs.
Noe Acache: No, I did not do anything special for the images, and for the tables it depends. When they are well structured, I kept them, because the model manages to understand them. But for instance, we did a small PoC for the medical company where we tried to integrate an external data source, which was a PDF, and we wanted to use it as HTML to be able to display the HTML, as I explained, directly in the answer. So we converted the PDF to HTML, and in this conversion the tables were absolutely unreadable, even after cleaning. So we did not include them in this case. Demetrios: Great. Well, dude, thank you so much for coming on here. And thank you all for joining us for yet another Vector Space Talk. If you would like to come on to the Vector Space Talks and share what you've been up to and drop some knowledge bombs on the rest of us, we'd love to have you. So please reach out to me. And I think that is it for today. Noe, this was awesome, man. I really appreciate you doing this. Noe Acache: Thank you, Demetrios. Have a nice day. Demetrios: We'll see you all later. Bye.
qdrant-landing/content/blog/vector-search-and-applications-by-andrey-vasnetsov-cto-at-qdrant.md
--- draft: false title: '"Vector search and applications" by Andrey Vasnetsov, CTO at Qdrant' preview_image: /blog/from_cms/ramsri-podcast-preview.png sitemapExclude: true slug: vector-search-and-applications-record short_description: Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  description: Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  date: 2023-12-11T12:16:42.004Z author: Alyona Kavyerina featured: false tags: - vector search - webinar - news categories: - vector search - webinar - news --- <!--StartFragment--> Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  <iframe width="560" height="315" src="https://www.youtube.com/embed/MVUkbMYPYTE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> He covered the following topics: * Qdrant search engine and Quaterion similarity learning framework; * Similarity learning to multimodal settings; * Elastic search embeddings vs vector search engines; * Support for multiple embeddings; * Fundraising and VC discussions; * Vision for vector search evolution; * Finetuning for out of domain. <!--EndFragment-->
qdrant-landing/content/blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talk-012.md
--- draft: false title: The challenges in using LLM-as-a-Judge - Sourabh Agrawal | Vector Space Talks slug: llm-as-a-judge short_description: Sourabh Agrawal explores the world of AI chatbots. description: Everything you need to know about chatbots, Sourabh Agrawal goes in to detail on evaluating their performance, from real-time to post-feedback assessments, and introduces uptrendAI—an open-source tool for enhancing chatbot interactions through customized and logical evaluations. preview_image: /blog/from_cms/sourabh-agrawal-bp-cropped.png date: 2024-03-19T15:05:02.986Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - LLM - retrieval augmented generation --- > "*You don't want to use an expensive model like GPT 4 for evaluation, because then the cost adds up and it does not work out. If you are spending more on evaluating the responses, you might as well just do something else, like have a human to generate the responses.*”\ -- Sourabh Agrawal > Sourabh Agrawal, CEO & Co-Founder at UpTrain AI is a seasoned entrepreneur and AI/ML expert with a diverse background. He began his career at Goldman Sachs, where he developed machine learning models for financial markets. Later, he contributed to the autonomous driving team at Bosch/Mercedes, focusing on computer vision modules for scene understanding. In 2020, Sourabh ventured into entrepreneurship, founding an AI-powered fitness startup that gained over 150,000 users. Throughout his career, he encountered challenges in evaluating AI models, particularly Generative AI models. To address this issue, Sourabh is developing UpTrain, an open-source LLMOps tool designed to evaluate, test, and monitor LLM applications. UpTrain provides scores and offers insights to enhance LLM applications by performing root-cause analysis, identifying common patterns among failures, and providing automated suggestions for resolution. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1o7xdbdx32TiKe7OSjpZts?si=yCHU-FxcQCaJLpbotLk7AQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/vBJF2sy1Pyw).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/vBJF2sy1Pyw?si=H-HwmPHtFSfiQXjn" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/The-challenges-with-using-LLM-as-a-Judge---Sourabh-Agrawal--Vector-Space-Talks-013-e2fj7g8/a-aaurgd0" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? These might be just some of the many questions people in the industry are asking themselves. Fear, not! Sourabh will break it down for you. Check out the full conversation as they dive into the intricate world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics. Here are the key topics of this episode: 1. 
**Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction. 2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions. 3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration. 4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments. 5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios. > Fun Fact: Sourabh discussed the use of Uptrend, an innovative API that provides scores and explanations for various data checks, facilitating logical and informed decision-making when evaluating AI models. > ## Show notes: 00:00 Prototype evaluation subjective; scalability challenges emerge.\ 05:52 Use cheaper, smaller models for effective evaluation.\ 07:45 Use LLM objectively, avoid subjective biases.\ 10:31 Evaluate conversation quality and customization for AI.\ 15:43 Context matters for AI model performance.\ 19:35 Chat bot creates problems for car company.\ 20:45 Real-time user query evaluations, guardrails, and jailbreak.\ 27:27 Check relevance, monitor data, filter model failures.\ 28:09 Identify common themes, insights, experiment with settings.\ 32:27 Customize jailbreak check for specific app purposes.\ 37:42 Mitigate hallucination using evaluation data techniques.\ 38:59 Discussion on productizing hallucination mitigation techniques.\ 42:22 Experimentation is key for system improvement. ## More Quotes from Sourabh: *"There are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose.*”\ -- Sourabh Agrawal *"You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent.”*\ -- Sourabh Agrawal *"Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes.”*\ -- Sourabh Agrawal ## Transcript: Demetrios: Sourabh, I've got you here from Uptrain. I think you have some notes that you wanted to present, but I also want to ask you a few questions because we are going to be diving into a topic that is near and dear to my heart and I think it's been coming up so much recently that is using LLMs as a judge. It is really hot these days. Some have even gone as far to say that it is the topic of 2024. I would love for you to dive in. 
Let's just get right to it, man. What are some of the key topics when you're talking about using LLMs to evaluate what key metrics are you using? How does this work? Can you break it down? Sourabh Agrawal: Yeah. First of all, thanks a lot for inviting me and no worries for hiccup. I guess I have never seen a demo or a talk which goes without any technical hiccups. It is bound to happen. Really excited to be here. Really excited to talk about LLM evaluations. And as you rightly pointed right, it's really a hot topic and rightly so. Right. Sourabh Agrawal: The way things have been panning out with LLMs and chat, GPT and GPT four and so on, is that people started building all these prototypes, right? And the way to evaluate them was just like eyeball them, just trust your gut feeling, go with the vibe. I guess they truly adopted the startup methodology, push things out to production and break things. But what people have been realizing is that it's not scalable, right? I mean, rightly so. It's highly subjective. It's a developer, it's a human who is looking at all the responses, someday he might like this, someday he might like something else. And it's not possible for them to kind of go over, just read through more than ten responses. And now the unique thing about production use cases is that they need continuous refinement. You need to keep on improving them, you need to keep on improving your prompt or your retrieval, your embedding model, your retrieval mechanisms and so on. Sourabh Agrawal: So that presents a case like you have to use a more scalable technique, you have to use LLMs as a judge because that's scalable. You can have an API call, and if that API call gives good quality results, it's a way you can mimic whatever your human is doing or in a way augment them which can truly act as their copilot. Demetrios: Yeah. So one question that's been coming through my head when I think about using LLMs as a judge and I get more into it, has been around when do we use those API calls. It's not in the moment that we're looking for this output. Is it like just to see if this output is real? And then before we show it to the user, it's kind of in bunches after we've gotten a bit of feedback from the user. So that means that certain use cases are automatically discarded from this, right? Like if we are thinking, all right, we're going to use LLMs as a judge to make sure that we're mitigating hallucinations or that we are evaluating better, it is not necessarily something that we can do in the moment, if I'm understanding it correctly. So can you break that down a little bit more? How does it actually look in practice? Sourabh Agrawal: Yeah, definitely. And that's a great point. The way I see it, there are three cases. Case one is what you mentioned in the moment before showing the response to the user. You want to check whether the response is good or not. In most of the scenarios you can't do that because obviously checking requires extra time and you don't want to add latency. But there are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose. 
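As a rough illustration of the parallel query-side check described above, here is a minimal sketch; the prompts, thresholds and model choices are illustrative assumptions, not UpTrain's actual implementation.

```python
# Minimal sketch: run a jailbreak/intent check on the user query in parallel with
# response generation, so the guardrail adds no extra latency.
# Assumes the openai>=1.x Python client; prompts and models are illustrative only.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def generate_answer(query: str) -> str:
    resp = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

async def check_jailbreak(query: str) -> bool:
    # A cheap model scores only the query, independently of the answer.
    resp = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Answer YES or NO: is this query trying to misuse or "
                       f"jailbreak a customer-support assistant?\n\n{query}",
        }],
    )
    return resp.choices[0].message.content.upper().startswith("YES")

async def answer_with_guardrail(query: str) -> str:
    answer, is_jailbreak = await asyncio.gather(
        generate_answer(query), check_jailbreak(query)
    )
    return "Sorry, I can't help with that." if is_jailbreak else answer

if __name__ == "__main__":
    print(asyncio.run(answer_with_guardrail("What are your support hours?")))
```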
Sourabh Agrawal: But most of the other evaluations like relevance, hallucinations, quality and so on, it has to be done. Post whatever you show to the users and then there you can do it in two ways. You can either experiment with use them to experiment with things, or you can run monitoring on your production and find out failure cases. And typically we are seeing like developers are adopting a combination of these two to find cases and then experiment and then improve their systems. Demetrios: Okay, so when you're doing it in parallel, that feels like something that is just asking you craft a prompt and as soon as. So you're basically sending out two prompts. Another piece that I have been thinking about is, doesn't this just add a bunch more cost to your system? Because there you're effectively doubling your cost. But then later on I can imagine you can craft a few different ways of making the evaluations and sending out the responses to the LLM better, I guess. And you can figure out how to trim some tokens off, or you can try and concatenate some of the responses and do tricks there. I'm sure there's all kinds of tricks that you know about that I don't, and I'd love to tell you to tell me about them, but definitely what kind of cost are we looking at? How much of an increase can we expect? Sourabh Agrawal: Yeah, so I think that's like a very valid limitation of evaluation. So that's why, let's say at uptrend, what we truly believe in is that you don't want to use an expensive model like GPT four for evaluation, because then the cost adds up and it does not work out. Right. If you are spending more on evaluating the responses, you may as well just do something else, like have a human to generate the responses. We rely on smaller models, on cheaper models for this. And secondly, the methodology which we adopt is that you don't want to evaluate everything on all the data points. Like maybe you have a higher level check, let's say, for jailbreak or let's say for the final response quality. And when you find cases where the quality is low, you run a battery of checks on these failures to figure out which part of the pipeline is exactly failing. Sourabh Agrawal: This is something what we call as like root cause analysis, where you take all these failure cases, which may be like 10% or 20% of the cases out of all what you are seeing in production. Take these 20% cases, run like a battery of checks on them. They might be exhaustive. You might run like five to ten checks on them. And then based on those checks, you can figure out that, what is the error mode? Is it a retrieval problem? Is it a citation problem? Is it a utilization problem? Is it hallucination? Is the query like the question asked by the user? Is it not clear enough? Is it like your embedding model is not appropriate? So that's how you can kind of take best of the two. Like, you can also improve the performance at the same time, make sure that you don't burn a hole in your pocket. Demetrios: I've also heard this before, and it's almost like you're using the LLMs as tests and they're helping you write. It's not that they're helping you write tests, it's that they are there and they're part of the tests that you're writing. Sourabh Agrawal: Yeah, I think the key here is that you have to use them objectively. What I have seen is a lot of people who are trying to do LLM evaluations, what they do is they ask the LLM that, okay, this is my response. Can you tell is it relevant or not? 
Or even, let's say, they go a step beyond and do like a grading thing, that is it highly relevant, somewhat relevant, highly irrelevant. But then it becomes very subjective, right? It depends upon the LLM to decide whether it's relevant or not. Rather than that you have to transform into an objective setting. You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent. Sourabh Agrawal: And I believe that's the key for making LLM evaluations work, because similar to LLM applications, even LLM evaluations, you have to put in a lot of efforts to make them really work and finally get some scores which align well with human expectations. Demetrios: It's funny how these LLMs mimic humans so much. They love the sound of their own voice, even. It's hilarious. Yeah, dude. Well, talk to me a bit more about how this looks in practice, because there's a lot of different techniques that you can do. Also, I do realize that when it comes to the use cases, it's very different, right. So if it's code generation use case, and you're evaluating that, it's going to be pretty clear, did the code run or did it not? And then you can go into some details on is this code actually more valuable? Is it a hacked way to do it? Et cetera, et cetera. But there's use cases that I would consider more sensitive and less sensitive. Demetrios: And so how do you look at that type of thing? Sourabh Agrawal: Yeah, I think so. The way even we think about evaluations is there's no one size fit all solution for different use cases. You need to look at different things. And even if you, let's say, looking at hallucinations, different use cases, or different businesses would look at evaluations from different lenses. Right. For someone, whatever, if they are focusing a lot on certain aspects of the correctness, someone else would focus less on those aspects and more on other aspects. The way we think about it is, know, we define different criteria for different use cases. So if you have A-Q-A bot, right? So you look at the quality of the response, the quality of the context. Sourabh Agrawal: If you have a conversational agent, then you look at the quality of the conversation as a whole. You look at whether the user is satisfied with that conversation. If you are writing long form content. Like, you look at coherence across the content, you look at the creativity or the sort of the interestingness of the content. If you have an AI agent, you look at how well they are able to plan, how well they were able to execute a particular task, and so on. How many steps do they take to achieve their objective? So there are a variety of these evaluation matrices, which are each one of which is more suitable for different use cases. And even there, I believe a good tool needs to provide certain customization abilities to their developers so that they can transform it, they can modify it in a way that it makes most sense for their business. Demetrios: Yeah. Is there certain ones that you feel like are more prevalent and that if I'm just thinking about this, I'm developing on the side and I'm thinking about this right now and I'm like, well, how could I start? What would you recommend? Sourabh Agrawal: Yeah, definitely. 
One of the biggest use case for LLMs today is rag. Applications for Rag. I think retrieval is the key. So I think the best starting points in terms of evaluations is like look at the response quality, so look at the relevance of the response, look at the completeness of the response, look at the context quality. So like context relevance, which judges the retrieval quality. Hallucinations, which judges whether the response is grounded by the context or not. If tone matters for your use case, look at the tonality and finally look at the conversation satisfaction, because at the end, whatever outputs you give, you also need to judge whether the end user is satisfied with these outputs. Sourabh Agrawal: So I would say these four or five matrices are the best way for any developer to start who is building on top of these LLMs. And from there you can understand how the behavior is going, and then you can go more deeper, look at more nuanced metrics, which can help you understand your systems even better. Demetrios: Yeah, I like that. Now, one thing that has also been coming up in my head a lot are like the custom metrics and custom evaluation and also proprietary data set, like evaluation data sets, because as we all know, the benchmarks get gamed. And you see on Twitter, oh wow, this new model just came out. It's so good. And then you try it and you're like, what are you talking about? This thing just was trained on the benchmarks. And so it seems like it's good, but it's not. And can you talk to us about creating these evaluation data sets? What have you seen as far as the best ways of going about it? What kind of size? Like how many do we need to actually make it valuable. And what is that? Give us a breakdown there? Sourabh Agrawal: Yeah, definitely. So, I mean, surprisingly, the answer is that you don't need that many to get started. We have seen cases where even if someone builds a test data sets of like 50 to 100 samples, that's actually like a very good starting point than where they were in terms of manual annotation and in terms of creation of this data set, I believe that the best data set is what actually your users are asking. You can look at public benchmarks, you can generate some synthetic data, but none of them matches the quality of what actually your end users are looking, because those are going to give you issues which you can never anticipate. Right. Even you're generating and synthetic data, you have to anticipate what issues can come up and generate data. Beyond that, if you're looking at public data sets, they're highly curated. There is always problems of them leaking into the training data and so on. Sourabh Agrawal: So those benchmarks becomes highly reliable. So look at your traffic, take 50 samples from them. If you are collecting user feedback. So the cases where the user has downvoted or the user has not accepted the response, I mean, they are very good cases to look at. Or if you're running some evaluations, quality checks on that cases which are failing, I think they are the best starting point for you to have a good quality test data sets and use that as a way to experiment with your prompts, experiment with your systems, experiment with your retrievals, and iteratively improve them. Demetrios: Are you weighing any metrics more than others? Because I've heard stories about how sometimes you'll see that a new model will come out, or you're testing out a new model, and it seems like on certain metrics, it's gone down. 
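To make the earlier point about breaking a response into individual facts and scoring a ratio more concrete, here is a minimal generic sketch; the prompts and model are illustrative, and this is not UpTrain's actual scoring code.

```python
# Minimal sketch: objective LLM-based scoring by splitting a response into facts,
# checking each fact against the retrieved context, and taking a support ratio.
# Assumes the openai>=1.x Python client; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def factual_support_score(response: str, context: str) -> float:
    facts = [f for f in ask(
        f"List each standalone factual claim in the text below, one per line:\n{response}"
    ).splitlines() if f.strip()]
    if not facts:
        return 1.0
    supported = sum(
        ask(f"Context:\n{context}\n\nClaim: {fact}\n"
            "Is the claim supported by the context? Answer YES or NO.").upper().startswith("YES")
        for fact in facts
    )
    return supported / len(facts)

score = factual_support_score(
    response="Humans landed on the moon in 1969.",
    context="The Apollo 11 mission landed humans on the moon in 1969.",
)
print(score)  # closer to 1.0 means better grounded in the context
```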
But then the golden metric that you have, it actually has gone up. And so have you seen which metrics are better for different use cases? Sourabh Agrawal: I think for here, there's no single answer. I think that metric depends upon the business. Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes. Especially like if you're using any of the bigger models, like any of the GPT or claudes, or to some extent even mistral, is highly performant. So if you're using any of these highly performant models, then if you give them the right context, the response more or less, it comes out to be good. So I think one thing which we are seeing people focusing a lot on, experimenting with different retrieval mechanisms, embedding models, and so on. But then again, the final golden key, I think many people we have seen, they annotate some data set so they have like a ground root response or a golden response, and they completely rely on just like how well their answer matches with that golden response, which I believe it's a very good starting point because now you know that, okay, if this is right and you're matching very highly with that, then obviously your response is also right. Demetrios: And what about those use cases where golden responses are very subjective? Sourabh Agrawal: Yeah, I think that's where the issues like. So I think in those scenarios, what we have seen is that one thing which people have been doing a lot is they try to see whether all information in the golden response is contained in the generated response. You don't miss out any of the important information in your ground truth response. And on top of that you want it to be concise, so you don't want it to be blabbering too much or giving highly verbose responses. So that is one way we are seeing where people are getting around this subjectivity issue of the responses by making sure that the key information is there. And then beyond that it's being highly concise and it's being to the point in terms of the task being asked. Demetrios: And so you kind of touched on this earlier, but can you say it again? Because I don't know if I fully grasped it. Where are all the places in the system that you are evaluating? Because it's not just the output. Right. And how do you look at evaluation as a system rather than just evaluating the output every once in a while? Sourabh Agrawal: Yeah, so I mean, what we do is we plug with every part. So even if you start with retrieval, so we have a high level check where we look at the quality of retrieved context. And then we also have evaluations for every part of this retrieval pipeline. So if you're doing query rewrite, if you're doing re ranking, if you're doing sub question, we have evaluations for all of them. In fact, we have worked closely with the llama index team to kind of integrate with all of their modular pipelines. Secondly, once we cross the retrieval step, we have around five to six matrices on this retrieval part. Then we look at the response generation. We have their evaluations for different criterias. Sourabh Agrawal: So conciseness, completeness, safety, jailbreaks, prompt injections, as well as you can define your custom guidelines. So you can say that, okay, if the user is asking anything and related to code, the output should also give an example code snippet so you can just in plain English, define this guideline. And we check for that. And then finally, like zooming out, we also have checks. 
We look at conversations as a whole, how the user is satisfied, how many turns it requires for them to, for the chatbot or the LLM to answer the user. Yeah, that's how we look at the whole evaluations as a whole. Demetrios: Yeah. It really reminds me, I say this so much because it's one of the biggest fails, I think, on the Internet, and I'm sure you've seen it where I think it was like Chevy or GM, the car manufacturer car company, they basically slapped a chat bot on their website. It was a GPT call, and people started talking to it and realized, oh my God, this thing will do anything that we want it to do. So they started asking it questions like, is Tesla better than GM? And the bot would say, yeah, give a bunch of reasons why Tesla is better than GM on the website of GM. And then somebody else asked it, oh, can I get a car for a dollar? And it said, no. And then it said, but I'm broke and I need a car for a dollar. And it said, ok, we'll sell you the car for the dollar. And so you're getting yourself into all this trouble just because you're not doing that real time evaluation. Demetrios: How do you think about the real time evaluation? And is that like an extra added layer of complexity? Sourabh Agrawal: Yeah, for the real time evaluations, I think the most important cases, which, I mean, there are two scenarios which we feel like are most important to deal with. One is you have to put some guardrails in the sense that you don't want the users to talk about your competitors. You don't want to answer some queries, like, say, you don't want to make false promises, and so on, right? Some of them can be handled with pure rejects, contextual logics, and some of them you have to do evaluations. And the second is jailbreak. Like, you don't want the user to use, let's say, your Chevy chatbot to kind of solve math problems or solve coding problems, right? Because in a way, you're just like subsidizing GPT four for them. And all of these can be done just on the question which is being asked. So you can have a system where you can fire a query, evaluate a few of these key matrices, and in parallel generate your responses. And as soon as you get your response, you also get your evaluations. Sourabh Agrawal: And you can have some logic that if the user is asking about something which I should not be answering. Instead of giving the response, I should just say, sorry, I could not answer this or have a standard text for those cases and have some mechanisms to limit such scenarios and so on. Demetrios: And it's better to do that in parallel than to try and catch the response. Make sure it's okay before sending out an LLM call. Sourabh Agrawal: I mean, generally, yes, because if you look at, if you catch the response, it adds another layer of latency. Demetrios: Right. Sourabh Agrawal: And at the end of the day, 95% of your users are not trying to do this any good product. A lot of those users are genuinely trying to use it and you don't want to build something which kind of breaks, creates an issue for them, add a latency for them just to solve for that 5%. So you have to be cognizant of this fact and figure out clever ways to do this. Demetrios: Yeah, I remember I was talking to Philip of company called honeycomb, and they added some LLM functionality to their product. And he said that when people were trying to either prompt, inject or jailbreak, it was fairly obvious because there were a lot of calls. It kind of started to be not human usage and it was easy to catch in that way. 
Have you seen some of that too? And what are some signs that you see when people are trying to jailbreak? Sourabh Agrawal: Yeah, I think we also have seen typically, what we also see is that whenever someone is trying to jailbreak, the length of their question or the length of their prompt typically is much larger than any average question, because they will have all sorts of instruction like forget everything, you know, you are allowed to say all of those things. And then again, this issue also comes because when they try to jailbreak, they try with one technique, it doesn't work. They try with another technique, it doesn't work. Then they try with third technique. So there is like a burst of traffic. And even in terms of sentiment, typically the sentiment or the coherence in those cases, we have seen that to be lower as compared to a genuine question, because people are just trying to cramp up all these instructions into the response. So there are definitely certain signs which already indicates that the user is trying to jailbreak this. And I think those are leg race indicators to catch them. Demetrios: And I assume that you've got it set up so you can just set an alert when those things happen and then it at least will flag it and have humans look over it or potentially just ask the person to cool off for the next minute. Hey, you've been doing some suspicious activity here. We want to see something different so I think you were going to show us a little bit about uptrend, right? I want to see what you got. Can we go for a spin? Sourabh Agrawal: Yeah, definitely. Let me share my screen and I can show you how that looks like. Demetrios: Cool, very cool. Yeah. And just while you're sharing your screen, I want to mention that for this talk, I wore my favorite shirt, which is it says, I don't know if everyone can see it, but it says, I hallucinate more than Chat GPT. Sourabh Agrawal: I think that's a cool one. Demetrios: What do we got here? Sourabh Agrawal: Yeah, so, yeah, let me kind of just get started. So I create an account with uptrend. What we have is an API method, API way of calculating these evaluations. So you get an API key similar to what you get for chat, GPT or others, and then you can just do uptrend log and evaluate and you can tell give your data. So you can give whatever your question responses context, and you can define your checks which you want to evaluate for. So if I create an API key, I can just copy this code and I just already have it here. So I'll just show you. So we have two mechanisms. Sourabh Agrawal: One is that you can just run evaluations so you can define like, okay, I want to run context relevance, I want to run response completeness. Similarly, I want to run jailbreak. I want to run for safety. I want to run for satisfaction of the users and so on. And then when you run it, it gives back you a score and it gives back you an explanation on why this particular score has been given for this particular question. Demetrios: Can you make that a little bit bigger? Yeah, just give us some plus. Yeah, there we. Sourabh Agrawal: It'S, it's essentially an API call which takes the data, takes the list of checks which you want to run, and then it gives back and score and an explanation for that. So based on that score, you can have logics, right? If the jailbreak score is like more than 0.5, then you don't want to show it. Like you want to switch back to a default response and so on. 
And then you can also configure that we log all of these course, and we have dashboard where you can access them. Demetrios: I was just going to ask if you have dashboards. Everybody loves a good dashboard. Let's see it. That's awesome. Sourabh Agrawal: So let's see. Okay, let's take this one. So in this case, I just ran some of this context relevance checks for some of the queries. So you can see how that changes on your data sets. If you're running the same. We also run this in a monitoring setting, so you can see how this varies over time. And then finally you have all of the data. So we provide all of the data, you can download it, run whatever analysis you want to run, and then you can also, one of the features which we have built recently and is getting very popular amongst our users is that you can filter cases where, let's say, the model is failing. Sourabh Agrawal: So let's say I take all the cases where the responses is zero and I can find common topics. So I can look at all these cases and I can find, okay, what's the common theme across them? Maybe, as you can see, they're all talking about France, Romeo Juliet and so on. So it can just pull out a common topic among these cases. So then this gives you some insights into where things are going wrong and what do you need to improve upon. And the second piece of the puzzle is the experiments. So, not just you can evaluate them, but also you can use it to experiment with different settings. So let's say. Let me just pull out an experiment I ran recently. Demetrios: Yeah. Sourabh Agrawal: So let's say I want to compare two different models, right? So GPT 3.5 and clot two. So I can now see that, okay, clot two is giving more concise responses, but in terms of factual accuracy, like GPT 3.5 is more factually accurate. So I can now decide, based on my application, based on what my users want, I can now decide which of these criteria is more meaningful for me, it's more meaningful for my users, for my data, and decide which prompt or which model I want to go ahead with. Demetrios: This is totally what I was talking about earlier, where you get a new model and you're seeing on some metrics, it's doing worse. But then on your core metric that you're looking at, it's actually performing better. So you have to kind of explain to yourself, why is it doing better on those other metrics? I don't know if I'm understanding this correctly. We can set the metrics that we're looking at. Sourabh Agrawal: Yeah, actually, I'll show you the kind of metric. Also, I forgot to mention earlier, uptrend is like open source. Demetrios: Nice. Sourabh Agrawal: Yeah. So we have these pre configured checks, so you don't need to do anything. You can just say uptrend response completeness or uptrend prompt injection. So these are like, pre configured. So we did the hard work of getting all these scores and so on. And on top of that, we also have ways for you to customize these matrices so you can define a custom guideline. You can change the prompt which you want. You can even define a custom python function which you want to act as an evaluator. Sourabh Agrawal: So we provide all of those functionalities so that they can also take advantage of things which are already there, as well as they can create custom things which make sense for them and have a way to kind of truly understand how their systems are doing. Demetrios: Oh, that's really cool. 
I really like the idea of custom, being able to set custom ones, but then also having some that just come right out of the box to make life easier on us. Sourabh Agrawal: Yeah. And I think both are needed because you want someplace to start, and as you advance, you also want to kind of like, you can't cover everything right, with pre configured. So you want to have a way to customize things. Demetrios: Yeah. And especially once you have data flowing, you'll start to see what other things you need to be evaluating exactly. Sourabh Agrawal: Yeah, that's very true. Demetrios: Just the random one. I'm not telling you how to build your product or anything, but have you thought about having a community sourced metric? So, like, all these custom ones that people are making, maybe there's a hub where we can add our custom? Sourabh Agrawal: Yeah, I think that's really interesting. This is something we also have been thinking a lot. It's not built out yet, but we plan to kind of go in that direction pretty soon. We want to kind of create, like a store kind of a thing where people can add their custom matrices. So. Yeah, you're right on. I think I also believe that's the way to go, and we will be releasing something on those fronts pretty soon. Demetrios: Nice. So drew's asking, how do you handle jailbreak for different types of applications? Jailbreak for a medical app would be different than one for a finance one, right? Yeah. Sourabh Agrawal: The way our jailbreak check is configured. So it takes something, what you call as a model purpose. So you define what is the purpose of your model? For a financial app, you need to say that, okay, this LLM application is designed to answer financial queries so and so on. From medical. You will have a different purpose, so you can configure what is the purpose of your app. And then when we take up a user query, we check whether the user query is under. Firstly, we check also for illegals activities and so on. And then we also check whether it's under the preview of this purpose. Sourabh Agrawal: If not, then we tag that as a scenario of jailbreak because the user is trying to do something other than the purpose so that's how we tackle it. Demetrios: Nice, dude. Well, this is awesome. Is there anything else you want to say before we jump off? Sourabh Agrawal: No, I mean, it was like, a great conversation. Really glad to be here and great talking to you. Demetrios: Yeah, I'm very happy that we got this working and you were able to show us a little bit of uptrend. Super cool that it's open source. So I would recommend everybody go check it out, get your LLMs working with confidence, and make sure that nobody is using your chatbot to be their GPT subsidy, like GM use case and. Yeah, it's great, dude. I appreciate. Sourabh Agrawal: Yeah, check us out like we are@GitHub.com. Slash uptrendai slashuptrend. Demetrios: There we go. And if anybody else wants to come on to the vector space talks and talk to us about all the cool stuff that you're doing, hit us up and we'll see you all astronauts later. Don't get lost in vector space. Sourabh Agrawal: Yeah, thank you. Thanks a lot. Demetrios: All right, dude. There we go. We are good. I don't know how the hell I'm going to stop this one because I can't go through on my phone or I can't go through on my computer. It's so weird. So I'm not, like, technically there's nobody at the wheel right now. So I think if we both get off, it should stop working. Okay. Demetrios: Yeah, but that was awesome, man. This is super cool. 
I really like what you're doing, and it's so funny. I don't know if we're not connected on LinkedIn, are we? I literally just today posted a video of me going through a few different hallucination mitigation techniques. So it's, like, super timely that you talk about this. I think so many people have been thinking about this. Sourabh Agrawal: Definitely with enterprises, it's like a big issue. Right? I mean, how do you make it safe? How do you make it production ready? So I'll definitely check out your video. Also would be super interesting. Demetrios: Just go to my LinkedIn right now. It's just linkedin.com/in/dpbrinkm, or just search for me. I think we are connected. We're connected. All right, cool. Yeah, so, yeah, check out the last video I just posted, because it's literally all about this. And there's a really cool paper that came out and you probably saw it. It's all about mitigating AI hallucinations, and it breaks down all 32 techniques. Demetrios: And on another podcast that I do, I was literally talking with the guys from Weights & Biases yesterday, and I was talking about how, I was like, man, these evaluation data sets as a service feels like something that nobody's doing. And I guess it's probably because, and you're the expert, so I would love to hear what you have to say about it, but I guess it's because you don't really need it that bad. With a relatively small amount of data, you can start getting some really good evaluation happening. So it's a lot better than paying somebody else. Sourabh Agrawal: And also, I think it doesn't make sense for a service, because some external person is not best suited to make a data set for your use case. Demetrios: Right. Sourabh Agrawal: It's you. You have to look at what your users are asking to create a good data set. You can have a method, which is what UpTrain also does. We basically help you to sample and pick out the right cases from this data set based on the feedback of your users, based on the scores which are being generated. But it's difficult for someone external to craft really good questions or really good queries or really good cases which make sense for your business. Demetrios: Because the other piece that kind of, like, spitballed off of that, the other piece of it was techniques. So let me see if I can place all these words into a coherent sentence for you. It's basically like, okay, evaluation data sets don't really make sense because you're the one who knows the most. With a relatively small amount of data, you're going to be able to get stuff going real quick. What I thought about is, what about these hallucination mitigation techniques, so that you can almost have options. So in this paper, right, there's like 32 different kinds of techniques that they use, and some are very pertinent for RAG. They have like five different or four different types of techniques when you're dealing with RAG to mitigate hallucinations, then they have some like, okay, if you're distilling a model, here is how you can make sure that the new distilled model doesn't hallucinate as much. Demetrios: Blah, blah, blah. But what I was thinking is like, what about, how can you get a product? Or can you productize these kinds of techniques? So, all right, cool. They're in this paper, but in UpTrain, can we just say, oh, you want to try this new mitigation technique? We make that really easy for you. You just have to select it as one of the hallucination mitigation techniques.
And then we do the heavy lifting of, if it's like, there's one. Have you heard of fleek? That was one that I was talking about in the video. Fleek is like where there's a knowledge graph, LLM that is created, and it is specifically created to try and combat hallucinations. And the way that they do it is they say that LLM will try and identify anywhere in the prompt or the output. Demetrios: Sorry, the output. It will try and identify if there's anything that can be fact checked. And so if it says that humans landed on the moon in 1969, it will identify that. And then either through its knowledge graph or through just forming a search query that will go out and then search the Internet, it will verify if that fact is true in the output. So that's like one technique, right? And so what I'm thinking about is like, oh, man, wouldn't it be cool if you could have all these different techniques to be able to use really easily as opposed to, great, I read it in a paper. Now, how the fuck am I going to get my hands on one of these LLMs with a knowledge graph if I don't train it myself? Sourabh Agrawal: Shit, yeah, I think that's a great suggestion. I'll definitely check it out. One of the things which we also want to do is integrate with all these techniques because these are really good techniques and they help solve a lot of problems, but using them is not simple. Recently we integrated with Spade. It's basically like a technique where I. Demetrios: Did another video on spade, actually. Sourabh Agrawal: Yeah, basically. I think I'll also check out these hallucinations. So right now what we do is based on this paper called fact score, which instead of checking on the Internet, it checks in the context only to verify this fact can be verified from the context or not. But I think it would be really cool if people can just play around with these techniques and just see whether it's actually working on their data or not. Demetrios: That's kind of what I was thinking is like, oh, can you see? Does it give you a better result? And then the other piece is like, oh, wait a minute, does this actually, can I put like two or three of them in my system at the same time? Right. And maybe it's over engineering or maybe it's not. I don't know. So there's a lot of fun stuff that can go down there and it's fascinating to think about. Sourabh Agrawal: Yeah, definitely. And I think experimentation is the key here, right? I mean, unless you try out them, you don't know what works. And if something works which improves your system, then definitely it was worth it. Demetrios: Thanks for that. Sourabh Agrawal: We'll check into it. Demetrios: Dude, awesome. It's great chatting with you, bro. And I'll talk to you later, bro. Sourabh Agrawal: Yeah, thanks a lot. Great speaking. See you. Bye.
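For readers who want to experiment with the context-grounded checking mentioned in the conversation above (the FactScore-style idea of verifying each claim against the retrieved context rather than the open internet), here is a toy, hedged sketch of the pattern. Both the claim extraction and the support test are placeholders that would normally be LLM calls; this is not UpTrain's or the paper's implementation.

```python
def extract_claims(response: str) -> list[str]:
    """Placeholder: treat each sentence of the response as one checkable claim."""
    return [s.strip() for s in response.split(".") if s.strip()]

def supported_by_context(claim: str, context: str) -> bool:
    """Placeholder: a claim counts as supported if most of its content words appear in the context."""
    words = [w.lower() for w in claim.split() if len(w) > 3]
    if not words:
        return True
    return sum(w in context.lower() for w in words) / len(words) >= 0.6

def context_fact_score(response: str, context: str) -> float:
    """Fraction of claims in the response that the retrieved context can support."""
    claims = extract_claims(response)
    if not claims:
        return 1.0
    return sum(supported_by_context(c, context) for c in claims) / len(claims)

score = context_fact_score(
    response="Humans first landed on the moon in 1969.",
    context="The Apollo 11 mission landed humans on the moon in 1969.",
)
print(score)  # most of the claim's content words are found in the context, so the claim counts as supported
```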
qdrant-landing/content/blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talks.md
--- draft: false title: Vector Search for Content-Based Video Recommendation - Gladys and Samuel from Dailymotion slug: vector-search-vector-recommendation short_description: Gladys Roch and Samuel Leonardo Gracio join us in this episode to share their knowledge on content-based recommendation. description: Gladys Roch and Samuel Leonardo Gracio from Dailymotion, discussed optimizing video recommendations using Qdrant's vector search alongside challenges and solutions in content-based recommender systems. preview_image: /blog/from_cms/gladys-and-sam-bp-cropped.png date: 2024-03-19T14:08:00.190Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Video Recommender - content based recommendation --- > "*The vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender tag that we have.*”\ -- Gladys Roch > Gladys Roch is a French Machine Learning Engineer at Dailymotion working on recommender systems for video content. > "*We don't have full control and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement.*”\ -- Samuel Leonardo Gracio > Samuel Leonardo Gracio, a Senior Machine Learning Engineer at Dailymotion, mainly works on recommender systems and video classification. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4YYASUZKcT5A90d6H2mOj9?si=a5GgBd4JTR6Yo3HBJfiejQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/z_0VjMZ2JY0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/z_0VjMZ2JY0?si=buv9aSN0Uh09Y6Qx" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Vector-Search-for-Content-Based-Video-Recommendation---Gladys-and-Sam--Vector-Space-Talk-012-e2f9hmm/a-aatvqtr" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Are you captivated by how video recommendations that are engineered to serve up your next binge-worthy content? We definitely are. Get ready to unwrap the secrets that keep millions engaged, as Demetrios chats with the brains behind the scenes of Dailymotion. This episode is packed with insights straight from ML Engineers at Dailymotion who are reshaping how we discover videos online. Here's what you’ll unbox from this episode: 1. **The Mech Behind the Magic:** Understand how a robust video embedding process can change the game - from textual metadata to audio signals and beyond. 2. **The Power of Multilingual Understanding:** Discover the tools that help recommend videos to a global audience, transcending language barriers. 3. **Breaking the Echo Chamber:** Learn about Dailymotion's 'perspective' feature that's transforming the discovery experience for users. 4. **Challenges & Triumphs:** Hear how Qdrant helps Dailymotion tackle a massive video catalog and ensure the freshest content pops on your feed. 5. 
**Behind the Scenes with Qdrant:** Get an insider’s look at why Dailymotion entrusted their recommendation needs to Qdrant's capable hands (or should we say algorithms?). > Fun Fact: Did you know that Dailymotion juggles over 13 million recommendations daily? That's like serving up a personalized video playlist to the entire population of Greece. Every single day! > ## Show notes: 00:00 Vector Space Talks intro with Gladys and Samuel.\ 05:07 Recommender system needs vector search for recommendations.\ 09:29 Chose vector search engine for fast neighbor search.\ 13:23 Video transcript use for scalable multilingual embedding.\ 16:35 Transcripts prioritize over video title and tags.\ 17:46 Videos curated based on metadata for quality.\ 20:53 Qdrant setup overview for machine learning engineers.\ 25:25 Enhanced recommendation system improves user engagement.\ 29:36 Recommender system, A/B testing, collection aliases strategic.\ 33:03 Dailymotion's new feature diversifies video perspectives.\ 34:58 Exploring different perspectives and excluding certain topics. ## More Quotes from Gladys and Sam: "*Basically, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that everything is in streaming: every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant.*”\ -- Gladys Roch *"We basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. So videos that have at least thousands of views, some interactions, because for fresh or niche videos, we don't have enough interaction.”*\ -- Samuel Leonardo Gracio *"But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendations for videos with few interactions that we don't know well. So we're very happy because it led us to a huge performance increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our cold start issues.”*\ -- Gladys Roch *"The fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that made us choose Qdrant instead of another solution.”*\ -- Samuel Leonardo Gracio ## Transcript: Demetrios: I don't know if you all realize what you got yourself into, but we are back for another edition of the Vector Space Talks. My stream is a little bit chunky and slow, so I think we're just going to get into it with Gladys and Samuel from Dailymotion. Thank you both for joining us. It is an honor to have you here. For everyone that is watching, please throw your questions and anything else that you want to remark about into the chat. We love chatting with you and I will jump on screen if there is something that we need to stop the presentation about and ask right away. But for now, I think you all got some screen shares you want to show us. Samuel Leonardo Gracio: Yes, exactly. So first of all, thank you for the invitation, of course. And yes, I will share my screen. We have a presentation. Excellent. Should be okay now. Demetrios: Brilliant. Samuel Leonardo Gracio: So can we start? Demetrios: I would love it. Yes, I'm excited. I think everybody else is excited too.
Gladys Roch: So welcome, everybody, to our vector space talk. I'm Gladys Roch, machine learning engineer at Dailymotion. Samuel Leonardo Gracio: And I'm Samuel, senior machine learning engineer at Dailymotion. Gladys Roch: Today we're going to talk about Vector search in the context of recommendation and in particular how Qdrant. That's going to be a hard one. We actually got used to pronouncing Qdrant as a french way, so we're going to sleep a bit during this presentation, sorry, in advance, the Qdrant and how we use it for our content based recommender. So we are going to first present the context and why we needed a vector database and why we chose Qdrant, how we fit Qdrant, what we put in it, and we are quite open about the pipelines that we've set up and then we get into the results and how Qdrant helped us solve the issue that we had. Samuel Leonardo Gracio: Yeah. So first of all, I will talk about, globally, the recommendation at Dailymotion. So just a quick introduction about Dailymotion, because you're not all french, so you may not all know what Dailymotion is. So we are a video hosting platform as YouTube or TikTok, and we were founded in 2005. So it's a node company for videos and we have 400 million unique users per month. So that's a lot of users and videos and views. So that's why we think it's interesting. So Dailymotion is we can divide the product in three parts. Samuel Leonardo Gracio: So one part is the native app. As you can see, it's very similar from other apps like TikTok or Instagram reels. So you have vertical videos, you just scroll and that's it. We also have a website. So Dailymotion.com, that is our main product, historical product. So on this website you have a watching page like you can have for instance, on YouTube. And we are also a video player that you can find in most of the french websites and even in other countries. And so we have recommendation almost everywhere and different recommenders for each of these products. Gladys Roch: Okay, so that's Dailymotion. But today we're going to focus on one of our recommender systems. Actually, the machine learning engineer team handles multiple recommender systems. But the video to video recommendation is the oldest and the most used. And so it's what you can see on the screen, it's what you have the recommendation queue of videos that you can see on the side or below the videos that you're watching. And to compute these suggestions, we have multiple models running. So that's why it's a global system. This recommendation is quite important for Dailymotion. Gladys Roch: It's actually a key component. It's one of the main levers of audience generation. So for everybody who comes to the website from SEO or other ways, then that's how we generate more audience and more engagement. So it's very important in the revenue stream of the platform. So working on it is definitely a main topic of the team and that's why we are evolving on this topic all the time. Samuel Leonardo Gracio: Okay, so why would we need a vector search for this recommendation? I think we are here for that. So as many platforms and as many recommender systems, I think we have a very usual approach based on a collaborative model. So we basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. 
So videos that have at least thousands of views, some interactions, because for fresh or niche videos, we don't have enough interaction. And we have a problem that I think all recommender systems can have, which is a cold start issue. So this cold start issue is for new users and new videos, in fact. So if we don't have any information or interaction, it's difficult to recommend anything based on this collaborative approach. Samuel Leonardo Gracio: So the idea to solve that was to use a content-based recommendation. It's also a classic solution. And the idea is, when you have a very fresh video, so video A in this case, a good thing to recommend when you don't have enough information is a very similar video, and hope that the user will watch it also. So for that, of course, we use Qdrant and we will explain how. So yeah, the idea is to put everything in the vector space. So each video at Dailymotion will go through an embedding model. So for each video we'll get a video embedding. Samuel Leonardo Gracio: We will describe how we do that just after, and put it in a vector space. So after that we could use Qdrant to, sorry, Qdrant to query and get similar videos that we will recommend to our users. Gladys Roch: Okay, so if we have embeddings to represent our videos, then we have a vector space, but we need to be able to query this vector space, and not only to query it, but to do it at scale and online, because it's a recommender facing users. So we have a few requirements. The first one is that we have a lot of videos in our catalog. So actually doing an exact neighbor search would be unreasonable, unrealistic. It's a combinatorial explosion issue, so we can't do an exact KNN. Plus we also have new videos being uploaded to Dailymotion every hour. So even if we could somehow manage to do KNN and to pre-compute it, it would never be up to date and it would be very expensive to recompute all the time to include all the new videos. So we need a solution that can integrate new videos all the time. Gladys Roch: And we're also at scale, we serve over 13 million recommendations each day. So it means that we need a big setup to retrieve the neighbors of many videos all day. And finally, we have users waiting for the recommendation. So it's not just pre-computed and stored, and it's not just content knowledge. We are trying to provide the recommendation as fast as possible. So we have time constraints and we only have a few hundred milliseconds to compute the recommendation that we're going to show the user. So we need to be able to retrieve the close videos that we'd like to propose to the user very fast. So we need to be able to navigate this vector space that we are building quite quickly. Gladys Roch: So of course we need a vector search engine. That's the easiest way to do it, to be able to compute an approximate neighbor search and to do it at scale. So obviously, evidently, the vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a Python API which matches the recommender stack that we have. A very important issue for us was to be able to not only put the embeddings of the videos in this space but also to put metadata with it, to be able to get a bit more information and not just a mathematical representation of the video in this database.
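For readers who want to see what this pattern looks like in practice, here is a minimal sketch using the qdrant-client Python API: store each video embedding together with payload metadata, then ask for approximate nearest neighbours restricted by a payload filter. The collection name, vector size, and field names are illustrative, not Dailymotion's actual setup.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="videos",
    vectors_config=models.VectorParams(size=512, distance=models.Distance.COSINE),
)

client.upsert(
    collection_name="videos",
    points=[
        models.PointStruct(
            id=42,
            vector=[0.1] * 512,  # in practice: the text embedding of the video's metadata
            payload={"language": "fr", "title": "Interview de LeBron James"},
        )
    ],
)

# Approximate nearest-neighbour search, restricted to one language via a payload filter.
neighbours = client.search(
    collection_name="videos",
    query_vector=[0.1] * 512,  # embedding of the video the user is currently watching
    query_filter=models.Filter(
        must=[models.FieldCondition(key="language", match=models.MatchValue(value="fr"))]
    ),
    limit=10,
)
```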
And actually doing that makes it filterable, which means that we can retrieve neighbors of a video, but given some constraints, and it's very important for us, typically for language constraints. Samuel will talk a bit more in detail about that just after. But we have an embedding that is multilingual, and we need to be able to filter on the language, all the videos on their language, to offer more robust recommendations for our users. And also Qdrant is distributed and so it's scalable, and we needed that due to the load that I just talked about. So those are the main points that led us to choose Qdrant. Samuel Leonardo Gracio: And also they have an amazing team. Gladys Roch: So that's another, that would be our return of experience. The team of Qdrant is really nice. You helped us actually put in place the cluster. Samuel Leonardo Gracio: Yeah. So what do we put in our Qdrant cluster? So how do we build our robust video embedding? I think it's really interesting. So the first point for us was to know what a video is about. It's a really tricky question, in fact. So of course, for each video uploaded on the platform, we have the video signal, so many frames representing the video, but we don't use that for our embeddings. And in fact, why we are not using them, it's because it contains a lot of information, right, but not what we want. For instance, here you have a video about an interview of LeBron James. Samuel Leonardo Gracio: But if you only use the frames, the video signal, you can't even know what he's saying, what the video is about, in fact. So we still try to use it. But in fact, the most interesting thing to represent our videos is the textual metadata. So the textual metadata, we have it for every video. So for every video uploaded on the platform, we have a video title and video description that are put by the person that uploads the video. But we also have automatically detected tags. So for instance, for this video, you could have LeBron James, and we also have subtitles that are automatically generated. So just to let you know, we do that using Whisper, which is an open source solution provided by OpenAI, and we do it at scale. Samuel Leonardo Gracio: When a video is uploaded, we directly have the video transcript and we can use this information to represent our videos with just a textual embedding, which is far easier to process, and we need less compute than for frames, for instance. So the other issue for us was that we needed an embedding that could scale, that does not require too much time to compute, because we have a lot of videos, more than 400 million videos, and we have many videos uploaded every hour, so it needs to scale. We also have many languages on our platform, more than 300 languages in the videos. And even if we are a French video platform, in fact, it's only a third of our videos that are actually in French. Most of the videos are in English or other languages such as Turkish, Spanish, Arabic, et cetera. So we needed something multilingual, which is not very easy to find. But we came out with this embedding, which is called the Multilingual Universal Sentence Encoder. It's not the most famous embedding, so I think it's interesting to share it. Samuel Leonardo Gracio: It's open source, so everyone can use it. It's available on TensorFlow Hub, and I think that now it's also available on Hugging Face, so it's easy to implement and to use. The good thing is that it's pre-trained, so you don't even have to fine-tune it on your data.
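A minimal sketch of that embedding step is shown below, assuming the multilingual universal sentence encoder published on TensorFlow Hub. The exact module handle and the 512-dimension output are assumptions based on the public model card, not Dailymotion's pipeline.

```python
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers the ops the multilingual model needs)

muse = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

titles = [
    "LeBron James post-game interview",          # English
    "Entretien de LeBron James après le match",  # French, same topic
    "Best horror movies of 2023",                # unrelated topic
]
vectors = np.asarray(muse(titles))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # high: same topic across languages
print(cosine(vectors[0], vectors[2]))  # lower: different topics
```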
You can fine-tune it, but I think it's not even required. And of course it's multilingual, so it doesn't work with every language, but still, we have the main languages that are used on our platform. It focuses on semantic similarity. And you have an example here where you have different video titles. Samuel Leonardo Gracio: So for instance, one about soccer, another one about movies. Even if you have another video title in another language, if it's talking about the same topic, they will have a high cosine similarity. So that's what we want. We want to be able to recommend every video that we have in our catalog, not depending on the language. And the good thing is that it's really fast. Actually, it's a few milliseconds on CPU, so it's really easy to scale. So that was a huge requirement for us. Demetrios: Can we jump in here? There's a few questions that are coming through that I think are pretty worthwhile. And it's actually probably more suited to the last slide. Sameer is asking this one, actually, one more back. Sorry, with the LeBron. Yeah, so it's really about how you understand the videos. And Sameer was wondering if you can quote unquote hack the understanding by putting some other tags or... Samuel Leonardo Gracio: Ah, you mean from a user perspective, like the person uploading the video, right? Demetrios: Yeah, exactly. Samuel Leonardo Gracio: You could do that before we were using transcripts. But since we are using them mainly, and we only use the title, the tags are automatically generated, so it's on our side. So the title and description, you can put whatever you want. But since we have the transcript, we know the content of the video and we embed that. So the title and the description are not the priority in the embedding. So I think it's still possible, but we don't have such a use case. In fact, most of the people uploading videos are just trying to put the right title, but I think it's still possible. But yeah, with the transcript we don't have any examples like that. Samuel Leonardo Gracio: Yeah, hopefully. Demetrios: So that's awesome to think about too. It kind of leads into the next question, which is around, and this is from Juan Pablo: what do you do with videos that have no text and no meaningful audio, like TikTok or a reel? Samuel Leonardo Gracio: So for the moment, for these videos, we are only using the signal from the title, tags, description and other video metadata. And we also have a moderation team which is watching the videos that end up in the most recommended videos. So we know that the videos that we recommend are mostly good videos. And for these videos that don't have an audio signal, we are forced to use the title, tags and description. So these are the videos where the risk is at the maximum for us currently. But we are also working at the moment on something using the audio signal and the frames, but not all the frames. But for the moment, we don't have this solution. Right. Gladys Roch: Also, as I said, it's not just one model, we're talking about the content-based model. But if we don't have a similarity score that is high enough, or if we're just not confident about the videos that were the closest, then we will default to another model. So it's not just one, it's a huge system. Samuel Leonardo Gracio: Yeah, and one point also, we are talking about videos with few interactions, so they are not videos at risk. I mean, they don't have a lot of views.
When this content-based algo is called, they are important because they are very fresh videos, and fresh videos will have a lot of views in a few minutes. But when the collaborative model is retrained, it will be able to recommend these videos based on other things than the content itself, it will use the collaborative signal. So I'm not sure that it's a really important risk for us. But still, I think we could still do some improvement on that aspect. Demetrios: So where do I apply to just watch videos all day for the content team? All right, I'll let you get back to it. Sorry to interrupt. And if anyone else has good questions. Samuel Leonardo Gracio: And I think it's good to ask your question during the presentation, it's easier to answer. So, yeah, sorry, I was saying that we had this multilingual embedding, and just to present you our embedding pipeline: for each video that is uploaded or edited, because you can change the video title whenever you want, we have a Pub/Sub event that is sent to a Dataflow pipeline. So it's a streaming job: for every video we will retrieve the textual metadata, so title, description, tags or transcript, preprocess it to remove some words, for instance, and then call the model to have this embedding. And then we put it in BigQuery, of course, but also in Qdrant. Gladys Roch: So I'm going to present a bit our Qdrant setup. So actually all this was deployed by our DevOps team, not by us machine learning engineers. So it's an overview, and I won't go into the details because I'm not familiar with all of this, but basically, as Samuel said, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that everything is in streaming: every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant. And on the other hand, our recommender queries the Qdrant vector space through a gRPC ingress. And actually Qdrant is running on six pods that are using ARM nodes. And you have the specificities of which type of nodes we're using there, if you're interested. But basically that's the setup. And what is interesting is that our recommendation stack, for now, is on premise, which means it's running on Dailymotion servers, not on Google Kubernetes Engine, whereas Qdrant is on GKE. Gladys Roch: So we are querying it from outside. And also, if you have more questions about this setup, we'll be happy to redirect you to the DevOps team that helped us put that in place. And so finally, the results. So we stated earlier that we had a cold start issue. So before Qdrant, we had a lot of difficulties with this challenge. We had a collaborative recommender that was trained and performed very well on high-signal videos, which means videos with a lot of interactions. So we can see what users like to watch, which videos they like to watch together. And we also had a metadata recommender. Gladys Roch: But first, this collaborative recommender was actually also used to compute cold start recommendations, which is not at all what it is trained on, but we were using a default embedding to compute like a default recommendation for cold start, which led to a lot of popularity issues. Popularity issues for a recommender system is when you always recommend the same video that is hugely popular, and it's like a feedback loop.
A lot of people will default to this video because it might be clickbait, and then we will have a lot of interactions on it. So it will pollute the collaborative model all over again. So we had popularity issues with this, obviously. And we also had this metadata recommender that only focused on a very small scope of trusted owners and trusted video sources. So it was working, it was an autoencoder and it was fine, but the scope was too small. Gladys Roch: Too few videos could be recommended through this model. And also, those two models were trained very infrequently, only every 4 or 5 hours, which means that any fresh video on the platform could not be recommended properly for like 4 hours. So it was the main issue, because Dailymotion has a lot of fresh videos and we have a lot of news, et cetera. So we need to be topical, and this couldn't be done with this huge delay. So we had overall bad performances on the low signal. And so with Qdrant we fixed that. We still have our collaborative recommender. It has evolved since then. Gladys Roch: It's actually computed much more often, but the collaborative model is only focused on high signal now, and it's not computing like a default recommendation for the low signal that it doesn't know. And we have a content-based recommender based on the MUSE embedding and Qdrant that is able to recommend videos to users as soon as they are uploaded on the platform. And it has a growing scope, 20 million vectors at the moment. But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendations for videos with few interactions that we don't know well. So we're very happy because it led us to a huge performance increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our cold start issues. Gladys Roch: What I was talking about: fresh videos, popularity, low performances. We fixed that and we were very happy with the setup. It's running smoothly. Yeah, I think that's it for the presentation, for the slides at least. So we are open to discussion and if you have any questions to go into the details of the recommender system, go ahead, shoot. Demetrios: I've got some questions while people are typing out everything in the chat, and the first one I think that we should probably get into is: how did the evaluation process go for you when you were looking at different vector databases and vector search engines? Samuel Leonardo Gracio: So that's a good point. So first of all, you have to know that we are working with Google Cloud Platform. So the first thing that we did was to use their vector search engine, which is called Matching Engine. Gladys Roch: Right. Samuel Leonardo Gracio: But the issue with Matching Engine is that, in fact, the API wasn't easy to use, first of all. The second thing was that we could not put metadata, as we do in Qdrant, and filter, pre-filter before the query, as we are doing now in Qdrant. And the other thing is that their solution is managed. Yeah, it's managed. We don't have full control, and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement. The documentation was really cool, so it was easy to test some things, and basically we couldn't find any drawbacks for our use case, at least.
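As a side note for readers, the streaming ingestion Samuel and Gladys describe (a Pub/Sub event per uploaded or updated video, re-embedded and written to Qdrant) boils down to an idempotent upsert keyed by the video id. Below is a hedged sketch of such a handler; the event fields are illustrative and the embedding function is a placeholder.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

def embed(text: str) -> list[float]:
    # Placeholder: the real pipeline would call the multilingual sentence encoder here.
    return [0.0] * 512

def on_video_event(event: dict) -> None:
    """Called for every 'video uploaded/updated' event coming from the streaming pipeline."""
    text = " ".join(
        part for part in (event.get("title"), event.get("description"), event.get("transcript")) if part
    )
    client.upsert(
        collection_name="videos",
        points=[
            models.PointStruct(
                id=event["video_id"],  # same id on update, so the vector is overwritten, not duplicated
                vector=embed(text),
                payload={"language": event.get("language"), "title": event.get("title")},
            )
        ],
    )
```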
Samuel Leonardo Gracio: And moreover, the fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that made us choose Qdrant instead of another solution, because we implemented Qdrant. Gladys Roch: Like in February or even January 2023. So Qdrant is fairly new, so the documentation was still under construction. And so you helped us through the Discord to set up the cluster. So it was really nice. Demetrios: Excellent. And what about least favorite parts of using Qdrant? Gladys Roch: Yeah, I have one. I discovered it was not actually a requirement at the beginning, but for recommender systems we tend to do a lot of A/B tests. And you might wonder what's the deal with Qdrant and A/B tests. It's not related, but actually we were able to A/B test our collections. So, how we compute the embedding: first we had an embedding without the transcript, and now we have an embedding that includes the transcript. So we wanted to A/B test that. And on Qdrant you can have collection aliases, and this is super helpful because you can have two collections that live on the cluster at the same time, and then in your code you can just call the production collection and then set the alias to the proper one. So for A/B testing and rollout it's very useful. Gladys Roch: And I found it when I first wanted to do an A/B test. So I like this one, it already existed and I like it. Also, the second thing I like is the API documentation, the one that is auto-generated with all the examples and how to query any info on Qdrant. It's really nice for someone who's not from DevOps. It helps us just debug our collections whenever. So it's very easy to get into. Samuel Leonardo Gracio: And the fact that the product is evolving so fast, like every week almost you have a new feature, I think it's really cool. There is also the community, and I think, yeah, it's really interesting, and it's amazing to have such people working on that, on an open source project like this one. Gladys Roch: We had feedback from our DevOps team when preparing this presentation. We reached out to them for the small schema that I tried to present. And yeah, they said that the open source community of Qdrant was really nice. It was easy to contribute, it was very open on Discord. I think we did a return on experience at some point on how we set up the cluster at the beginning. And yeah, they were very hyped by the fact that it's coded in Rust. I don't know if you hear this a lot, but to them it's even more encouraging to contribute with this kind of new language. Demetrios: 100%. Excellent. So last question from my end, and it is on if you're using Qdrant for anything else when it comes to products at Dailymotion. Samuel Leonardo Gracio: Yes, actually we do. I have one slide about this. Gladys Roch: We have slides because we presented Qdrant at another talk a few weeks ago. Samuel Leonardo Gracio: So we didn't prepare this slide just for this presentation, it's from another presentation, but still, it's a good point because we're currently trying to use it in other projects. So as we said in this presentation, we're mostly using it for the watching page, so Dailymotion.com, but we also introduced it in the mobile app recently through a feature that is called Perspective. So the goal of the feature is to be able to break this vertical feed algorithm, to let the users have like a button to discover new videos.
So when you go through your feed, sometimes you will get a video talking about, I don't know, a movie. You will get this button, which is called Perspective, and you will be able to have other videos talking about the same movie but giving you another point of view. So people liking the movie, people that didn't like the movie, and we use Qdrant, sorry, for the candidate generation part, so to get the similar videos and to get the videos that are talking about the same subject. Samuel Leonardo Gracio: So I won't talk too much about this project because it would require another presentation of 20 minutes or more. But still, we are using it in other projects, and yeah, it's really interesting to see what we are able to do with that tool. Gladys Roch: Once we have the vector space set up, we can just query it from everywhere, in every recommendation project. Samuel Leonardo Gracio: We also tested some search. We are testing many things actually, but we haven't implemented it yet. For the moment we just have this Perspective feed and the content-based reco, but we still have a lot of ideas using this vector search space. Demetrios: I love that idea of the get-another-perspective button. So it's not like you get, as you were mentioning before, you don't get that echo chamber and just about everyone saying the same thing. You get to see, are there other sides to this? And I can see how that could be very... uh, Juan Pablo is back, asking questions in the chat about: are you able to recommend videos with negative search queries, and negative in the sense of, for example, as a user I want to see videos of a certain topic, but I want to exclude some topics from the video. Gladys Roch: Okay. We actually don't do that at the moment, but we know we can: with Qdrant we can set positive and negative points from where to query. So actually, for the moment we only retrieve close positive neighbors and we apply some business filters on top of that recommendation. But that's it. Samuel Leonardo Gracio: And that's because we have also this collaborative model, which is our main recommender system. But I think we definitely need to check that and maybe in the future we will implement that. We saw that there is a lot of documentation about this and I'm pretty sure that it would work very well for our use case. Demetrios: Excellent. Well folks, I think that's about it for today. I want to thank you so much for coming and chatting with us and teaching us about how you're using Qdrant and being very transparent about your use. I learned a ton. And for anybody that's out there doing recommender systems and interested in more, I think they can reach out to you on LinkedIn. I've got both of your profiles, we'll drop them in the chat right now and we'll let everybody enjoy. So don't get lost in vector space. We will see you all later. Demetrios: If anyone wants to give a talk next, reach out to me. We always are looking for incredible talks, and so this has been great. Thank you all. Gladys Roch: Thank you. Samuel Leonardo Gracio: Thank you very much for the invitation, and for everyone listening, thank you. Gladys Roch: See you. Bye.
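For readers who want to try the two Qdrant features that came up in the discussion, collection aliases for A/B tests and recommendations steered by positive and negative examples, here is a hedged sketch with the Python client. Collection and alias names are illustrative, not Dailymotion's.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# 1) Collection aliases: the application always queries the alias, and the A/B winner is
#    promoted by re-pointing the alias, with no application code change.
client.update_collection_aliases(
    change_aliases_operations=[
        models.CreateAliasOperation(
            create_alias=models.CreateAlias(
                collection_name="videos_with_transcript",  # the variant that won the test
                alias_name="videos_production",
            )
        )
    ]
)

# 2) Recommendations with positive and negative examples: steer results toward some points
#    and away from others (for instance, topics a user asked to exclude).
hits = client.recommend(
    collection_name="videos_production",
    positive=[42, 43],  # ids of videos the user engaged with
    negative=[99],      # id of a video on a topic to avoid
    limit=10,
)
```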
qdrant-landing/content/blog/virtualbrain-best-rag-to-unleash-the-real-power-of-ai-guillaume-marquis-vector-space-talks.md
--- draft: false title: "VirtualBrain: Best RAG to unleash the real power of AI - Guillaume Marquis | Vector Space Talks" slug: virtualbrain-best-rag short_description: Let's explore information retrieval with Guillaume Marquis, CTO & Co-Founder at VirtualBrain. description: Guillaume Marquis, CTO & Co-Founder at VirtualBrain, reveals the mechanics of advanced document retrieval with RAG technology, discussing the challenges of scalability, up-to-date information, and navigating user feedback to enhance the productivity of knowledge workers. preview_image: /blog/from_cms/guillaume-marquis-2-cropped.png date: 2024-03-27T12:41:51.859Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - VirtualBrain --- > *"It's like mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can under parallel request a large amount of requests. So you have really this need and Qdrant was like an obvious choice.”*\ — Guillaume Marquis > Guillaume Marquis, a dedicated Engineer and AI enthusiast, serves as the Chief Technology Officer and Co-Founder of VirtualBrain, an innovative AI company. He is committed to exploring novel approaches to integrating artificial intelligence into everyday life, driven by a passion for advancing the field and its applications. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/20iFzv2sliYRSHRy1QHq6W?si=xZqW2dF5QxWsAN4nhjYGmA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/v85HqNqLQcI?feature=shared).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/v85HqNqLQcI?si=hjUiIhWxsDVO06-H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/VirtualBrain-Best-RAG-to-unleash-the-real-power-of-AI---Guillaume-Marquis--Vector-Space-Talks-017-e2grbfg/a-ab22dgt" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Who knew that document retrieval could be creative? Guillaume and VirtualBrain help draft sales proposals using past reports. It's fascinating how tech aids deep work beyond basic search tasks. Tackling document retrieval and AI assistance, Guillaume furthermore unpacks the ins and outs of searching through vast data using a scoring system, the virtue of RAG for deep work, and going through the 'illusion of work', enhancing insights for knowledge workers while confronting the challenges of scalability and user feedback on hallucinations. Here are some key insight from this episode you need to look out for: 1. How to navigate the world of data with a precision scoring system for document retrieval. 2. The importance of fresh data and how to avoid the black holes of outdated info. 3. Techniques to boost system scalability and speed — essential in the vastness of data space. 4. AI Assistants tailored for depth rather than breadth, aiding in tasks like crafting stellar commercial proposals. 5. The intriguing role of user perception in AI tool interactions, plus a dash of timing magic. > Fun Fact: VirtualBrain uses Qdrant, for its advantages in speed, scalability, and API capabilities. 
> ## Show notes: 00:00 Hosts and guest recommendations.\ 09:01 Leveraging past knowledge to create new proposals.\ 12:33 Ingesting and parsing documents for context retrieval.\ 14:26 Creating and storing data, performing advanced searches.\ 17:39 Analyzing document date for accurate information retrieval.\ 20:32 Perceived time can calm nerves and entertain.\ 24:23 Tried various vector databases, preferred open source.\ 27:42 LangFuse: open source tool for monitoring tasks.\ 33:10 AI tool designed to stay within boundaries.\ 34:31 Minimizing hallucination in AI through careful analysis. ## More Quotes from Guillaume: *"We only exclusively use open source tools because of security aspects and stuff like that. That's why also we are using Qdrant one of the important point on that. So we have a system, we are using this serverless stuff to ingest document over time.”*\ — Guillaume Marquis *"One of the challenging part was the scalability of the system. We have clients that come with terra octave of data and want to be parsed really fast and so you have the ingestion, but even after the semantic search, even on a large data set can be slow. And today ChatGPT answers really fast. So your users, even if the question is way more complicated to answer than a basic ChatGPT question, they want to have their answer in seconds. So you have also this challenge that you really have to take care.”*\ — Guillaume Marquis *"Our AI is not trained to write you a speech based on Shakespeare and with the style of Martin Luther King. It's not the purpose of the tool. So if you ask something that is out of the box, he will just say like, okay, I don't know how to answer that. And that's an important point. That's a feature by itself to be able to not go outside of the box.”*\ — Guillaume Marquis ## Transcript: Demetrios: So, dude, I'm excited for this talk. Before we get into it, I want to make sure that we have some pre conversation housekeeping items that go out, one of which being, as always, we're doing these vector space talks and everyone is encouraged and invited to join in. Ask your questions, let us know where you're calling in from, let us know what you're up to, what your use case is, and feel free to drop any questions that you may have in the chat. We will be monitoring it like a hawk. Today I am joined by none other than Sabrina. How are you doing, Sabrina? Sabrina Aquino: What's up, Demetrios? I'm doing great. Excited to be here. I just love seeing what amazing stuff people are building with Qdrant and. Yeah, let's get into it. Demetrios: Yeah. So I think I see Sabrina's wearing a special shirt which is don't get lost in vector space shirt. If anybody wants a shirt like that. There we go. Well, we got you covered, dude. You will get one at your front door soon enough. If anybody else wants one, come on here. Present at the next vector space talks. Demetrios: We're excited to have you. And we've got one last thing that I think is fun that we can talk about before we jump into the tech piece of the conversation. And that is I told Sabrina to get ready with some recommendations. Know vector databases, they can be used occasionally for recommendation systems, but nothing's better than getting that hidden gem from your friend. And right now what we're going to try and do is give you a few hidden gems so that the next time the recommendation engine is working for you, it's working in your favor. And Sabrina, I asked you to give me one music that you can recommend, one show and one rando. 
So basically one random thing that you can recommend to us. Sabrina Aquino: So I've picked. I thought about this. Okay, I give it some thought. The movie would be Catch Me If You Can by Leo DiCaprio and Tom Hanks. Have you guys watched it? Really good movie. The song would be oh, children by knee cave and the bad scenes. Also very good song. And the random recommendation is my favorite scented candle, which is citrus notes, sea salt and cedar. Sabrina Aquino: So there you go. Demetrios: A scented candle as a recommendation. I like it. I think that's cool. I didn't exactly tell you to get ready with that. So I'll go next, then you can have some more time to think. So for anybody that's joining in, we're just giving a few recommendations to help your own recommendation engines at home. And we're going to get into this conversation about rags in just a moment. But my song is with. Demetrios: Oh, my God. I've been listening to it because I didn't think that they had it on Spotify, but I found it this morning and I was so happy that they did. And it is Bill Evans and Chet Baker. Basically, their whole album, the legendary sessions, is just like, incredible. But the first song on that album is called Alone Together. And when Chet Baker starts playing his little trombone, my God, it is like you can feel emotion. You can touch it. That is what I would recommend. Demetrios: Anyone out there? I'll drop a link in the chat if you like it. The film or series. This fool, if you speak Spanish, it's even better. It is amazing series. Get that, do it. And as the rando thing, I've been having Rishi mushroom powder in my coffee in the mornings. I highly recommend it. All right, last one, let's get into your recommendations and then we'll get into this rag chat. Guillaume Marquis: So, yeah, I sucked a little bit. So for the song, I think I will give something like, because I'm french, I think you can hear it. So I will choose Get Lucky of Daft Punk and because I am a little bit sad of the end of their collaboration. So, yeah, just like, I cannot forget it. And it's a really good music. Like, miss them as a movie, maybe something like I really enjoy. So we have a lot of french movies that are really nice, but something more international maybe, and more mainstream. Jungle of Tarantino, that is really a good movie and really enjoy it. Guillaume Marquis: I watched it several times and still a good movie to watch. And random thing, maybe a city. A city to go to visit. I really enjoyed. It's hard to choose. Really hard to choose a place in general. Okay, Florence, like in Italy. Demetrios: There we go. Guillaume Marquis: Yeah, it's a really cool city to go. So if you have time, and even Sabrina, if you went to Europe soon, it's really a nice place to go. Demetrios: That is true. Sabrina is going to Europe soon. We're blowing up her spot right now. So hopefully Florence is on the list. I know that most people watching did not tune in to hearing the three of us just randomly give recommendations. We are here to talk more about retrieval augmented generation. But hopefully those recommendations help some of you all at home with your recommendation engines. And you're maybe using a little bit of a vector database in your recommendation engine building skills. Demetrios: Let's talk about this, though, man, because I think it would be nice if you can set the scene. What exactly are you working on? I know you've got virtual brain. Can you tell us a little bit about that so that we can know how you're doing rags? 
Guillaume Marquis: Because rag is like, I think the most famous word in the AI sphere at the moment. So, virtual brain, what we are building in particular is that we are building an AI assistant for knowledge workers. So we are not only building this next gen search bar to search content through documents, it's a tool for enterprises at enterprise grade that provide some easy way to interact with your knowledge. So basically, we create a tool that we connect to the world knowledge of the company. It could be whatever, like the drives, sharepoints, whatever knowledge you have, any kind of documents, and with that you will be able to perform tasks on your knowledge, such as like audit, RFP, due diligence. It's not only like everyone that is building rag or building a kind of search system through rag are always giving the same number. Is that like 20%? As a knowledge worker, you spend 20% of your time by searching information. And I think I heard this number so much time, and that's true, but it's not enough. Guillaume Marquis: Like the search bar, a lot of companies, like many companies, are working on how to search stuff for a long time, and it's always a subject. But the real pain and what we want to handle and what we are handling is deep work, is real tasks, is how to help these workers, to really help them as an assistant, not only on search bar, like as an assistant on real task, real added value tasks. So inside that, can you give us. Demetrios: An example of that? Is it like that? It pops up when it sees me working on notion and talking about or creating a PRD, and then it says, oh, this might be useful for your PRD because you were searching about that a week ago or whatever. Guillaume Marquis: For instance. So we are working with companies that have from 100 employees to several thousand employees. For instance, when you have to create a commercial proposal as a salesperson in a company, you have an history with a company, an history in this ecosystem, a history within this environment, and you have to capitalize on all this commercial proposition that you did in the past in your company, you can have thousands of propositions, you can have thousands of documents, you can have reporting from different departments, depending of the industry you are working on, and with that, with the tool. So you can ask question, you can capitalize on this document, and you can easily create new proposal by asking question, by interacting with the tool, to go deeply in this use case and to create something that is really relevant for your new use case. And that is using really the knowledge that you have in your company. And so it's not only like retrieve or just like find me as last proposition of this client. It's more like, okay, use x past proposals to create a new one. And that's a real challenge that is linked to our subject. Guillaume Marquis: It's because it's not only like retrieve one, two or even ten documents, it's about retrieving like hundred, 200, a lot of documents, a lot of information, and you have a real something to do with a lot of documents, a lot of context, a lot of information you have to manage. Demetrios: I have the million dollar question that I think is probably coming through everyone's head is like, you're retrieving so many documents, how are you evaluating your retrieval? Guillaume Marquis: That's definitely the $1 million question. It's a toss task to do, to be honest. To be fair. 
Currently what we are doing is that we monitor every task of the process, so we have the output of every task. On each task we use a scoring system to evaluate if it's relevant to the initial question or the initial task of the user. And we have a global scoring system on the whole system. So it's quite hard, it's a little bit empiric, but it works for now. And it really helps us to also improve over time all the tasks and all the processes that are done by the tool. Guillaume Marquis: So it's really important. And for instance, you have this kind of framework that is called the RAG triad. That is a way to evaluate RAG on the accuracy of the context you retrieve, on the link with the initial question, and so on, several parameters. And you can really have a first way to evaluate the quality of answers and the quality of everything on each step. Sabrina Aquino: I love it. Can you go more into the tech that you use for each one of these steps in the architecture? Guillaume Marquis: So the process, it starts at the moment we ingest documents, because basically it's hard to retrieve good documents, or retrieve documents in a proper way, if you don't parse them well. The dumb RAG, as I call it, is like, okay, you take a document, you divide it into chunks of text, and that's it. But you will definitely lose the context, the global context of the document, what the document in general is talking about. And you really need to do it properly and to keep this context. And that's a real challenge, because if you keep some noise, if you don't do that well, everything will be broken at the end. So technically, how it works: we have a proper system that we developed to ingest documents using technologies, open source technologies. We only exclusively use open source tools because of security aspects and stuff like that. Guillaume Marquis: That's why also we are using Qdrant, one of the important points on that. So we have a system, we are using this serverless stuff to ingest documents over time. We have also models that create tags on documents. So we use open source SLMs to tag documents, to enrich documents, also to create a new title, to create a summary of documents, to keep the context. When we divide the document, we keep the titles of paragraphs, the context inside paragraphs, and we link every piece of text to the others to keep the context after that, when we retrieve the document. Then there is the retrieval part. We have a hybrid search system. We are using Qdrant on the semantic part. Guillaume Marquis: So basically we are creating embeddings, we are storing them into Qdrant. We are performing similarity search to retrieve documents based on title, summary, filtering on tags, on the semantic context. And we have also some keyword search, but it's more for specific tasks, like when we know that we need a specific document, at some point we are searching for it with a keyword search. So it's a kind of hybrid system that is using a deterministic approach, with filtering on tags, and a probabilistic approach, selecting documents with this hybrid search, and doing a scoring system after that to get what is the most relevant document and to select how much content we will take from each document. It's a little bit techy, but it's really cool to create and we have a way to evolve it and to improve it. Demetrios: That's what we like around here, man. We want the techie stuff. That's what I think everybody signed up for. So that's very cool.
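A hedged sketch of that kind of hybrid retrieval is shown below: a semantic search in Qdrant pre-filtered on tags, fused with a simple keyword score over the stored chunk text. The collection name, payload fields, and fusion weights are illustrative assumptions, not VirtualBrain's implementation.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

def keyword_score(query: str, text: str) -> float:
    """Very small stand-in for a keyword search signal."""
    terms = {w.lower() for w in query.split() if len(w) > 3}
    return sum(t in text.lower() for t in terms) / max(len(terms), 1)

def retrieve(query_vector: list[float], query_text: str, required_tag: str, k: int = 20):
    # Deterministic part: pre-filter on tags. Probabilistic part: semantic similarity.
    hits = client.search(
        collection_name="documents",
        query_vector=query_vector,
        query_filter=models.Filter(
            must=[models.FieldCondition(key="tags", match=models.MatchValue(value=required_tag))]
        ),
        limit=k,
        with_payload=True,
    )
    # Fuse vector similarity with the keyword score, then rank; the weights are arbitrary here.
    return sorted(
        hits,
        key=lambda h: 0.7 * h.score + 0.3 * keyword_score(query_text, (h.payload or {}).get("text", "")),
        reverse=True,
    )
```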
Demetrios: One question that definitely comes up a lot when it comes to RAG, when you're ingesting documents and then retrieving documents and updating documents: how do you make sure that the documents that you are, let's say, I know there's probably a hypothetical HR scenario where the company has a certain policy and they say you can have European style holidays, you get like three months of holidays a year, or even French style holidays. Basically, you just don't work. And whenever you want, you can work, you don't work. And then all of a sudden a US company comes and takes it over and they say, no, you guys don't get holidays.

Demetrios: Even when you do get holidays, you're not working or you are working, and so you have to update all the HR documents, right? So now when you have this knowledge worker that is creating something, or when you have anyone that is getting help, like this copilot help, how do you make sure that the information that person is getting is the most up to date information possible?

Guillaume Marquis: That's a new $1 million question.

Demetrios: I'm coming with the hits today. I don't know what you were looking for.

Guillaume Marquis: That's a really good question. So basically you have several possibilities on that. First, you have these PowerPoint presentations that are a mess in the knowledge bases, and sometimes you just want to use the most up to date documents. So basically we can filter on the created_at and the date of the documents. Sometimes you want to also compare the evolution of the process over time. So that's another use case.

Guillaume Marquis: So during the ingestion we are analyzing if a date is inside the document, because sometimes in documentation you have like the date at the end of the document or at the beginning of the document. That's a first way to do it. We have the date of the creation of the document, but it's not a source of truth, because sometimes you created it after, or you duplicated it and the date is not the same, depending on if you are working on Windows, Microsoft, stuff like that. It's definitely a mess. And also we compare documents. So when we retrieve the documents and documents are really similar to each other, we keep it in mind and we try to give as much information as possible. Sometimes it's not possible, so it's not 100%, it's not bulletproof, but it's a real question. So it's a partial answer to your question, but that's how we are filtering and answering on this specific topic today.

Sabrina Aquino: Now I wonder, what was the most challenging part of building this RAG?

Guillaume Marquis: There are a lot of parts that are really challenging.

Sabrina Aquino: Challenging.

Guillaume Marquis: One of the challenging parts was the scalability of the system. We have clients that come with terabytes of data and want it to be parsed really fast, and so you have the ingestion, but even after that, the semantic search, even on a large data set, can be slow. And today ChatGPT answers really fast. So your users, even if the question is way more complicated to answer than a basic ChatGPT question, they want to have their answer in seconds. So you have also this challenge that you really have to take care of. So it's quite challenging and it's like this industrial supply chain. So when you upgrade something, you have to be sure that everything is working well on the other side. And that's a real challenge to handle.
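As a small illustration of the created_at filtering mentioned above, the Qdrant client can combine the semantic query with a payload range condition. The sketch below assumes the timestamp is stored as a Unix epoch number in the payload, which is just one possible convention; the collection name and placeholder vector are also illustrative.

```python
import time

from qdrant_client import QdrantClient
from qdrant_client.models import FieldCondition, Filter, Range

client = QdrantClient(url="http://localhost:6333")

# Placeholder query embedding; in practice this is the encoded user question.
query_vector = [0.0] * 768

# Keep only documents created in the last year, assuming `created_at` is stored
# in the payload as a Unix timestamp.
one_year_ago = time.time() - 365 * 24 * 3600
freshness_filter = Filter(
    must=[FieldCondition(key="created_at", range=Range(gte=one_year_ago))]
)

hits = client.search(
    collection_name="company_knowledge",  # hypothetical collection name
    query_vector=query_vector,
    query_filter=freshness_filter,
    limit=10,
)
```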
Guillaume Marquis: And we are still on it, because we are still evolving and getting more data. And at the end of the day, you have to be sure that everything is working well in terms of LLMs, but also in terms of search, and in terms of UX too, to give some insight to the user about what is working under the hood, to give them the possibility to wait a few seconds more, but starting to give them pieces of the answer.

Demetrios: Yeah, it's funny you say that, because I remember talking to somebody that was working at you.com and they were saying how there's like the actual time. So they were calling it something like perceived time and real, like actual time. So you as an end user, if you get asked a question, or maybe there's like a trivia quiz while the answer is coming up, then it seems like it's not actually taking as long as it is. Even if it takes 5 seconds, it's a little bit cooler. Or as you were mentioning, I remember reading some paper, I think, on how people are a lot less anxious if they see the words starting to pop up like that, and they see like, okay, it's not just I'm waiting and then the whole answer gets spit back out at me. It's like I see the answer forming as it is in real time. And so that can calm people's nerves too.

Guillaume Marquis: Yeah, definitely. The human brain is like marvelous on that. And you have a lot of stuff. Like, one of my favorites is the illusion of work. Do you know it? It's the total opposite. If you have something that seems difficult to do, adding more time of processing, the user will imagine that it's really a hard task to do. And so that's really funny.

Demetrios: So funny like that.

Guillaume Marquis: Yeah. Yes. It's the opposite of what you would think if you create a product, but that's real stuff. And sometimes just to show them that you are performing tough tasks in the background, it helps them to think, oh yes, my question was really like a complex question, like you have a lot of work to do. It's awkward, like, if you answer too fast, they will not trust the answer.

Guillaume Marquis: And it's the opposite if you answer too slow. You can have this: okay, but it should be dumb because it's really slow. So it's a dumb AI or stuff like that. So that's really funny. My co-founder actually was a product guy, so really focused on product, and he really loves this kind of stuff.

Demetrios: Great thought experiment, that's interesting.

Sabrina Aquino: And you mentioned that you chose Qdrant because it's open source, but now I wonder if there's also something to do with your need for something that's fast, that's scalable, and what other factors you took into consideration when choosing the vector DB.

Guillaume Marquis: Yes, so I told you that the scalability and the speed are like some of the most important points and tough parts to handle. And yes, definitely, because when you are building a complex RAG, you are not just performing one search. At some points you are doing it maybe like, you are splitting the question, doing several at the same time. And so it's like mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can handle a large amount of parallel requests. So you have really this need. And Qdrant was like an obvious choice. Actually, we did a benchmark, so we really tried several possibilities.

Demetrios: So tell me more. Yeah.
Guillaume Marquis: So we tried the classic Postgres pgvector, that is, I think we tried it for like 30 minutes, and we realized really fast that it was really not good for our use case. We tried Weaviate, we tried Milvus, we tried Qdrant, we tried a lot. We prefer to use open source because of security issues. We tried Pinecone initially, we were on Pinecone at the beginning of the company. And so the most important points: we have the speed of the tool, we have the scalability, we have also, maybe it's a little bit dumb to say that, but we have also the API. I remember using Pinecone and trying just to get all vectors and it was not possible somehow, and you have this dumb stuff that is sometimes really strange. And if you have a tool that is 100% made for your use case, with people that are working on it, really dedicated to that, and that are aligned with your vision of what the evolution of this is, I think it's like the best tool you can choose.

Demetrios: So one thing that I would love to hear about too, is when you're looking at your system and you're looking at just the product in general, what are some of the key metrics that you are constantly monitoring, and how do you know that you're hitting them or you're not? And then if you're not hitting them, what are some ways that you debug the situation?

Guillaume Marquis: By metrics you mean like usage metrics?

Demetrios: Or like, I'm more thinking on your whole tech setup and the quality of your RAG.

Guillaume Marquis: Basically we are focused on the industry of knowledge workers, and in particular on consultants. So we have some data sets of questions that we know should be answered well; we know the kind of outputs we should have. The metrics we are monitoring on our RAG are mostly the accuracy of the answer, the accuracy of sources, the number of hallucinations, which is sometimes really hard to manage. Actually our tool is sourcing everything. When you ask a question or when you perform a task, it gives you all the sources. But sometimes you can have a perfect answer and just like one number inside your answer that comes from nowhere, that is totally invented, and that's hard to catch. We are still working on that.

Guillaume Marquis: We are not the most advanced on this part. We just implemented a tool, I think you may know it, it's Langfuse. Do you know them? Langfuse?

Demetrios: No. Tell me more.

Guillaume Marquis: Langfuse is like a tool that is made to monitor tasks on your RAG, so you can easily log stuff. It's also an open source tool, you can easily self host it and you can monitor every part of your RAG. You can create data sets based on questions and answers that have been asked, or some you created by yourself. And you can easily perform checks of your RAG, just to try it out and to give it a final score, and to be able to monitor everything and to give a global score based on your data set of your RAG. So we are currently implementing it. I give their name because the work they did is wonderful, and I really enjoyed it. It's one of the most important points, to not be blind. I mean, in general, in terms of business, you have to follow metrics.

Guillaume Marquis: Numbers cannot lie. Humans lie, but not numbers. But after that you have to interpret numbers. So that's also another tough part. But it's important to have the good metrics and to be able to know if you are evolving it, if you are improving your system and if everything is working. Basically the different stuff we are doing, we are not like.
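Tooling aside, the kind of check Guillaume describes boils down to running the pipeline over a fixed set of questions and tracking scores over time. The sketch below is a deliberately generic harness, not Langfuse's API: `rag_pipeline` and `score_answer` are placeholders for your own retrieval/generation code and judging logic, and the sample questions are invented for illustration.

```python
from statistics import mean

# Invented examples; a real dataset would hold questions your RAG system
# should be able to answer, plus reference answers.
EVAL_DATASET = [
    {"question": "What is our standard payment term?", "reference": "30 days"},
    {"question": "Which template do we use for RFP responses?", "reference": "rfp-2023.docx"},
]

def evaluate(rag_pipeline, score_answer):
    """Run every eval question through the pipeline and aggregate the scores.

    rag_pipeline(question) -> (answer, sources)
    score_answer(answer, reference) -> float in [0, 1]
    """
    results = []
    for item in EVAL_DATASET:
        answer, sources = rag_pipeline(item["question"])
        results.append({
            "question": item["question"],
            "score": score_answer(answer, item["reference"]),
            "has_sources": bool(sources),  # cheap proxy for "the answer is grounded"
        })
    return {
        "mean_score": mean(r["score"] for r in results),
        "grounded_ratio": mean(1.0 if r["has_sources"] else 0.0 for r in results),
        "details": results,
    }
```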
Demetrios: Are you collecting human feedback for the hallucinations part?

Guillaume Marquis: We try, but humans are not like giving a lot of feedback.

Demetrios: It's hard. That's why it's really hard to get the end user to do anything, even just like the thumbs up, thumbs down can be difficult.

Guillaume Marquis: We tried several stuff. We have the thumbs up, thumbs down, we tried stars. You ask for real feedback, to write something: hey, please help us. Human feedback is quite poor, so we are not counting on that.

Demetrios: I think the hard part about it, at least me as an end user, whenever I've been using these, is like the thumbs down, or, I've even seen it go as far as, like, you have more than just one emoji. Like, maybe you have the thumbs up, you have the thumbs down. You have, like, a mushroom emoji. So it's, like, hallucinated. And you have, like... what was the other one that I saw that I thought was pretty? I can't remember it right now, but.

Guillaume Marquis: I never saw the mushroom. But that's quite fun.

Demetrios: Yeah, it's good. It's not just wrong. It's absolutely, like, way off the mark. And what I think is interesting there, when I've been the end user, is that it's a little bit just like, I don't have time to explain the nuances as to why this is not useful. I really would have to sit down and almost, like, write a book or at least an essay on, yeah, this is kind of useful, but it's like a two out of a five, not a four out of a five. And so that's why I gave it the thumbs down. Or there was this part that is good and that part's bad. And so it's just like the ways, or the nuances, that you have to go into as the end user when you're trying to evaluate it. I think it's much better, and what I've seen a lot of people do, to just expect to do that in house. After the fact, you get all the information back, you see, on certain metrics, like, oh, did this person commit the code? Then that's a good signal that it's useful. But then you can also look at it, or did this person copy paste it? Et cetera, et cetera. And how can we see if they didn't copy paste that, or if they didn't take that next action that we would expect them to take? Why not? And let's try and dig into what we can do to make that better.

Guillaume Marquis: Yes. We can also evaluate the next questions, like the following questions. That's a great point. We are not currently doing it automatically, but if you see that a user just answers, no, it's not true, or you should rephrase it, or be more concise, or these kinds of follow-up questions, you know that the first answer was not as relevant as it should have been.

Demetrios: That's such a great point. Or you do some sentiment analysis and it slowly is getting more and more angry.

Guillaume Marquis: Yeah, that's true. That's a good point also.

Demetrios: Yeah, this one went downhill, so. All right, cool. I think that's it. Sabrina, any last questions from your side?

Sabrina Aquino: Yeah, I think I'm just very interested to know, from a user perspective of Virtual Brain, how are traditional models worse, or what kind of errors does Virtual Brain fix in its structure, that users find it better that way?

Guillaume Marquis: I think in particular, so we talked about hallucinations, I think it's like one of the main issues people have with classic LLMs.
We really think that when you create a one-size-fits-all tool, you have some holes, because you have to manage different approaches, like when you are creating Copilot as Microsoft, you have to understand the use cases of everyone. And I really think so. Our AI is not trained to write you a speech based on Shakespeare and with the style of Martin Luther King. It's not the purpose of the tool. So if you ask something that is out of the box, it will just say, like, okay, I don't know how to answer that. And that's an important point. That's a feature by itself, to be able to not go outside of the box. And so we made this choice of putting the AI inside the box, the box that contains basically all the knowledge of your company, all the retrieved knowledge.

Guillaume Marquis: Actually we do not have a lot of hallucinations. I will not say like 0%, but it's close to zero. Because we analyze the question, we put the AI in a box, we force the AI to think about the answer before answering, and we analyze also the answer to know if the answer is relevant. And that's an important point that we are fixing, and we fix it for our users, and we prefer, yes, to give no answer rather than a bad answer.

Sabrina Aquino: Absolutely. And there are people who think, like, hey, this is RAG, it's not going to hallucinate, and that's not the case at all. It will hallucinate less inside a certain context window that you provide. Right. But it still has a possibility. So minimizing that as much as possible is very valuable.

Demetrios: So good. Well, I think with that, our time here is coming to an end. I really appreciate this. I encourage everyone to go and have a little look at Virtual Brain. We'll drop a link in the comments in case anyone wants to sign up for free.

Guillaume Marquis: So you can try it for free.

Demetrios: Even better. Look at that, Christmas came early. Well, let's go have some fun, play around with it. And I can't promise, but I may give you some feedback, I may give you some evaluation metrics if it's hallucinating.

Guillaume Marquis: Oh, if I see some thumbs up or thumbs down, I will know that it's you.

Demetrios: Yeah, cool. Exactly. All right, folks, that's about it for today. We will see you all later. As a reminder, don't get lost in vector space. This has been another Vector Space Talks. And if you want to come on here and chat with us, feel free to reach out. See ya.

Guillaume Marquis: Cool.

Sabrina Aquino: See you guys. Thank you. Bye.
qdrant-landing/content/blog/when-music-just-doesnt-match-our-vibe-can-ai-help-filip-makraduli-vector-space-talks-003.md
--- draft: false title: When music just doesn't match our vibe, can AI help? - Filip Makraduli | Vector Space Talks slug: human-language-ai-models short_description: Filip Makraduli discusses using AI to create personalized music recommendations based on user mood and vibe descriptions. description: Filip Makraduli discusses using human language and AI to capture music vibes, encoding text with sentence transformers, generating recommendations through vector spaces, integrating Streamlit and Spotify API, and future improvements for AI-powered music recommendations. preview_image: /blog/from_cms/filip-makraduli-cropped.png date: 2024-01-09T10:44:20.559Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Database - LLM Recommendation System --- > *"Was it possible to somehow maybe find a way to transfer this feeling that we have this vibe and get the help of AI to understand what exactly we need at that moment in terms of songs?”*\ > -- Filip Makraduli > Imagine if the recommendation system could understand spoken instructions or hummed melodies. This would greatly impact the user experience and accuracy of the recommendations. Filip Makraduli, an electrical engineering graduate from Skopje, Macedonia, expanded his academic horizons with a Master's in Biomedical Data Science from Imperial College London. Currently a part of the Digital and Technology team at Marks and Spencer (M&S), he delves into retail data science, contributing to various ML and AI projects. His expertise spans causal ML, XGBoost models, NLP, and generative AI, with a current focus on improving outfit recommendation systems. Filip is not only professionally engaged but also passionate about tech startups, entrepreneurship, and ML research, evident in his interest in Qdrant, a startup he admires. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/6a517GfyUQLuXwFRxvwtp5?si=ywXPY_1RRU-qsMt9qrRS6w), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/WIBtZa7mcCs).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/WIBtZa7mcCs?si=szfeeuIAZ5LEgVI3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/When-music-just-doesnt-match-our-vibe--can-AI-help----Filip-Makraduli--Vector-Space-Talks-003-e2bskcq/a-aajslv4" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Take a look at the song vibe recommender system created by Filip Makraduli. Find out how it works! Filip discusses how AI can assist in finding the perfect songs for any mood. He takes us through his unique approach, using human language and AI models to capture the essence of a song and generate personalized recommendations. Here are 5 key things you'll learn from this video: 1. How AI can help us understand and capture the vibe and feeling of a song 2. The use of language to transfer the experience and feeling of a song 3. The role of data sets and descriptions in building unconventional song recommendation systems 4. The importance of encoding text and using sentence transformers to generate song embeddings 5. 
How vector spaces and cosine similarity search are used to generate song recommendations

> Fun Fact: Filip actually created a Spotify playlist in real-time during the video, based on the vibe and mood Demetrios described, showing just how powerful and interactive this AI music recommendation system can be!
>

## Show Notes:

01:25 Using AI to capture desired music vibes.\
06:17 Faster and accurate model.\
10:07 Sentence embedding model maps song descriptions.\
14:32 Improving recommendations, user personalization in music.\
15:49 Qdrant Python client creates user recommendations.\
21:26 Questions about getting better embeddings for songs.\
25:04 Contextual information for personalized walking recommendations.\
26:00 Need predictions, voice input, and music options.

## More Quotes from Filip:

*"When you log in with Spotify, you could get recommendations related to your taste on Spotify or on whatever app you listen to your music on.”*\
-- Filip Makraduli

*"Once the user writes a query and the query mentions, like some kind of a mood, for example, I feel happy and it's a sunny day and so on, you would get the similarity to the song that has this kind of language explanations and language intricacies in its description.”*\
-- Filip Makraduli

*"I've explored Qdrant and as I said with Spotify web API there are a lot of things to be done with these specific user-created recommendations.”*\
-- Filip Makraduli

## Transcript:

Demetrios: So for those who do not know, you are going to be talking to us about when the music we listen to does not match our vibe. And can we get AI to help us on that? And you're currently working as a data scientist at Marks and Spencer. I know you got some slides to share, right? So I'll let you share your screen. We can kick off the slides and then we'll have a little presentation and I'll be back on to answer some questions. And if Nils is still around at the end, which I don't think he will be able to hang around, but we'll see, we can pull him back on and have a little discussion at the end.

Filip Makraduli: That's. That's great. All right, cool. I'll share my screen.

Demetrios: Right on.

Filip Makraduli: Yeah.

Demetrios: There we go.

Filip Makraduli: Yeah. So I had to use this slide because it was really well done as an introductory slide. Thank you. Yeah. Thank you also for making it so. Yeah, the idea was, and kind of the inspiration with music, we all listen to it. It's part of our lives in many ways. Sometimes it's like the gym.

Filip Makraduli: We're ready to go, we're all hyped up, ready to do a workout, and then we click play. But the music and the playlist we get, it's just not what exactly we're looking for at that point. Or if we try to work for a few hours and try to get concentrated and try to code for hours, we can do the same and then we click play, but it's not what we're looking for again. So my inspiration was here. Was it possible to somehow maybe find a way to transfer this feeling that we have, this vibe, and get the help of AI to understand what exactly we need at that moment in terms of songs. So the obvious first question is how do we even capture a vibe and feel of a song? So initially, one approach that's popular and that works quite well is basically using a data set that has a lot of features. So Spotify has one data set like this and there are many other open source ones which include different features like loudness, key, tempo, different kinds of details related to the acoustics, the melody and so on. And this would work.
Filip Makraduli: And this is kind of a way that a lot of song recommendation systems are built. However, what I wanted to do was maybe try a different approach in a way. Try to have a more unconventional recommender system, let's say. So what I did here was I tried to concentrate just on language. So my idea was, okay, is it possible to use human language to transfer this experience, this feeling that we have, and just use that and try to maybe encapsulate these features of songs. And instead of having a data set, just have descriptions of songs or sentences that explain different aspects of a song. So, as I said, this is a bit of a less traditional approach, and it's more of kind of testing the waters, but it worked to a decent extent. So what I did was, first I created a data set where I queried a large language model.

Filip Makraduli: So I tried with Llama and ChatGPT, both. And the idea was to ask targeted questions, for example, like, what movie character does this song make you feel like? Or what's the tempo like? So, different questions that would help us understand maybe in what situation we would listen to this song, how it will make us feel, and so on. And the idea was, as I said, again, to only use song names as queries for this large language model. So not have the full data sets with multiple features, but just the song name, and kind of use this pretrained ability of all these LLMs to get this info that I was looking for. So an example of the generated data was this. So this song called Deep Sea Creature. And we have, like, a small description of the song. So it says a heavy, dark, mysterious vibe.

Filip Makraduli: It will make you feel like you're descending into the unknown and so on. So a bit of a darker choice here, but that's the general idea. So trying to maybe do a bit of prompt engineering in a way to get the right features of a song, but through human language. So that was the first step. So the next step was how to encode this text. So all of this kind of querying reminds me of sentences. And this led me to sentence transformers and Sentence-BERT. And the usual issue with kind of doing this sentence similarity in the past was this, what I have highlighted here.

Filip Makraduli: So this is actually a quote from a paper that Nils published a few years ago. So, basically, the way that this similarity was done was using cross encoders in the past, and that worked well, but it was really slow and unscalable. So Nils and his colleague created this kind of model, which helped scale this and make this a lot quicker, but also keep a lot of the accuracy. So BERT and RoBERTa were used, but they were not, as I said, quite scalable or useful for larger applications. So that's how Sentence-BERT was created. So the idea here was that there would be, like, a Siamese network that would train the model, so that there could be, like, two BERT models, and then the training would be done using these zero, one and two tags, where kind of the sentences would be compared, whether there is entailment, neutrality or contradiction. So how similar these sentences are to each other. And by training a model like this and doing mean pooling, in the end, the model performed quite well and was able to kind of encapsulate these language intricacies of sentences.

Filip Makraduli: So I decided to use and try out sentence transformers for my use case, and that was the encoding bit. So we have the model, we encode the text, and we have the embedding.
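For readers who want to see what that encoding bit looks like in code, here is a minimal sentence-transformers sketch. The model choice is illustrative (Filip doesn't name the exact checkpoint he used), and the second description is made up for the example.

```python
from sentence_transformers import SentenceTransformer

# A small, commonly used sentence-transformers checkpoint (384-dimensional output).
model = SentenceTransformer("all-MiniLM-L6-v2")

descriptions = [
    "A heavy, dark, mysterious vibe. It will make you feel like you're "
    "descending into the unknown.",                           # from the Deep Sea Creature example
    "A light, upbeat track for sunny walks and good moods.",  # invented example
]

# encode() returns one dense vector per description.
embeddings = model.encode(descriptions)
print(embeddings.shape)  # (2, 384)
```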
Filip Makraduli: So now the question is, how do we actually generate the recommendations? How is the similarity performed? So the similarity was done using vector spaces and cosine similarity search here. There were multiple ways of doing this. First, I tried things with a flat index, and I tried Qdrant and I tried FAISS. So I've worked with both. And with the flat index, it was good. It works well.

Filip Makraduli: It's quick for a small number of examples, a small number of songs, but there is an issue when scaling. So once the vector indices get bigger, there might be a problem. So one popular kind of index architecture is this one here on the left. So hierarchical navigable small world graphs. So the idea here is that you wouldn't have to kind of go through all of the examples, but search through the examples in different layers, so that the search for similarity is quicker. And this is a really popular approach. And Qdrant has done a really good customizable version of this, which is quite useful, I think, for very large scales of application. And this graph here illustrates kind of well what the idea is.

Filip Makraduli: So there is the sentence, in this example it's like a striped blue shirt made from cotton, and then there is the network or the encoder. So in my case, this sentence is the song description, the neural network is the sentence transformer in my case. And then these embeddings are generated, which are then mapped into this vector space, and then this vector space is queried and the cosine similarity is found, and the recommendations are generated in this way, so that once the user writes a query and the query mentions, like some kind of a mood, for example, I feel happy and it's a sunny day and so on, you would get the similarity to the song that has this kind of language explanations and language intricacies in its description. And there are a lot of ways of doing this, as Nils mentioned, especially with different embedding models and doing context related search. So this is an interesting area for improvement, even in my use case. And the quick screenshot looks like this. So for example, the mood that the user wrote: it's a bit rainy, but I feel like I need a long walk in London.

Filip Makraduli: And these are the top five suggested songs. This is also available on Streamlit. In the end I'll share links of everything, and also after that you can click create a Spotify playlist and this playlist will be saved in your Spotify account. As you can see here, it says playlist generated earlier today. So yeah, I tried this, it worked. I will try a live demo a bit later. Hopefully it works again. But this is in beta currently, so you won't be able to try it at home, because Spotify needs to approve my app first and go through that process so that then I can do this part fully.

Filip Makraduli: And the front end bit, as I mentioned, was done in Streamlit. So why Streamlit? I like the caching bit. So of course this general part where it's really easy and quick to do a lot of data dashboarding and data applications to test out models, that's quite nice. But these caching options that they have help a lot with, like, loading models from Hugging Face, or if you're loading models from somewhere, or if you're loading different databases. So if you're combining models and data. In my case I had a binary file of the index and also the model. So it was quite useful and quick to do these things and to be able to try things out quickly.
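The caching Filip refers to can be done with Streamlit's built-in decorators; recent Streamlit versions expose `st.cache_resource` for heavyweight objects like models and `st.cache_data` for plain data. A rough sketch, assuming a hypothetical `songs.csv` with names and descriptions, might look like this.

```python
import pandas as pd
import streamlit as st
from sentence_transformers import SentenceTransformer

@st.cache_resource  # load the embedding model once per server process, not on every rerun
def load_model():
    return SentenceTransformer("all-MiniLM-L6-v2")

@st.cache_data  # cache plain data artifacts between reruns
def load_songs():
    return pd.read_csv("songs.csv")  # hypothetical file with song names and descriptions

model = load_model()
songs = load_songs()

mood = st.text_input("Describe your mood")
if mood:
    query_vector = model.encode(mood)
    st.write("Mood encoded, ready for similarity search...")
```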
Filip Makraduli: So this is kind of the step by step outline of everything I've mentioned and the whole project.

Filip Makraduli: So the first step is encoding these descriptions into embeddings. Then these vector embeddings are mapped into a vector space. Examples here with how I've used Qdrant for this, which was quite nice. I feel like the developer experience is really good for scalable purposes. It's really useful. So if the number of songs keeps increasing, it's quite good. And then the query finds the most similar embeddings. The front end is done with Streamlit, and the Spotify API is used to save the playlists on the Spotify account.

Filip Makraduli: All of these steps can be improved and tweaked in certain ways, and I will talk a bit about that too. So a lot more to be done. So now there are 2000 songs, but as I've mentioned, in this vector space, the more songs that are there, the more representative these recommendations would be. So this is something I'm currently exploring and doing: generating, filtering and user specific personalization. So once maybe you log in with Spotify, you could get recommendations related to your taste on Spotify or on whatever app you listen to your music on. And referring to the talk that Nils had, there is a lot of potential for better models and embeddings and embedding models. So also the contrastive learning bits, or the context-aware querying, that could be useful too. And a vector database, because currently I'm using a binary file.

Filip Makraduli: But I've explored Qdrant and as I said, with the Spotify web API there are a lot of things to be done with these specific user-created recommendations. So with Qdrant, the Python client is quite good. The getting started helps a lot. So I wrote a bit of code. I think for production use cases it's really great. So for my use case here, as you can see on the right, I just read the text from a column and then I encode with the model. So the sentence transformer is the model that I encode with. And there are these collections, as they're called in Qdrant, that are kind of like these vector spaces that you can create, and you can also do different things with them, and I think some of the more helpful ones are the payload one and the batch one.

Filip Makraduli: So you can batch things in terms of how many vectors will go to the server per single request. And also the payload helps if you want to add extra context. So maybe I want to filter by genres. I can add useful information to the vector embedding. So this is quite a cool feature that I'm planning on using. And another potential way of doing this and kind of combining things is using audio waveforms too, lyrics and descriptions, and combining all of this as embeddings and then going through a similar process. So that's something that I'm looking to do also. And yeah, you also might have noticed that I'm a data scientist at Marks and Spencer and I just wanted to say that there is a lot of interesting ML and data related stuff going on there.

Filip Makraduli: So a lot of teams that work on very interesting use cases, like in recommender systems, personalization of offers, different stuff about forecasting. There is a lot going on with causal ML and yeah, the digital and tech department is quite well developed and I think it's a fun place to explore if you're interested in retail data science use cases. So yeah, thank you for your attention. I'll try the demo. So this is the QR code with the repo and all the useful links. You can contact me on LinkedIn.
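A condensed sketch of the Qdrant part described above, creating a collection, upserting embeddings together with a payload, and querying with an encoded mood, could look like the following. The song entries and collection name are illustrative, and the `:memory:` mode is used only so the snippet runs without a server.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative entries: (name, genre, LLM-generated description).
songs = [
    ("Deep Sea Creature", "rock", "A heavy, dark, mysterious vibe, like descending into the unknown."),
    ("Sunny Morning", "pop", "A light, upbeat track for good moods and long walks."),
]

client = QdrantClient(":memory:")  # in-process mode; point at a server or Qdrant Cloud in production
client.recreate_collection(
    collection_name="songs",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

client.upsert(
    collection_name="songs",
    points=[
        PointStruct(
            id=i,
            vector=model.encode(description).tolist(),
            payload={"name": name, "genre": genre},  # payload enables filtering, e.g. by genre
        )
        for i, (name, genre, description) in enumerate(songs)
    ],
)

hits = client.search(
    collection_name="songs",
    query_vector=model.encode("it's a bit rainy, but I feel like a long walk in London").tolist(),
    limit=5,
)
for hit in hits:
    print(hit.payload["name"], round(hit.score, 3))
```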
Filip Makraduli: This is the screenshot of the repo and you have the link in the QR code. The name of the repo is Song Vibe.

Filip Makraduli: A friend of mine said that that wasn't a great name for a repo. Maybe he was right. But yeah, here we are. I'll just try to do the demo quickly and then we can step back to the questions.

Demetrios: I love it, dude, I got to say, when you said you can just automatically create the Spotify playlist, that made me go, oh, yes.

Filip Makraduli: Let's see if it works locally. Do you have any suggestion, what mood are you in?

Demetrios: I was hoping you would ask me, man. I am in a bit of an esoteric mood and I want female, kind of like Gaelic voices, but not Gaelic music, just Gaelic voices and lots of harmonies, heavy harmonies.

Filip Makraduli: Also.

Demetrios: You didn't realize you're asking a musician. Let's see what we got.

Filip Makraduli: Let's see if this works with 2000 songs. Okay, so these are the results. Okay, yeah, you'd have the playlist. Let's see.

Demetrios: Yeah, can you make the playlist public and then I'll just go find it right now. Here we go.

Filip Makraduli: Let's see. Okay, yeah, open in Spotify. Playlist created now. Okay, cool. I can also rename it. What do you want to name the playlist?

Demetrios: Esoteric Gaelic Harmonies. That's what I think we got to go with. AI. Well, I mean, maybe we could just put maybe in parentheses.

Filip Makraduli: Yeah. So I'll share this later with you. Excellent. But yeah, basically that was it.

Demetrios: It worked. Ten out of ten for it working. That is also very cool.

Filip Makraduli: Live demo working. That's good. So now doing the infinite screen, which I have stopped now.

Demetrios: Yeah, classic, dude. Well, I've got some questions coming through and the chat has been active too. So I'll ask a few of the questions in the chat for a minute. But before I ask those questions in the chat, one thing that I was thinking about when you were talking about how, like, the next step is getting better embeddings. And so was there a reason that you just went with the song title? And then did you check, you said there was 2000 songs, or how many songs? So did you do anything to check the output of the descriptions of these songs?

Filip Makraduli: Yeah, so I didn't do like a systematic testing in terms of, like, oh yeah, the output is structured in this way. But yeah, I checked it roughly, went through a few songs, and they seemed like, I mean, of course you could add more info, but they seemed okay.

Demetrios: Awesome. Yeah. So that kind of goes into one of the questions that mornie's asking. Let me see. Are you going to team this up with other methods, like collaborative filtering, content embeddings and stuff like that?

Filip Makraduli: Yeah, I was thinking about these different kinds of styles, but I feel like I want to first try different things related to embeddings and language, just because I feel like with the other things, with the other ways of doing these recommendations, other companies and other solutions have done a really great job there. So I wanted to try something different to see whether that could work as well, or maybe to a similar degree. So that's why I went towards this approach rather than collaborative filtering.

Demetrios: Yeah, it kind of felt like you wanted to test the boundaries and see if something like this, which seems a little far-fetched, is actually possible. And it seems like I would give it a yes.
Filip Makraduli: It wasn't that far-fetched, actually, once you see it working.

Demetrios: Yeah, totally. Another question that's coming through is asking: is it possible to merge the current mood, so the vibe that you're looking for, with your musical preferences?

Filip Makraduli: Yeah. So I was thinking of that when we were doing this, the playlist creation that I did for you. There is a way to get your top ten songs or your other playlists and so on from Spotify. So my idea of kind of capturing this added element was through Spotify like that. But of course it could be that you could enter that in your own profile in the app or so on. So one idea would be how would you capture those preferences of the user once you have the user there. So you'd need some data on the preferences of the user. So that's the problem. But of course it is possible.

Demetrios: You know what I'd love? Like in your example, you put that, I feel like going for a walk, or it's raining, but I still feel like going for a long walk in London. Right. You could probably just get that information from me, like what is the weather around me, where am I located, all that kind of stuff. So I don't have to give you that context. You just add those kind of contextual things, especially weather. And I get the feeling that that would be another unlock too. Unless you're like, you are the exact opposite of a sunny day on a sunny day. And it's like, why does it keep playing this happy music? I told you I was sad.

Filip Makraduli: Yeah. You're predicting not just the songs, but the mood also.

Demetrios: Yeah, totally.

Filip Makraduli: You don't have to type anything, just open the website and you get everything.

Demetrios: Exactly. Yeah. Give me a few predictions just right off the bat and then maybe later we can figure it out. The other thing that I was thinking could be a nice add on. I mean, the infinite feature request. I don't think you realized you were going to get so many feature requests from me, but let it be known that if you come on here and I like your app, you'll probably get some feature requests from me. So I was thinking about how it would be great if I could just talk to it instead of typing it in, right? And I could just explain my mood or explain my feeling and even top that off with a few melodies that are going on in my head, or a few singers or songwriters or songs that I really want, something like this, but not this song, and then also add that kind of thing, do the.

Filip Makraduli: Humming sound a bit and you play your melody and then you get.

Demetrios: Except I hum out of tune, so I don't think that would work very well. I'd get a lot of random songs, that's for sure. It would probably be just about as accurate as your recommendation engine is right now. Yeah. Well, this is awesome, man. I really appreciate you coming on here. I'm just going to make sure that there are no other questions that came through the chat. No, looks like we're good.

Demetrios: And for everyone out there that is listening, if you want to come on and talk about anything cool that you have built with Qdrant, or how you're using Qdrant, or different ways that you would like Qdrant to be better, or things that you enjoy, whatever it may be, we'd love to have you on here. And I think that is it. We're going to call it a day for the Vector Space Talks, number two. We'll see you all later. Filip, thanks so much for coming on.
qdrant-landing/content/brand-resources/_index.md
--- title: brand-resources description: brand-resources build: render: always cascade: - build: list: local publishResources: false render: never ---
qdrant-landing/content/brand-resources/brand-resources-content.md
--- logo: title: Our Logo description: "The Qdrant logo represents a paramount expression of our core brand identity. With consistent placement, sizing, clear space, and color usage, our logo affirms its recognition across all platforms." logoCards: - id: 0 logo: src: /img/brand-resources-logos/logo.svg alt: Logo Full Color title: Logo Full Color link: url: /img/brand-resources-logos/logo.svg text: Download - id: 1 logo: src: /img/brand-resources-logos/logo-black.svg alt: Logo Black title: Logo Black link: url: /img/brand-resources-logos/logo-black.svg text: Download - id: 2 logo: src: /img/brand-resources-logos/logo-white.svg alt: Logo White title: Logo White link: url: /img/brand-resources-logos/logo-white.svg text: Download logomarkTitle: Logomark logomarkCards: - id: 0 logo: src: /img/brand-resources-logos/logomark.svg alt: Logomark Full Color title: Logomark Full Color link: url: /img/brand-resources-logos/logomark.svg text: Download - id: 1 logo: src: /img/brand-resources-logos/logomark-black.svg alt: Logomark Black title: Logomark Black link: url: /img/brand-resources-logos/logomark-black.svg text: Download - id: 2 logo: src: /img/brand-resources-logos/logomark-white.svg alt: Logomark White title: Logomark White link: url: /img/brand-resources-logos/logomark-white.svg text: Download colors: title: Colors description: Our brand colors play a crucial role in maintaining a cohesive visual identity. The careful balance of these colors ensures a consistent and impactful representation of Qdrant, reinforcing our commitment to excellence and precision in every aspect of our work. cards: - id: 0 name: Amaranth type: HEX code: "DC244C" - id: 1 name: Blue type: HEX code: "2F6FF0" - id: 2 name: Violet type: HEX code: "8547FF" - id: 3 name: Teal type: HEX code: "038585" - id: 4 name: Black type: HEX code: "090E1A" - id: 5 name: White type: HEX code: "FFFFFF" typography: title: Typography description: Main typography is Satoshi, this is employed for both UI and marketing purposes. Headlines are set in Bold (600), while text is rendered in Medium (500). example: AaBb specimen: "ABCDEFGHIJKLMNOPQRSTUVWXYZ<br>abcdefghijklmnopqrstuvwxyz<br>0123456789 !@#$%^&*()" link: url: https://api.fontshare.com/v2/fonts/download/satoshi text: Download trademarks: title: Trademarks description: All features associated with the Qdrant brand are safeguarded by relevant trademark, copyright, and intellectual property regulations. Utilization of the Qdrant trademark must adhere to the specified Qdrant Trademark Standards for Use.<br><br>Should you require clarification or seek permission to utilize these resources, feel free to reach out to us at link: url: "mailto:info@qdrant.com" text: info@qdrant.com. sitemapExclude: true ---
qdrant-landing/content/brand-resources/brand-resources-hero.md
--- title: Qdrant Brand Resources buttons: - id: 0 url: "#logo" text: Logo - id: 1 url: "#colors" text: Colors - id: 2 url: "#typography" text: Typography - id: 3 url: "#trademarks" text: Trademarks sitemapExclude: true ---
qdrant-landing/content/community/_index.md
--- title: Community description: Community build: render: always cascade: - build: list: local publishResources: false render: never ---
qdrant-landing/content/community/community-features.md
--- title: Discover our Programs resources: - id: 0 title: Qdrant Stars description: Qdrant Stars are our top contributors, organizers, and evangelists. Learn more about how you can become a Star. link: text: Learn More url: /blog/qdrant-stars-announcement/ image: src: /img/community-features/qdrant-stars.svg alt: Avatar - id: 1 title: Discord description: Chat in real-time with the Qdrant team and community members. link: text: Join our Discord url: https://discord.gg/qdrant image: src: /img/community-features/discord.svg alt: Avatar - id: 2 title: Community Blog description: Learn all the latest tips and tricks in the AI space through our community blog. link: text: Visit our Blog url: /blog/ image: src: /img/community-features/community-blog.svg alt: Avatar - id: 3 title: Vector Space Talks description: Weekly tech talks with Qdrant users and industry experts. link: text: Learn More url: https://www.youtube.com/watch?v=4aUq5VnR_VI&list=PL9IXkWSmb36_eANzd_sKeQ3tXbFiEGEWn&pp=iAQB image: src: /img/community-features/vector-space-talks.svg alt: Avatar features: - id: 0 icon: src: /icons/outline/documentation-blue.svg alt: Documentation title: Documentation description: Docs carefully crafted to support developers and decision-makers learning about Qdrant features. link: text: Read More url: /documentation/ - id: 1 icon: src: /icons/outline/guide-blue.svg alt: Guide title: Contributors Guide description: Whatever your strengths are, we got you covered. Learn more about how to contribute to Qdrant. link: text: Learn More url: https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md - id: 2 icon: src: /icons/outline/handshake-blue.svg alt: Partners title: Partners description: Technology partners and applications that support Qdrant. link: text: Learn More url: /partners/ - id: 3 icon: src: /icons/outline/mail-blue.svg alt: Newsletter title: Newsletter description: Stay up to date with all the latest Qdrant news link: text: Learn More url: /subscribe/ sitemapExclude: true ---
qdrant-landing/content/community/community-hero.md
--- title: Welcome to the Qdrant Community description: Connect with over 30,000 community members, get access to educational resources, and stay up to date on all news and discussions about Qdrant and the vector database space. image: src: /img/community-hero.svg srcMobile: /img/mobile/community-hero.svg alt: Community button: text: Join our Discord url: https://discord.gg/qdrant about: Get access to educational resources, and stay up to date on all news and discussions about Qdrant and the vector database space. sitemapExclude: true ---
qdrant-landing/content/community/community-testimonials.md
--- title: Love from our community testimonials: - id: 0 name: Owen Colegrove nickname: "@ocolegro" avatar: src: /img/customers/owen-colegrove.svg alt: Avatar text: qurant has been amazing! - id: 1 name: Darren nickname: "@darrenangle" avatar: src: /img/customers/darren.svg alt: Avatar text: qdrant is so fast I'm using Rust for all future projects goodnight everyone - id: 2 name: Greg Schoeninger nickname: "@gregschoeninger" avatar: src: /img/customers/greg-schoeninger.svg alt: Avatar text: Indexing millions of embeddings into <span>@qdrant_engine</span> has been the smoothest experience I've had so far with a vector db. Team Rustacian all the way &#129408; - id: 3 name: Ifioravanti nickname: "@ivanfioravanti" avatar: src: /img/customers/ifioravanti.svg alt: Avatar text: <span>@qdrant_engine</span> is ultra super powerful! Combine it to <span>@LangChainAI</span> and you have a super productivity boost for your AI projects &#9193;&#9193;&#9193; - id: 4 name: sengpt nickname: "@sengpt" avatar: src: /img/customers/sengpt.svg alt: Avatar text: Thank you, Qdrant is awesome - id: 4 name: Owen Colegrove nickname: "@ocolegro" avatar: src: /img/customers/owen-colegrove.svg alt: Avatar text: that sounds good to me, big fan of qdrant. sitemapExclude: true ---
qdrant-landing/content/contact-hybrid-cloud/_index.md
--- title: Qdrant Hybrid Cloud salesTitle: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud. cards: - id: 0 icon: /icons/outline/separate-blue.svg title: Deployment Flexibility description: Use your existing infrastructure, whether it be on cloud platforms, on-premise setups, or even at edge locations. - id: 1 icon: /icons/outline/money-growth-blue.svg title: Unmatched Cost Advantage description: Maximum deployment flexibility to leverage the best available resources, in the cloud or on-premise. - id: 2 icon: /icons/outline/switches-blue.svg title: Transparent Control description: Fully managed experience for your Qdrant clusters, while your data remains exclusively yours. form: title: Connect with us # description: id: contact-sales-form hubspotFormOptions: '{ "region": "eu1", "portalId": "139603372", "formId": "f583c7ea-15ff-4c57-9859-650b8f34f5d3", "submitButtonClass": "button button_contained", }' logosSectionTitle: Qdrant is trusted by top-tier enterprises ---
qdrant-landing/content/contact-sales/_index.md
--- salesTitle: Qdrant Enterprise Solutions description: Our Managed Cloud, Hybrid Cloud, and Private Cloud solutions offer flexible deployment options for top-tier data privacy. cards: - id: 0 icon: /icons/outline/cloud-managed-blue.svg title: Managed Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. - id: 1 icon: /icons/outline/cloud-hybrid-violet.svg title: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud. - id: 2 icon: /icons/outline/cloud-private-teal.svg title: Private Cloud description: Deploy Qdrant in your own infrastructure. form: title: Connect with us # description: id: contact-sales-form hubspotFormOptions: '{ "region": "eu1", "portalId": "139603372", "formId": "fc7a9f1d-9d41-418d-a9cc-ef9c5fb9b207", "submitButtonClass": "button button_contained", }' logosSectionTitle: Qdrant is trusted by top-tier enterprises ---
qdrant-landing/content/contact-us/_index.md
--- title: Contact Qdrant description: Please let us know how we can help and we will get in touch with you soon. cards: - id: 0 icon: /icons/outline/comments-violet.svg title: Qdrant Cloud Support description: For questions or issues with Qdrant Cloud, please report to mailLink: text: support@qdrant.io href: support@qdrant.io - id: 1 icon: /icons/outline/discord-blue.svg title: Developer Support description: For developer questions about Qdrant usage, please join our link: text: Discord Server href: https://qdrant.to/discord form: id: contact-us-form title: Talk to our Team hubspotFormOptions: '{ "region": "eu1", "portalId": "139603372", "formId": "814b303f-2f24-460a-8a81-367146d98786", "submitButtonClass": "button button_contained", }' ---
qdrant-landing/content/customers/_index.md
--- title: Customers description: Customers build: render: always cascade: - build: list: local publishResources: false render: never ---
qdrant-landing/content/customers/customers-case-studies.md
--- title: Customers description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing. caseStudy: logo: src: /img/customers-case-studies/customer-logo.svg alt: Logo title: Recommendation Engine with Qdrant Vector Database description: Dailymotion leverages Qdrant to optimize its <b>video recommendation engine</b>, managing over 420 million videos and processing 13 million recommendations daily. With this, Dailymotion was able to <b>reduce content processing times from hours to minutes</b> and <b>increase user interactions and click-through rates by more than 3x.</b> link: text: Read Case Study url: /blog/case-study-dailymotion/ image: src: /img/customers-case-studies/case-study.png alt: Preview cases: - id: 0 logo: src: /img/customers-case-studies/visua.svg alt: Visua Logo image: src: /img/customers-case-studies/case-visua.png alt: The hands of a person in a medical gown holding a tablet against the background of a pharmacy shop title: VISUA improves quality control process for computer vision with anomaly detection by 10x. link: text: Read Story url: /blog/case-study-visua/ - id: 1 logo: src: /img/customers-case-studies/dust.svg alt: Dust Logo image: src: /img/customers-case-studies/case-dust.png alt: A man in a jeans shirt is holding a smartphone, only his hands are visible. In the foreground, there is an image of a robot surrounded by chat and sound waves. title: Dust uses Qdrant for RAG, achieving millisecond retrieval, reducing costs by 50%, and boosting scalability. link: text: Read Story url: /blog/dust-and-qdrant/ - id: 2 logo: src: /img/customers-case-studies/iris-agent.svg alt: Logo image: src: /img/customers-case-studies/case-iris-agent.png alt: Hands holding a smartphone, styled smartphone interface visualisation in the foreground. First-person view title: IrisAgent uses Qdrant for RAG to automate support, and improve resolution times, transforming customer service. link: text: Read Story url: /blog/iris-agent-qdrant/ sitemapExclude: true ---
qdrant-landing/content/customers/customers-hero.md
--- title: Customers description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing. sitemapExclude: true ---
qdrant-landing/content/customers/customers-testimonial1.md
--- review: “We looked at all the big options out there right now for vector databases, with our focus on ease of use, performance, pricing, and communication. <strong>Qdrant came out on top in each category...</strong> ultimately, it wasn't much of a contest.” names: Alex Webb positions: Director of Engineering, CB Insights avatar: src: /img/customers/alex-webb.svg alt: Alex Webb Avatar logo: src: /img/brands/cb-insights.svg alt: Logo sitemapExclude: true ---
qdrant-landing/content/customers/customers-testimonial2.md
--- review: “We LOVE Qdrant! The exceptional engineering, strong business value, and outstanding team behind the product drove our choice. Thank you for your great contribution to the technology community!” names: Kyle Tobin positions: Principal, Cognizant avatar: src: /img/customers/kyle-tobin.png alt: Kyle Tobin Avatar logo: src: /img/brands/cognizant.svg alt: Cognizant Logo sitemapExclude: true ---
qdrant-landing/content/customers/customers-vector-space-wall.md
--- title: Vector Space Wall link: url: https://testimonial.to/qdrant/all text: Submit Your Testimonial testimonials: - id: 0 name: Jonathan Eisenzopf position: Chief Strategy and Research Officer at Talkmap avatar: src: /img/customers/jonathan-eisenzopf.svg alt: Avatar text: “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform on enterprise scale.” - id: 1 name: Angel Luis Almaraz Sánchez position: Full Stack | DevOps avatar: src: /img/customers/angel-luis-almaraz-sanchez.svg alt: Avatar text: Thank you, great work, Qdrant is my favorite option for similarity search. - id: 2 name: Shubham Krishna position: ML Engineer @ ML6 avatar: src: /img/customers/shubham-krishna.svg alt: Avatar text: Go ahead and checkout Qdrant. I plan to build a movie retrieval search where you can ask anything regarding a movie based on the vector embeddings generated by a LLM. It can also be used for getting recommendations. - id: 3 name: Kwok Hing LEON position: Data Science avatar: src: /img/customers/kwok-hing-leon.svg alt: Avatar text: Check out qdrant for improving searches. Bye to non-semantic KM engines. - id: 4 name: Ankur S position: Building avatar: src: /img/customers/ankur-s.svg alt: Avatar text: Quadrant is a great vector database. There is a real sense of thought behind the api! - id: 5 name: Yasin Salimibeni View Yasin Salimibeni’s profile position: AI Evangelist | Generative AI Product Designer | Entrepreneur | Mentor avatar: src: /img/customers/yasin-salimibeni-view-yasin-salimibeni.svg alt: Avatar text: Great work. I just started testing Qdrant Azure and I was impressed by the efficiency and speed. Being deploy-ready on large cloud providers is a great plus. Way to go! - id: 6 name: Marcel Coetzee position: Data and AI Plumber avatar: src: /img/customers/marcel-coetzee.svg alt: Avatar text: Using Qdrant as a blazing fact vector store for a stealth project of mine. It offers fantasic functionality for semantic search &#10024; - id: 7 name: Andrew Rove position: Principal Software Engineer avatar: src: /img/customers/andrew-rove.svg alt: Avatar text: We have been using Qdrant in production now for over 6 months to store vectors for cosine similarity search and it is way more stable and faster than our old ElasticSearch vector index.<br/><br/>No merging segments, no red indexes at random times. It just works and was super easy to deploy via docker to our cluster.<br/><br/>It’s faster, cheaper to host, and more stable, and open source to boot! - id: 8 name: Josh Lloyd position: ML Engineer avatar: src: /img/customers/josh-lloyd.svg alt: Avatar text: I'm using Qdrant to search through thousands of documents to find similar text phrases for question answering. Qdrant's awesome filtering allows me to slice along metadata while I'm at it! &#128640; and it's fast &#9193;&#128293; - id: 9 name: Leonard Püttmann position: data scientist avatar: src: /img/customers/leonard-puttmann.svg alt: Avatar text: Amidst the hype around vector databases, Qdrant is by far my favorite one. It's super fast (written in Rust) and open-source! At Kern AI we use Qdrant for fast document retrieval and to do quick similarity search for text data. - id: 10 name: Stanislas Polu position: Software Engineer & Co-Founder, Dust avatar: src: /img/customers/stanislas-polu.svg alt: Avatar text: Qdrant's the best. By. Far. 
- id: 11 name: Sivesh Sukumar position: Investor at Balderton avatar: src: /img/customers/sivesh-sukumar.svg alt: Avatar text: We're using Qdrant to help segment and source Europe's next wave of extraordinary companies! - id: 12 name: Saksham Gupta position: AI Governance Machine Learning Engineer avatar: src: /img/customers/saksham-gupta.svg alt: Avatar text: Looking forward to using Qdrant vector similarity search in the clinical trial space! OpenAI Embeddings + Qdrant = Match made in heaven! - id: 13 name: Rishav Dash position: Data Scientist avatar: src: /img/customers/rishav-dash.svg alt: Avatar text: awesome stuff &#128293; sitemapExclude: true ---
qdrant-landing/content/customers/logo-cards-1.md
--- logos: - /img/customers-logo/discord.svg - /img/customers-logo/johnson-and-johnson.svg - /img/customers-logo/perplexity.svg - /img/customers-logo/mozilla.svg - /img/customers-logo/voiceflow.svg - /img/customers-logo/bosch-digital.svg sitemapExclude: true ---
qdrant-landing/content/customers/logo-cards-2.md
--- logos: - /img/customers-logo/flipkart.svg - /img/customers-logo/x.svg - /img/customers-logo/quora.svg sitemapExclude: true ---
qdrant-landing/content/customers/logo-cards-3.md
--- logos: - /img/customers-logo/gitbook.svg - /img/customers-logo/deloitte.svg - /img/customers-logo/disney.svg sitemapExclude: true ---
qdrant-landing/content/data-analysis/_index.md
--- title: data-analysis description: data-analysis url: data-analysis-anomaly-detection build: render: always cascade: - build: list: local publishResources: false render: never ---
qdrant-landing/content/data-analysis/data-analysis-anomaly-detection.md
--- title: Anomaly Detection with Qdrant description: Qdrant optimizes anomaly detection by integrating vector embeddings for nuanced data analysis. It supports dissimilarity, diversity searches, and advanced anomaly detection techniques, enhancing applications from cybersecurity to finance with precise, efficient data insights. image: src: /img/data-analysis-anomaly-detection/anomaly-detection.svg alt: Anomaly detection caseStudy: logo: src: /img/data-analysis-anomaly-detection/customer-logo.svg alt: Logo title: Metric Learning for Anomaly Detection description: "Detecting Coffee Anomalies with Qdrant: Discover how Qdrant can be used for anomaly detection in green coffee quality control, transforming the industry's approach to sorting and classification." link: text: Read Case Study url: /articles/detecting-coffee-anomalies/ image: src: /img/data-analysis-anomaly-detection/case-study.png alt: Preview sitemapExclude: true ---
qdrant-landing/content/data-analysis/data-analysis-hero.md
--- title: Data Analysis and Anomaly Detection description: Explore entity matching for deduplication and anomaly detection with Qdrant. It leverages neural networks while staying fast and affordable, giving your applications insights that are hard to get in other ways. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-3.svg alt: Anomaly Detection sitemapExclude: true ---
qdrant-landing/content/debug.skip/_index.md
--- title: Debugging ---
qdrant-landing/content/debug.skip/bootstrap.md
--- title: Bootstrap slug: bootstrap --- <h2>Colors</h2> <details> <summary>Toggle details</summary> <h3>Text Color</h3> <p>Ignore the background colors in this section, they are just to show the text color.</p> <p class="text-primary">.text-primary</p> <p class="text-secondary">.text-secondary</p> <p class="text-success">.text-success</p> <p class="text-danger">.text-danger</p> <p class="text-warning bg-dark">.text-warning</p> <p class="text-info bg-dark">.text-info</p> <p class="text-light bg-dark">.text-light</p> <p class="text-dark">.text-dark</p> <p class="text-body">.text-body</p> <p class="text-muted">.text-muted</p> <p class="text-white bg-dark">.text-white</p> <p class="text-black-50">.text-black-50</p> <p class="text-white-50 bg-dark">.text-white-50</p> <h3>Background with contrasting text color</h3> <div class="text-bg-primary p-3">Primary with contrasting color</div> <div class="text-bg-secondary p-3">Secondary with contrasting color</div> <div class="text-bg-success p-3">Success with contrasting color</div> <div class="text-bg-danger p-3">Danger with contrasting color</div> <div class="text-bg-warning p-3">Warning with contrasting color</div> <div class="text-bg-info p-3">Info with contrasting color</div> <div class="text-bg-light p-3">Light with contrasting color</div> <div class="text-bg-dark p-3">Dark with contrasting color</div> <h3>Background Classes</h3> <div class="p-3 mb-2 bg-primary text-white">.bg-primary</div> <div class="p-3 mb-2 bg-secondary text-white">.bg-secondary</div> <div class="p-3 mb-2 bg-success text-white">.bg-success</div> <div class="p-3 mb-2 bg-danger text-white">.bg-danger</div> <div class="p-3 mb-2 bg-warning text-dark">.bg-warning</div> <div class="p-3 mb-2 bg-info text-dark">.bg-info</div> <div class="p-3 mb-2 bg-light text-dark">.bg-light</div> <div class="p-3 mb-2 bg-dark text-white">.bg-dark</div> <div class="p-3 mb-2 bg-body text-dark">.bg-body</div> <div class="p-3 mb-2 bg-white text-dark">.bg-white</div> <div class="p-3 mb-2 bg-transparent text-dark">.bg-transparent</div> <h3>Colored Links</h3> <a href="#" class="link-primary">Primary link</a><br> <a href="#" class="link-secondary">Secondary link</a><br> <a href="#" class="link-success">Success link</a><br> <a href="#" class="link-danger">Danger link</a><br> <a href="#" class="link-warning">Warning link</a><br> <a href="#" class="link-info">Info link</a><br> <a href="#" class="link-light">Light link</a><br> <a href="#" class="link-dark">Dark link</a><br> </details> <h2>Typography</h2> <details> <summary>Toggle details</summary> <h1>h1. Bootstrap heading</h1> <h2>h2. Bootstrap heading</h2> <h3>h3. Bootstrap heading</h3> <h4>h4. Bootstrap heading</h4> <h5>h5. Bootstrap heading</h5> <h6>h6. Bootstrap heading</h6> <p class="h1">h1. Bootstrap heading</p> <p class="h2">h2. Bootstrap heading</p> <p class="h3">h3. Bootstrap heading</p> <p class="h4">h4. Bootstrap heading</p> <p class="h5">h5. Bootstrap heading</p> <p class="h6">h6. Bootstrap heading</p> <h3> Fancy display heading <small class="text-muted">With faded secondary text</small> </h3> <h1 class="display-1">Display 1</h1> <h1 class="display-2">Display 2</h1> <h1 class="display-3">Display 3</h1> <h1 class="display-4">Display 4</h1> <h1 class="display-5">Display 5</h1> <h1 class="display-6">Display 6</h1> <p class="lead"> This is a lead paragraph. It stands out from regular paragraphs. 
<a href="#">Some link</a> </p> <p>You can use the mark tag to <mark>highlight</mark> text.</p> <p><del>This line of text is meant to be treated as deleted text.</del></p> <p><s>This line of text is meant to be treated as no longer accurate.</s></p> <p><ins>This line of text is meant to be treated as an addition to the document.</ins></p> <p><u>This line of text will render as underlined.</u></p> <p><small>This line of text is meant to be treated as fine print.</small></p> <p><strong>This line rendered as bold text.</strong></p> <p><em>This line rendered as italicized text.</em></p> <p><abbr title="attribute">attr</abbr></p> <p><abbr title="HyperText Markup Language" class="initialism">HTML</abbr></p> <p><a href="#">This is a link</a></p> <blockquote class="blockquote"> <p>A well-known quote, contained in a blockquote element.</p> </blockquote> <figure> <blockquote class="blockquote"> <p>A well-known quote, contained in a blockquote element.</p> </blockquote> <figcaption class="blockquote-footer"> Someone famous in <cite title="Source Title">Source Title</cite> </figcaption> </figure> <ul class="list-unstyled"> <li>This is a list.</li> <li>It appears completely unstyled.</li> <li>Structurally, it's still a list.</li> <li>However, this style only applies to immediate child elements.</li> <li>Nested lists: <ul> <li>are unaffected by this style</li> <li>will still show a bullet</li> <li>and have appropriate left margin</li> </ul> </li> <li>This may still come in handy in some situations.</li> </ul> </details>
qdrant-landing/content/debug.skip/components.md
--- title: Components --- ## Buttons **.button** <a href="#" class="button button_contained">Text</a> <button class="button button_outlined">Text</button> <button class="button button_contained" disabled>Text</button> ### Variants <div class="row"> <div class="col-4 p-4"> **.button .button_contained .button_sm** <a href="#" class="button button_contained button_sm">Try Free</a> **.button .button_contained .button_md** <a href="#" class="button button_contained button_md">Try Free</a> **.button .button_contained .button_lg** <a href="#" class="button button_contained button_lg">Try Free</a> **.button .button_contained .button_disabled** <a href="#" class="button button_contained button_disabled">Try Free</a> </div> <div class="col-4 text-bg-dark p-4"> **.button .button_outlined .button_sm** <a href="#" class="button button_outlined button_sm">Try Free</a> **.button .button_outlined .button_md** <a href="#" class="button button_outlined button_md">Try Free</a> **.button .button_outlined .button_lg** <a href="#" class="button button_outlined button_lg">Try Free</a> **.button .button_outlined .button_disabled** <a href="#" class="button button_outlined button_disabled">Try Free</a> </div> </div> ## Links **.link** <a href="#" class="link">Text</a>
qdrant-landing/content/demo/_index.md
--- title: Qdrant Demos and Tutorials description: Experience firsthand how Qdrant powers intelligent search, anomaly detection, and personalized recommendations, showcasing the full capabilities of vector search to revolutionize data exploration and insights. cards: - id: 0 title: Semantic Search Demo - Startup Search paragraphs: - id: 0 content: This demo leverages a pre-trained SentenceTransformer model to perform semantic searches on startup descriptions, transforming them into vectors for the Qdrant engine. - id: 1 content: Enter a query to see how neural search compares to traditional full-text search, with the option to toggle neural search on and off for direct comparison. link: text: View Demo url: https://qdrant.to/semantic-search-demo - id: 1 title: Semantic Search and Recommendations Demo - Food Discovery paragraphs: - id: 0 content: Explore personalized meal recommendations with our demo, using Delivery Service data. Like or dislike dish photos to refine suggestions based on visual appeal. - id: 1 content: Filter options allow for restaurant selections within your delivery area, tailoring your dining experience to your preferences. link: text: View Demo url: https://food-discovery.qdrant.tech/ - id: 2 title: Categorization Demo -<br> E-Commerce Products paragraphs: - id: 0 content: Discover the power of vector databases in e-commerce through our demo. Simply input a product name and watch as our multi-language model intelligently categorizes it. The dots you see represent product clusters, highlighting our system's efficient categorization. link: text: View Demo url: https://qdrant.to/extreme-classification-demo - id: 3 title: Code Search Demo -<br> Explore Qdrant's Codebase paragraphs: - id: 0 content: Semantic search isn't just for natural language. By combining results from two models, qdrant is able to locate relevant code snippets down to the exact line. link: text: View Demo url: https://code-search.qdrant.tech/ sitemapExclude: true ---
qdrant-landing/content/documentation/0-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Getting Started" type: delimiter weight: 8 # Change this weight to change order of sections sitemapExclude: True ---
qdrant-landing/content/documentation/1-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "User Manual" type: delimiter weight: 20 # Change this weight to change order of sections sitemapExclude: True ---
qdrant-landing/content/documentation/2-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Integrations" type: delimiter weight: 30 # Change this weight to change order of sections sitemapExclude: True ---
qdrant-landing/content/documentation/3-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Support" type: delimiter weight: 40 # Change this weight to change order of sections sitemapExclude: True ---
qdrant-landing/content/documentation/4-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Managed Services" type: delimiter weight: 13 # Change this weight to change order of sections sitemapExclude: True ---
qdrant-landing/content/documentation/_index.md
--- title: Qdrant Documentation weight: 10 --- # Documentation **Qdrant (read: quadrant)** is a vector similarity search engine. Use our documentation to develop a production-ready service with a convenient API to store, search, and manage vectors with an additional payload. Qdrant's expanding features allow for all sorts of neural network or semantic-based matching, faceted search, and other applications. ## Product Release: Announcing Qdrant Hybrid Cloud! ***<p style="text-align: center;">Now you can attach your own infrastructure to Qdrant Cloud!</p>*** [![Hybrid Cloud](/docs/homepage/hybrid-cloud-cta.png)](https://qdrant.to/cloud) Use [**Qdrant Hybrid Cloud**](/hybrid-cloud/) to build the best private environment that suits your needs. Manage your own clusters via the [Qdrant Cloud UI](/documentation/cloud/), but continue to run them within your own private infrastructure for complete security and sovereignty. ## First-Time Users: There are three ways to use Qdrant: 1. [**Run a Docker image**](quick-start/) if you don't have a Python development environment. Set up a local Qdrant server and storage in a few moments. 2. [**Get the Python client**](https://github.com/qdrant/qdrant-client) if you're familiar with Python. Just `pip install qdrant-client`. The client also supports an in-memory database (see the minimal sketch at the end of this page). 3. [**Spin up a Qdrant Cloud cluster:**](cloud/) the recommended method to run Qdrant in production. Read [Quickstart](cloud/quickstart-cloud/) to set up your first instance. ### Recommended Workflow: ![Local mode workflow](https://raw.githubusercontent.com/qdrant/qdrant-client/master/docs/images/try-develop-deploy.png) First, try Qdrant locally using the [Qdrant Client](https://github.com/qdrant/qdrant-client) and with the help of our [Tutorials](tutorials/) and Guides. Develop a sample app from our [Examples](examples/) list and try it using a [Qdrant Docker](guides/installation/) container. Then, when you are ready for production, deploy to a Free Tier [Qdrant Cloud](cloud/) cluster. ### Try Qdrant with Practice Data: You may always use our [Practice Datasets](datasets/) to build with Qdrant. This page will be regularly updated with dataset snapshots you can use to bootstrap complete projects. ## Popular Topics: | Tutorial | Description | Tutorial | Description | |----------------------------------------------------|----------------------------------------------|---------|------------------| | [Installation](guides/installation/) | Different ways to install Qdrant. | [Collections](concepts/collections/) | Learn about the central concept behind Qdrant. | | [Configuration](guides/configuration/) | Update the default configuration. | [Bulk Upload](tutorials/bulk-upload/) | Efficiently upload a large number of vectors. | | [Optimization](tutorials/optimize/) | Optimize Qdrant's resource usage. | [Multitenancy](tutorials/multiple-partitions/) | Set up Qdrant for multiple independent users. | ## Common Use Cases: Qdrant is ideal for deploying applications based on the matching of embeddings produced by neural network encoders. Check out the [Examples](examples/) section to learn more about common use cases. Also, you can visit the [Tutorials](tutorials/) page to learn how to work with Qdrant in different ways. | Use Case | Description | Stack | |-----------------------|----------------------------------------------|--------| | [Semantic Search for Beginners](tutorials/search-beginners/) | Build a search engine locally with our most basic instruction set.
| Qdrant | | [Build a Simple Neural Search](tutorials/neural-search/) | Build and deploy a neural search. [Check out the live demo app.](https://demo.qdrant.tech/#/) | Qdrant, BERT, FastAPI | | [Build a Search with Aleph Alpha](tutorials/aleph-alpha-search/) | Build a simple semantic search that combines text and image data. | Qdrant, Aleph Alpha | | [Developing Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant | | [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant | | [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant | | [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant | | [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere | | [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant | | [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant |
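For readers picking option 2 above, here is a minimal sketch of the in-memory mode (illustrative only; the collection name, vector size, and values are placeholders rather than part of any official example):

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# ":memory:" starts a throwaway, in-process instance; nothing is persisted.
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="demo",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="demo",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"tag": "example"})],
)
print(client.search(collection_name="demo", query_vector=[0.1, 0.2, 0.3, 0.4], limit=1))
```

The same code runs unchanged against a Docker or Cloud instance once you point the client at a URL instead of `":memory:"`.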
qdrant-landing/content/documentation/api-reference.md
--- title: API Reference weight: 12 type: external-link external_url: https://api.qdrant.tech/api-reference sitemapExclude: True ---
qdrant-landing/content/documentation/benchmarks.md
--- title: Benchmarks weight: 33 draft: true ---
qdrant-landing/content/documentation/community-links.md
--- title: Community links weight: 42 --- # Community Contributions Though we do not officially maintain this content, we still feel that it is valuable and thank our dedicated contributors. | Link | Description | Stack | |------|------------------------------|--------| | [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete Python toolset that supports migration between the two products. | Qdrant, Pinecone | | [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex | | [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built-in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
qdrant-landing/content/documentation/contribution-guidelines.md
--- title: Contribution Guidelines weight: 35 draft: true --- # How to contribute If you are a Qdrant user - a Data Scientist, ML Engineer, or MLOps Engineer - the best contribution is feedback on your experience with Qdrant. Let us know whenever you have a problem, face unexpected behavior, or see a lack of documentation. You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop us a [message](https://discord.gg/tdtYvXjC4h). If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community. For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md). If you have trouble understanding the code or architecture, reach out to us at any time. Feeling confident and want to contribute more? Come [work with us](https://qdrant.join.com/)!
qdrant-landing/content/documentation/datasets.md
--- title: Practice Datasets weight: 41 --- # Common Datasets in Snapshot Format You may find that creating embeddings from datasets is a very resource-intensive task. If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page. These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance. ## Available datasets Our snapshots are usually generated from publicly available datasets, which are often used for non-commercial or academic purposes. The following datasets are currently available. Please click on a dataset name to see its detailed description. | Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub | |--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) | | [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) | | [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) | Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot) using the Qdrant CLI upon startup or through the API. ## Qdrant on Hugging Face <p align="center"> <a href="https://huggingface.co/Qdrant"> <img style="width: 500px; max-width: 100%;" src="/content/images/hf-logo-with-title.svg" alt="HuggingFace" title="HuggingFace"> </a> </p> [Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to provide you with datasets containing neural embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please let us know if you'd like to see a specific dataset!** If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index), or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/). ## Arxiv.org [Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple fields. Operated by Cornell University, arXiv allows researchers to share their findings with the scientific community and receive feedback before they undergo peer review for formal publication. Its archives host millions of scholarly articles, making it an invaluable resource for those looking to explore the cutting edge of scientific research. 
With a high frequency of daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset that is ripe for mining, analysis, and the development of future innovations. <aside role="status"> Arxiv.org snapshots were created using precomputed embeddings exposed by <a href="https://alex.macrocosm.so/download">the Alexandria Index</a>. </aside> ### Arxiv.org titles This dataset contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier). ```json { "title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities", "DOI": "1612.05191" } ``` The embeddings generated with InstructorXL model have been generated using the following instruction: > Represent the Research Paper title for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Research Paper title for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot" } ``` ### Arxiv.org abstracts This dataset contains embeddings generated from the paper abstracts. Each vector has a payload with the abstract used to create it, along with the DOI (Digital Object Identifier). ```json { "abstract": "Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n", "DOI": "1612.05191" } ``` The embeddings generated with InstructorXL model have been generated using the following instruction: > Represent the Research Paper abstract for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. 
We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train." instruction = "Represent the Research Paper abstract for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot" } ``` ## Wolt food Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of food images from the Wolt app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 float numbers. There is also a JSON payload attached to each point, which looks similar to this: ```json { "cafe": { "address": "VGX7+6R2 Vecchia Napoli, Valletta", "categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"], "location": {"lat": 35.8980154, "lon": 14.5145106}, "menu_id": "610936a4ee8ea7a56f4a372a", "name": "Vecchia Napoli Is-Suq Tal-Belt", "rating": 9, "slug": "vecchia-napoli-skyparks-suq-tal-belt" }, "description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli", "image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg", "name": "L'Amatriciana" } ``` The embeddings generated with clip-ViT-B-32 model have been generated using the following code snippet: ```python from PIL import Image from sentence_transformers import SentenceTransformer image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg" model = SentenceTransformer("clip-ViT-B-32") embedding = model.encode(Image.open(image_path)) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot" } ```
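If you prefer the Python client over the raw HTTP call, the same snapshot recovery can be sketched as follows (a sketch only; the collection name is an assumption, and the URL is the Wolt snapshot listed above):

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Recover a collection directly from the published snapshot URL.
client.recover_snapshot(
    collection_name="wolt_food",
    location="https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot",
)
```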
qdrant-landing/content/documentation/quick-start.md
--- title: Quickstart weight: 11 aliases: - quick_start --- # Quickstart In this short example, you will use the Python client to create a collection, load data into it, and run a basic search query. <aside role="status">Before you start, please make sure Docker is installed and running on your system.</aside> ## Download and run First, download the latest Qdrant image from Docker Hub: ```bash docker pull qdrant/qdrant ``` Then, run the service: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` Under the default configuration, all data will be stored in the `./qdrant_storage` directory. This is also the only directory that both the container and the host machine can see. Qdrant is now accessible: - REST API: [localhost:6333](http://localhost:6333) - Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard) - GRPC API: [localhost:6334](http://localhost:6334) ## Initialize the client ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); ``` ```rust use qdrant_client::client::QdrantClient; // The Rust client uses Qdrant's GRPC interface let client = QdrantClient::from_url("http://localhost:6334").build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; // The Java client uses Qdrant's GRPC interface QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); ``` ```csharp using Qdrant.Client; // The C# client uses Qdrant's GRPC interface var client = new QdrantClient("localhost", 6334); ``` <aside role="status">By default, Qdrant starts with no encryption or authentication. This means anyone with network access to your machine can access your Qdrant container instance. Please read <a href="/documentation/security/">Security</a> carefully for details on how to secure your instance.</aside> ## Create a collection You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection will use the dot product distance metric to compare vectors.
```python from qdrant_client.models import Distance, VectorParams client.create_collection( collection_name="test_collection", vectors_config=VectorParams(size=4, distance=Distance.DOT), ) ``` ```typescript await client.createCollection("test_collection", { vectors: { size: 4, distance: "Dot" }, }); ``` ```rust use qdrant_client::qdrant::{vectors_config::Config, VectorParams, VectorsConfig}; client .create_collection(&CreateCollection { collection_name: "test_collection".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 4, distance: Distance::Dot.into(), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; client.createCollectionAsync("test_collection", VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get(); ``` ```csharp using Qdrant.Client.Grpc; await client.CreateCollectionAsync( collectionName: "test_collection", vectorsConfig: new VectorParams { Size = 4, Distance = Distance.Dot } ); ``` <aside role="status">TypeScript, Rust examples use async/await syntax, so should be used in an async block.</aside> <aside role="status">Java examples are enclosed within a try/catch block.</aside> ## Add vectors Let's now add a few vectors with a payload. Payloads are other data you want to associate with the vector: ```python from qdrant_client.models import PointStruct operation_info = client.upsert( collection_name="test_collection", wait=True, points=[ PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"city": "Berlin"}), PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={"city": "London"}), PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={"city": "Moscow"}), PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={"city": "New York"}), PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={"city": "Beijing"}), PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={"city": "Mumbai"}), ], ) print(operation_info) ``` ```typescript const operationInfo = await client.upsert("test_collection", { wait: true, points: [ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: "Berlin" } }, { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: "London" } }, { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: "Moscow" } }, { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: "New York" } }, { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: "Beijing" } }, { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: "Mumbai" } }, ], }); console.debug(operationInfo); ``` ```rust use qdrant_client::qdrant::PointStruct; use serde_json::json; let points = vec![ PointStruct::new( 1, vec![0.05, 0.61, 0.76, 0.74], json!( {"city": "Berlin"} ) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.19, 0.81, 0.75, 0.11], json!( {"city": "London"} ) .try_into() .unwrap(), ), // ..truncated ]; let operation_info = client .upsert_points_blocking("test_collection".to_string(), None, points, None) .await?; dbg!(operation_info); ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpdateResult; UpdateResult operationInfo = client .upsertAsync( "test_collection", List.of( PointStruct.newBuilder() .setId(id(1)) 
.setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of("city", value("Berlin"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f)) .putAllPayload(Map.of("city", value("London"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f)) .putAllPayload(Map.of("city", value("Moscow"))) .build())) // Truncated .get(); System.out.println(operationInfo); ``` ```csharp using Qdrant.Client.Grpc; var operationInfo = await client.UpsertAsync( collectionName: "test_collection", points: new List<PointStruct> { new() { Id = 1, Vectors = new float[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { ["city"] = "Berlin" } }, new() { Id = 2, Vectors = new float[] { 0.19f, 0.81f, 0.75f, 0.11f }, Payload = { ["city"] = "London" } }, new() { Id = 3, Vectors = new float[] { 0.36f, 0.55f, 0.47f, 0.94f }, Payload = { ["city"] = "Moscow" } }, // Truncated } ); Console.WriteLine(operationInfo); ``` **Response:** ```python operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'> ``` ```typescript { operation_id: 0, status: 'completed' } ``` ```rust PointsOperationResponse { result: Some(UpdateResult { operation_id: 0, status: Completed, }), time: 0.006347708, } ``` ```java operation_id: 0 status: Completed ``` ```csharp { "operationId": "0", "status": "Completed" } ``` ## Run a query Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`? ```python search_result = client.search( collection_name="test_collection", query_vector=[0.2, 0.1, 0.9, 0.7], limit=3 ) print(search_result) ``` ```typescript let searchResult = await client.search("test_collection", { vector: [0.2, 0.1, 0.9, 0.7], limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::SearchPoints; let search_result = client .search_points(&SearchPoints { collection_name: "test_collection".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], limit: 3, with_payload: Some(true.into()), ..Default::default() }) .await?; dbg!(search_result); ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.ScoredPoint; import io.qdrant.client.grpc.Points.SearchPoints; import static io.qdrant.client.WithPayloadSelectorFactory.enable; List<ScoredPoint> searchResult = client .searchAsync( SearchPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()) .get(); System.out.println(searchResult); ``` ```csharp var searchResult = await client.SearchAsync( collectionName: "test_collection", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` **Response:** ```python ScoredPoint(id=4, version=0, score=1.362, payload={"city": "New York"}, vector=None), ScoredPoint(id=1, version=0, score=1.273, payload={"city": "Berlin"}, vector=None), ScoredPoint(id=3, version=0, score=1.208, payload={"city": "Moscow"}, vector=None) ``` ```typescript [ { id: 4, version: 0, score: 1.362, payload: null, vector: null, }, { id: 1, version: 0, score: 1.273, payload: null, vector: null, }, { id: 3, version: 0, score: 1.208, payload: null, vector: null, }, ]; ``` ```rust SearchResponse { result: [ ScoredPoint { id: Some(PointId { point_id_options: Some(Num(4)), }), payload: {}, score: 1.362, version: 0, vectors: None, }, ScoredPoint { id: Some(PointId { point_id_options: Some(Num(1)), }), payload: {}, score: 1.273, version: 0, vectors: None, }, 
ScoredPoint { id: Some(PointId { point_id_options: Some(Num(3)), }), payload: {}, score: 1.208, version: 0, vectors: None, }, ], time: 0.003635125, } ``` ```java [id { num: 4 } payload { key: "city" value { string_value: "New York" } } score: 1.362 version: 1 , id { num: 1 } payload { key: "city" value { string_value: "Berlin" } } score: 1.273 version: 1 , id { num: 3 } payload { key: "city" value { string_value: "Moscow" } } score: 1.208 version: 1 ] ``` ```csharp [ { "id": { "num": "4" }, "payload": { "city": { "stringValue": "New York" } }, "score": 1.362, "version": "7" }, { "id": { "num": "1" }, "payload": { "city": { "stringValue": "Berlin" } }, "score": 1.273, "version": "7" }, { "id": { "num": "3" }, "payload": { "city": { "stringValue": "Moscow" } }, "score": 1.208, "version": "7" } ] ``` The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default. See [payload and vector in the result](../concepts/search/#payload-and-vector-in-the-result) on how to enable it. ## Add a filter We can narrow down the results further by filtering by payload. Let's find the closest results that include "London". ```python from qdrant_client.models import Filter, FieldCondition, MatchValue search_result = client.search( collection_name="test_collection", query_vector=[0.2, 0.1, 0.9, 0.7], query_filter=Filter( must=[FieldCondition(key="city", match=MatchValue(value="London"))] ), with_payload=True, limit=3, ) print(search_result) ``` ```typescript searchResult = await client.search("test_collection", { vector: [0.2, 0.1, 0.9, 0.7], filter: { must: [{ key: "city", match: { value: "London" } }], }, with_payload: true, limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, SearchPoints}; let search_result = client .search_points(&SearchPoints { collection_name: "test_collection".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], filter: Some(Filter::all([Condition::matches( "city", "London".to_string(), )])), limit: 2, ..Default::default() }) .await?; dbg!(search_result); ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; List<ScoredPoint> searchResult = client .searchAsync( SearchPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London"))) .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()) .get(); System.out.println(searchResult); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; var searchResult = await client.SearchAsync( collectionName: "test_collection", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword("city", "London"), limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` **Response:** ```python ScoredPoint(id=2, version=0, score=0.871, payload={"city": "London"}, vector=None) ``` ```typescript [ { id: 2, version: 0, score: 0.871, payload: { city: "London" }, vector: null, }, ]; ``` ```rust SearchResponse { result: [ ScoredPoint { id: Some( PointId { point_id_options: Some( Num( 2, ), ), }, ), payload: { "city": Value { kind: Some( StringValue( "London", ), ), }, }, score: 0.871, version: 0, vectors: None, }, ], time: 0.004001083, } ``` ```java [id { num: 2 } payload { key: "city" value { string_value: "London" } } score: 0.871 version: 1 ] ``` ```csharp [ { "id": { "num": "2" }, "payload": { "city": { "stringValue": "London" } }, "score": 0.871, "version": "7" } ] ``` <aside role="status">To make 
filtered search fast on real datasets, we highly recommend creating <a href="../concepts/indexing/#payload-index">payload indexes</a>!</aside> You have just conducted a vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score. ## Next steps Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates. To move on to some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/). **Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get set up.
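For reference, here is a minimal sketch of that Local Mode (both constructor forms are real client options; the directory name is just an example):

```python
from qdrant_client import QdrantClient

# Local Mode runs the same client API without a separate Qdrant server.
client = QdrantClient(":memory:")             # keeps data only for the lifetime of the process
# client = QdrantClient(path="./qdrant_data")  # or persist data to a local directory instead
```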
qdrant-landing/content/documentation/release-notes.md
--- title: Release notes weight: 42 type: external-link external_url: https://github.com/qdrant/qdrant/releases sitemapExclude: True ---
qdrant-landing/content/documentation/roadmap.md
--- title: Roadmap weight: 32 draft: true --- # Qdrant 2023 Roadmap Goals of the release: * **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back. * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version. * Storage should be compatible between any two consecutive versions, so you can upgrade Qdrant with automatic data migration between consecutive versions. * **Make billion-scale serving cheap** - Qdrant can already serve billions of vectors, but we want to make it even more affordable. * **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you can go from 1 to 1B vectors seamlessly. * **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc. ## Milestones * :atom_symbol: Quantization support * [ ] Scalar quantization f32 -> u8 (4x compression) * [ ] Advanced quantization (8x and 16x compression) * [ ] Support for binary vectors --- * :arrow_double_up: Scalability * [ ] Automatic replication factor adjustment * [ ] Automatic shard distribution on cluster scaling * [ ] Repartitioning support --- * :eyes: Search scenarios * [ ] Diversity search - search for vectors that are different from each other * [ ] Sparse vector search - search for vectors with a small number of non-zero values * [ ] Grouping requests - search within payload-defined groups * [ ] Different scenarios for recommendation API --- * Additionally * [ ] Extend full-text filtering support * [ ] Support for phrase queries * [ ] Support for logical operators * [ ] Simplify update of collection parameters
qdrant-landing/content/documentation/cloud/_index.md
--- title: Managed Cloud weight: 14 aliases: - /documentation/overview/qdrant-alternatives/documentation/cloud/ --- # About Qdrant Managed Cloud Qdrant Managed Cloud is our SaaS (software-as-a-service) solution, providing managed Qdrant database clusters in the cloud. We provide the same fast and reliable similarity search engine, but without the need to maintain your own infrastructure. Transitioning to the Managed Cloud version of Qdrant does not change how you interact with the service. All you need is a [Qdrant Cloud account](https://qdrant.to/cloud/) and an [API key](/documentation/cloud/authentication/) for each request. You can also attach your own infrastructure as a Hybrid Cloud Environment. For details, see our [Hybrid Cloud](/documentation/hybrid-cloud/) documentation. ## Cluster configuration Each database cluster comes pre-configured with the following tools, features, and support services: - Highly available clusters with automatic failover. - Upgrades to later versions of Qdrant as they are released, with zero downtime on highly available clusters. - Monitoring and logging to observe the health of each cluster. - Horizontal and vertical scalability. - Native availability on AWS, GCP, and Azure. - Availability on your own infrastructure and other providers if you use the Hybrid Cloud. ## Getting started with Qdrant Managed Cloud To use Qdrant Managed Cloud, you need at least one cluster. You can create one in the following ways: 1. [**Create a Free Tier cluster**](/documentation/cloud/quickstart-cloud/) with one node and a default configuration (1 GB RAM, 0.5 CPU, and 4 GB disk). This option is perfect for prototyping. You don't need a credit card to join. 2. [**Configure a custom cluster**](/documentation/cloud/create-cluster/) with additional nodes and resources. For this option, you need billing information. If you're testing Qdrant, we recommend the Free Tier cluster. The capacity should be enough to serve up to 1M vectors of 768 dimensions. To calculate your needs, refer to our documentation on [Capacity and sizing](/documentation/cloud/capacity-sizing/). ## Support & Troubleshooting All Qdrant Cloud users are welcome to join our [Discord community](https://qdrant.to/discord/). Our Support Engineers are available to help you anytime. Paid customers can also contact support directly. Links to the support portal are available in the Qdrant Cloud Console.
qdrant-landing/content/documentation/cloud/authentication.md
--- title: Authentication weight: 30 --- # Authentication This page shows you how to use the Qdrant Cloud Console to create a custom API key for a cluster. You will learn how to connect to your cluster using the new API key. ## Create API keys The API key is only shown once after creation. If you lose it, you will need to create a new one. However, we recommend rotating the keys from time to time. To create additional API keys do the following. 1. Go to the [Cloud Dashboard](https://qdrant.to/cloud). 2. Select **Access Management** to display available API keys. 3. Click **Create** and choose a cluster name from the dropdown menu. > **Note:** You can create a key that provides access to multiple clusters. Select desired clusters in the dropdown box. 4. Click **OK** and retrieve your API key. ## Authenticate via SDK Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application. Our official Qdrant clients for Python, TypeScript, Go, Rust, .NET and Java all support the API key parameter. ```bash curl \ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \ --header 'api-key: <provide-your-own-key>' # Alternatively, you can use the `Authorization` header with the `Bearer` prefix curl \ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \ --header 'Authorization: Bearer <provide-your-own-key>' ``` ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( "xyz-example.eu-central.aws.cloud.qdrant.io", api_key="<paste-your-api-key-here>", ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "xyz-example.eu-central.aws.cloud.qdrant.io", apiKey: "<paste-your-api-key-here>", }); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("xyz-example.eu-central.aws.cloud.qdrant.io:6334") .with_api_key("<paste-your-api-key-here>") .build() .unwrap(); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( "xyz-example.eu-central.aws.cloud.qdrant.io", 6334, true) .withApiKey("<paste-your-api-key-here>") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: "xyz-example.eu-central.aws.cloud.qdrant.io", https: true, apiKey: "<paste-your-api-key-here>" ); ```
qdrant-landing/content/documentation/cloud/aws-marketplace.md
--- title: AWS Marketplace weight: 60 --- # Qdrant Cloud on AWS Marketplace ## Overview Our [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg) listing streamlines access to Qdrant for users who rely on Amazon Web Services for hosting and application development. Please note that, while Qdrant's clusters run on AWS, you will still use the Qdrant Cloud infrastructure. ## Billing You don't need to use a credit card to sign up for Qdrant Cloud. Instead, all billing is processed through the AWS Marketplace, and your Qdrant usage is added to your existing bill for AWS services. It is common for AWS to abstract usage-based pricing in the AWS Marketplace, as there are too many factors to model when calculating billing on the AWS side. ![pricing](/docs/cloud/pricing.png) Payment is carried out via your AWS account. To get a clearer idea of the pricing structure, please use our [Billing Calculator](https://cloud.qdrant.io/calculator). ## How to subscribe 1. Go to [Qdrant's AWS Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg). 2. Click the bright orange button - **View purchase options**. 3. On the next screen, under Purchase, click **Subscribe**. 4. Up top, on the green banner, click **Set up your account**. ![setup](/docs/cloud/setup.png) You will be transferred outside of AWS to [Qdrant Cloud](https://qdrant.to/cloud) via your unique AWS Offer ID. The Billing Details screen will open in the Qdrant Cloud Console. Stay in this console if you want to create your first Qdrant cluster hosted on AWS. > **Note:** You do not have to return to the AWS Control Panel. All Qdrant infrastructure is provisioned from the Qdrant Cloud Console. ## Next steps Now that you have signed up via AWS Marketplace, please read our instructions to get started: 1. Learn more about [cluster creation and basic config](../../cloud/create-cluster/) in Qdrant Cloud. 2. Learn how to [authenticate and access your cluster](../../cloud/authentication/). 3. Check out the additional open source [documentation](/documentation/guides/common-errors/).
qdrant-landing/content/documentation/cloud/backups.md
--- title: Backups weight: 70 --- # Cloud Backups Qdrant organizes cloud instances as clusters. On occasion, you may need to restore your cluster because of application or system failure. You may already have a source of truth for your data in a regular database. If you have a problem, you could reindex the data into your Qdrant vector search cluster. However, this process can take time. For projects where high availability is critical, we recommend replication. It guarantees proper cluster functionality as long as at least one replica is running. For other use cases, such as disaster recovery, you can set up automatic or self-service backups. ## Prerequisites You can back up your Qdrant clusters through the Qdrant Cloud Dashboard at https://cloud.qdrant.io. This section assumes that you've already set up your cluster, as described in the following sections: - [Create a cluster](/documentation/cloud/create-cluster/) - Set up [Authentication](/documentation/cloud/authentication/) - Configure one or more [Collections](/documentation/concepts/collections/) ## Automatic backups You can set up automatic backups of your clusters with our Cloud UI. With the procedures listed on this page, you can set up snapshots on a daily, weekly, or monthly basis. You can keep as many snapshots as you need, and you can restore a cluster from the snapshot of your choice. > Note: When you restore a snapshot, consider the following: > - The affected cluster is not available while a snapshot is being restored. > - If you changed the cluster setup after the copy was created, the cluster resets to the previous configuration. > - The previous configuration includes: > - CPU > - Memory > - Node count > - Qdrant version ### Configure a backup After you have taken the prerequisite steps, you can configure a backup with the [Qdrant Cloud Dashboard](https://cloud.qdrant.io). To do so, take these steps: 1. Sign in to the dashboard. 1. Select Clusters. 1. Select the cluster that you want to back up. ![Select a cluster](/documentation/cloud/select-cluster.png) 1. Find and select the **Backups** tab. 1. Now you can set up a backup schedule. **Days of Retention** is the number of days after which a backup snapshot is deleted. 1. Alternatively, you can select **Backup now** to take an immediate snapshot. ![Configure a cluster backup](/documentation/cloud/backup-schedule.png) ### Restore a backup If you have a backup, it appears in the list of **Available Backups**. You can choose to restore or delete the backups of your choice. ![Restore or delete a cluster backup](/documentation/cloud/restore-delete.png) <!-- I think we should move this to the Snapshot page, but I'll do it later --> ## Backups with a snapshot Qdrant also offers a snapshot API, which allows you to create a snapshot of a specific collection or your entire cluster. For more information, see our [snapshot documentation](/documentation/concepts/snapshots/). Here is how you can take a snapshot and recover a collection (a minimal Python sketch appears at the end of this page): 1. Take a snapshot: - For a single-node cluster, call the snapshot endpoint on the exposed URL. - For a multi-node cluster, create a snapshot on each node of the collection. Specifically, prepend `node-{num}-` to your cluster URL. Then call the [snapshot endpoint](../../concepts/snapshots/#create-snapshot) on the individual hosts. Start with node 0. - In the response, you'll see the name of the snapshot. 2. Delete and recreate the collection. 3. Recover the snapshot: - Call the [recover endpoint](../../concepts/snapshots/#recover-in-cluster-deployment).
     Set a location that points to the snapshot file (`file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}`) on each host. A minimal Python sketch of this flow appears at the end of this page.

## Backup considerations

Backups are incremental. For example, if you have two backups, backup number 2 contains only the data that changed since backup number 1. This reduces the total cost of your backups.

You can create multiple backup schedules.

When you restore a snapshot, any changes made after the date of the snapshot are lost.
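For reference, the snapshot-based flow described in [Backups with a snapshot](#backups-with-a-snapshot) can also be scripted with the Python client. The following is a minimal sketch, assuming a single-node cluster and a recent `qdrant-client` release; the cluster URL, API key, and collection name are placeholders, not values tied to this page:

```python
from qdrant_client import QdrantClient

# Placeholders: replace with your own cluster URL, API key, and collection name.
CLUSTER_URL = "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333"
API_KEY = "<paste-your-api-key-here>"
COLLECTION = "my_collection"

client = QdrantClient(url=CLUSTER_URL, api_key=API_KEY)

# 1. Take a snapshot of the collection. On a multi-node cluster you would
#    create one client per `node-{num}-...` URL and snapshot each node.
snapshot = client.create_snapshot(collection_name=COLLECTION)
print("Created snapshot:", snapshot.name)

# 2. After deleting and recreating the collection, recover the snapshot
#    from the location where it was stored on that node.
client.recover_snapshot(
    collection_name=COLLECTION,
    location=f"file:///qdrant/snapshots/{COLLECTION}/{snapshot.name}",
)
```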
qdrant-landing/content/documentation/cloud/capacity-sizing.md
---
title: Capacity and sizing
weight: 40
aliases:
  - capacity
---

# Capacity and sizing

We are often asked about the optimal cluster configuration for serving a given number of vectors. The only honest answer is "It depends". It depends on a number of factors and on the options you choose for your collections.

## Basic configuration

If you need to keep all vectors in memory for maximum performance, a very rough formula for estimating the needed memory size looks like this:

```text
memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5
```

The extra 50% is needed for metadata (indexes, point versions, etc.) as well as for temporary segments constructed during the optimization process. A short sizing sketch based on this formula is included at the end of this page.

If you need to store payloads along with the vectors, it is recommended to keep them on disk and only keep [indexed fields](../../concepts/indexing/#payload-index) in RAM. Read more about payload storage in the [Storage](../../concepts/storage/#payload-storage) section.

## Storage focused configuration

If your priority is to serve a large number of vectors with average search latency, it is recommended to configure [mmap storage](../../concepts/storage/#configuring-memmap-storage). In this case, vectors will be stored on disk in memory-mapped files, and only the most frequently used vectors will be kept in RAM.

The amount of available RAM significantly affects search performance. As a rule of thumb, if you keep half as many vectors in RAM, search will be roughly two times slower. The speed of the disks is also important. [Let us know](mailto:cloud@qdrant.io) if you have special requirements for high-volume search.

## Sub-groups oriented configuration

If your use case assumes that the vectors are split into multiple collections or sub-groups based on payload values, it is recommended to configure memory-map storage. For example, you may serve search for multiple users, where each of them has a subset of vectors that they use independently.

In this scenario, only the active subset of vectors will be kept in RAM, which enables fast search for the most active and recent users.

In this case you can estimate the required memory size as follows:

```text
memory_size = number_of_active_vectors * vector_dimension * 4 bytes * 1.5
```

## Disk space

Clusters that support vector search require significant disk space. If you're running low on disk space in your cluster, you can use the UI at [cloud.qdrant.io](https://cloud.qdrant.io/) to **Scale Up** your cluster.

<aside role="status">If you use the Qdrant UI to increase the disk space in your cluster, you cannot decrease that allocation later.</aside>

Additional disk space offers the following advantages:

- Larger Datasets: Supports larger datasets. With vector search, larger datasets can improve the relevance and quality of search results.
- Improved Indexing: Supports the use of indexing strategies such as HNSW (Hierarchical Navigable Small World).
- Caching: Improves speed when you cache frequently accessed data on disk.
- Backups and Redundancy: Allows more frequent backups. Perhaps the most important advantage.
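To make the formulas above concrete, here is a small Python helper that applies the basic rule of thumb. The example numbers (one million 768-dimensional vectors) are purely illustrative:

```python
def estimate_memory_bytes(num_vectors: int, vector_dim: int, overhead: float = 1.5) -> float:
    """Rough estimate for keeping all vectors in RAM: 4 bytes per float32
    dimension, plus ~50% overhead for metadata and temporary segments."""
    return num_vectors * vector_dim * 4 * overhead


# Example: 1 million 768-dimensional vectors kept fully in memory.
estimate_gib = estimate_memory_bytes(1_000_000, 768) / 1024**3
print(f"~{estimate_gib:.1f} GiB of RAM")  # prints: ~4.3 GiB
```

For the sub-groups oriented configuration, substitute the number of *active* vectors for the total count.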
qdrant-landing/content/documentation/cloud/cluster-scaling.md
---
title: Cluster scaling
weight: 50
---

# Cluster scaling

The amount of data is always growing, and at some point you might need to upgrade the capacity of your cluster. There are different ways to do this.

## Vertical scaling

Vertical scaling, also known as vertical expansion, is the process of increasing the capacity of a cluster by adding more resources, such as memory, storage, or processing power.

You can start with a minimal cluster configuration of 2GB of RAM and resize it step by step, up to 64GB of RAM (or even more if desired), as the amount of data in your application grows. If your cluster consists of several nodes, each node will need to be scaled to the same size.

Please note that vertical cluster scaling requires a short downtime period to restart your cluster. To avoid downtime, you can make use of data replication, which can be configured on the collection level. Vertical scaling can be initiated on the cluster detail page via the **Scale Up** button.

## Horizontal scaling

Vertical scaling can be an effective way to improve the performance of a cluster and extend its capacity, but it has some limitations. The main disadvantage of vertical scaling is that there are limits to how much a cluster can be expanded. At some point, adding more resources to a cluster can become impractical or cost-prohibitive.

In such cases, horizontal scaling may be a more effective solution. Horizontal scaling, also known as horizontal expansion, is the process of increasing the capacity of a cluster by adding more nodes and distributing the load and data among them.

In Qdrant, horizontal scaling starts at the collection level: when creating a collection, you choose the number of shards to distribute it across (see the short sketch at the end of this page). Please refer to the [sharding documentation](../../guides/distributed_deployment/#sharding) section for details.

Important: The number of shards is the maximum number of nodes you can add to your cluster. In the beginning, all the shards can reside on one node. As the amount of data grows, you can add nodes to your cluster and move shards to the dedicated nodes using the [cluster setup API](../../guides/distributed_deployment/#cluster-scaling).

We will be glad to consult you on an optimal strategy for scaling. [Let us know](mailto:cloud@qdrant.io) your needs, and we will decide on a proper solution together.

We plan to introduce auto-scaling functionality. Since it is one of the most requested features, it has a high priority on our Cloud roadmap.
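As a concrete illustration of choosing the shard count up front, here is a minimal Python sketch. The collection name, vector size, and shard and replication values are hypothetical; check the sharding documentation linked above for guidance on real values:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Pre-split the collection into 4 shards so the cluster can later grow to
# up to 4 nodes. A replication factor of 2 keeps a copy of every shard on a
# second node, which also helps avoid downtime during vertical scaling.
client.create_collection(
    collection_name="scalable_collection",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    shard_number=4,
    replication_factor=2,
)
```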
qdrant-landing/content/documentation/cloud/create-cluster.md
---
title: Create a cluster
weight: 20
---

# Create a cluster

This page shows you how to use the Qdrant Cloud Console to create a custom Qdrant Cloud cluster.

> **Prerequisite:** Please make sure you have provided billing information before creating a custom cluster.

1. Start in the **Clusters** section of the [Cloud Dashboard](https://cloud.qdrant.io/).
1. Select **Clusters** and then click **+ Create**.
1. In the **Create a cluster** screen, select **Free** or **Standard**. For more information on a free cluster, see the [Cloud quickstart](/documentation/cloud/quickstart-cloud/). The remaining steps assume you want a standard cluster.
1. Select a provider. Currently, you can deploy to:

   - Amazon Web Services (AWS)
   - Google Cloud Platform (GCP)
   - Microsoft Azure

1. Choose your data center region. If you have latency concerns or other topology-related requirements, [**let us know**](mailto:cloud@qdrant.io).
1. Configure RAM for each node (2 GB to 64 GB).

   > For more information, see our [**Capacity and Sizing**](/documentation/cloud/capacity-sizing/) guidance. If you need more capacity per node, [**let us know**](mailto:cloud@qdrant.io).

1. Choose the number of vCPUs per node (0.5 core to 16 cores). If you add more RAM, the menu provides different options for vCPUs.
1. Select the number of nodes you want the cluster to be deployed on.

   > Each node automatically comes with enough attached disk space for your data, even if you decide to put the metadata or the index on disk storage.

1. Select the disk space for your deployment. You can choose from 8 GB to 2 TB.
1. Review your cluster configuration and pricing.
1. When you're ready, select **Create**. It takes some time to provision your cluster.

Once provisioned, you can access your cluster on ports 443 and 6333 (REST) and 6334 (gRPC); see the short connection sketch at the end of this page.

![Cluster configured in the UI](/docs/cloud/create-cluster-test.png)

You should now see the new cluster in the **Clusters** menu. A custom cluster includes the following resources. The values in the table are maximums.

| Resource   | Value (max) |
|------------|-------------|
| RAM        | 64 GB       |
| vCPU       | 16 vCPU     |
| Disk space | 2 TB        |
| Nodes      | 10          |

### Included features (paid)

The features included with this cluster are:

- Dedicated resources
- Backup and disaster recovery
- Horizontal and vertical scaling
- Monitoring and log management

Learn more about these features in the [Qdrant Cloud dashboard](https://cloud.qdrant.io/).

## Next steps

You will need to connect to your new Qdrant Cloud cluster. Follow [**Authentication**](/documentation/cloud/authentication/) to create one or more API keys.

Your new cluster is highly available and responsive to your application requirements and resource load. Read more in [**Cluster Scaling**](/documentation/cloud/cluster-scaling/).
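To illustrate the ports mentioned above, here is a rough Python sketch of connecting to a provisioned cluster over REST and over gRPC. The cluster URL and API key are placeholders; see the quickstart and authentication pages for the full workflow:

```python
from qdrant_client import QdrantClient

# REST interface (port 6333).
rest_client = QdrantClient(
    url="https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
    api_key="<paste-your-api-key-here>",
)

# Same cluster, but preferring the gRPC interface (port 6334).
grpc_client = QdrantClient(
    url="https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
    grpc_port=6334,
    prefer_grpc=True,
    api_key="<paste-your-api-key-here>",
)

print(rest_client.get_collections())
```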
qdrant-landing/content/documentation/cloud/gcp-marketplace.md
---
title: GCP Marketplace
weight: 60
---

# Qdrant Cloud on GCP Marketplace

Our [GCP Marketplace](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant) listing streamlines access to Qdrant for users who rely on the Google Cloud Platform for hosting and application development. While Qdrant's clusters run on GCP, you are using the Qdrant Cloud infrastructure.

## Billing

You don't need a credit card to sign up for Qdrant Cloud. Instead, all billing is processed through the GCP Marketplace. Usage is added to your existing GCP billing, and payment is made through your GCP account.

Our [Billing Calculator](https://cloud.qdrant.io/calculator) can provide more information about costs.

Costs from cloud providers are based on usage. Subscribing to Qdrant on the GCP Marketplace does not add extra charges.

## How to subscribe

1. Go to the [GCP Marketplace listing for Qdrant](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant).
1. Select **Subscribe**. (If you have already subscribed, select **Manage on Provider**.)
1. On the next screen, choose options as required, and select **Subscribe**.
1. On the pop-up window that appears, select **Sign up with Qdrant**.

GCP transfers you to the [Qdrant Cloud](https://cloud.qdrant.io/). The Billing Details screen opens in the Qdrant Cloud Console. If you do not already see a menu, select the "hamburger" icon (with three short horizontal lines) in the upper-left corner of the window.

> **Note:** You do not have to return to GCP. All Qdrant infrastructure is provisioned from the Qdrant Cloud Console.

## Next steps

Now that you have signed up through GCP, please read our instructions to get started:

1. Learn more about how you can [Create a cluster](/documentation/cloud/create-cluster/).
1. Learn how to [Authenticate](/documentation/cloud/authentication/) and access your cluster.
qdrant-landing/content/documentation/cloud/quickstart-cloud.md
---
title: Cloud Quickstart
weight: 10
aliases:
  - ../cloud-quick-start
  - cloud-quick-start
---

# Cloud Quickstart

This page shows you how to use the Qdrant Cloud Console to create a free tier cluster and then connect to it with a Qdrant client.

## Create a Free Tier cluster

1. Start in the **Overview** section of the [Cloud Dashboard](https://cloud.qdrant.io/).
1. Find the dashboard menu in the left-hand pane. If you do not see it, select the icon with three horizontal lines in the upper-left of the screen.
1. Select **Clusters**. On the Clusters page, select **Create**.
1. In the **Create a Cluster** page, select **Free**.
1. Scroll down. Confirm your cluster configuration, and select **Create**.

You should now see your new free tier cluster in the **Clusters** menu. A free tier cluster includes the following resources:

| Resource   | Value |
|------------|-------|
| RAM        | 1 GB  |
| vCPU       | 0.5   |
| Disk space | 4 GB  |
| Nodes      | 1     |

## Get an API key

To use your cluster, you need an API key. Read our documentation on [Cloud Authentication](/documentation/cloud/authentication/) for the process.

## Test cluster access

After creation, you will receive a code snippet to access your cluster. Your generated request should look very similar to this one:

```bash
curl \
  -X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \
  --header 'api-key: <paste-your-api-key-here>'
```

Open a terminal and run the request. You should get a response that looks like this:

```bash
{"title":"qdrant - vector search engine","version":"1.8.1"}
```

> **Note:** You need to include the API key in the request header for every
> request over REST or gRPC.

## Authenticate via SDK

Now that you have created your first cluster and API key, you can access the Qdrant Cloud from within your application.
Our official Qdrant clients for Python, TypeScript, Go, Rust, Java, and .NET all support the API key parameter.

```python
from qdrant_client import QdrantClient

qdrant_client = QdrantClient(
    "xyz-example.eu-central.aws.cloud.qdrant.io",
    api_key="<paste-your-api-key-here>",
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({
  host: "xyz-example.eu-central.aws.cloud.qdrant.io",
  apiKey: "<paste-your-api-key-here>",
});
```

```rust
use qdrant_client::client::QdrantClient;

let client = QdrantClient::from_url("xyz-example.eu-central.aws.cloud.qdrant.io:6334")
    .with_api_key("<paste-your-api-key-here>")
    .build()
    .unwrap();
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;

QdrantClient client =
    new QdrantClient(
        QdrantGrpcClient.newBuilder(
                "xyz-example.eu-central.aws.cloud.qdrant.io",
                6334,
                true)
            .withApiKey("<paste-your-api-key-here>")
            .build());
```

```csharp
using Qdrant.Client;

var client = new QdrantClient(
  host: "xyz-example.eu-central.aws.cloud.qdrant.io",
  https: true,
  apiKey: "<paste-your-api-key-here>"
);
```
qdrant-landing/content/documentation/concepts/_index.md
---
title: Concepts
weight: 21
# If the index.md file is empty, the link to the section will be hidden from the sidebar
---

# Concepts

Think of these concepts as a glossary. Each of these concepts includes a link to detailed information, usually with examples. If you're new to AI, these concepts can help you learn more about AI and the Qdrant approach.

## Collections

[Collections](/documentation/concepts/collections/) define a named set of points that you can use for your search.

## Payload

A [Payload](/documentation/concepts/payload/) describes information that you can store with vectors.

## Points

[Points](/documentation/concepts/points/) are records that consist of a vector and an optional payload.

## Search

[Search](/documentation/concepts/search/) describes _similarity search_, which finds the stored objects closest to a given query in vector space.

## Explore

[Explore](/documentation/concepts/explore/) includes several APIs for exploring data in your collections.

## Filtering

[Filtering](/documentation/concepts/filtering/) defines various database-style clauses, conditions, and more.

## Optimizer

[Optimizer](/documentation/concepts/optimizer/) describes options to rebuild database structures for faster search. They include a vacuum, a merge, and an indexing optimizer.

## Storage

[Storage](/documentation/concepts/storage/) describes the configuration of storage in segments, which include indexes and an ID mapper.

## Indexing

[Indexing](/documentation/concepts/indexing/) lists and describes available indexes. They include payload, vector, sparse vector, and filterable indexes.

## Snapshots

[Snapshots](/documentation/concepts/snapshots/) describe the backup/restore process (and more) for each node at specific times.
qdrant-landing/content/documentation/concepts/collections.md
---
title: Collections
weight: 30
aliases:
  - ../collections
  - /concepts/collections/
  - /documentation/frameworks/fondant/documentation/concepts/collections/
---

# Collections

A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have its own dimensionality and metric requirements.

Distance metrics are used to measure similarities among vectors. The choice of metric depends on how the vectors were obtained and, in particular, on the method used to train the neural network encoder. Qdrant supports these most popular metrics:

* Dot product: `Dot` - [[wiki]](https://en.wikipedia.org/wiki/Dot_product)
* Cosine similarity: `Cosine` - [[wiki]](https://en.wikipedia.org/wiki/Cosine_similarity)
* Euclidean distance: `Euclid` - [[wiki]](https://en.wikipedia.org/wiki/Euclidean_distance)
* Manhattan distance: `Manhattan` - [[wiki]](https://en.wikipedia.org/wiki/Taxicab_geometry)

<aside role="status">For search efficiency, Cosine similarity is implemented as dot-product over normalized vectors. Vectors are automatically normalized during upload.</aside>

In addition to metrics and vector size, each collection uses its own set of parameters that control collection optimization, index construction, and vacuum. These settings can be changed at any time by a corresponding request.

## Setting up multitenancy

**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called [multitenancy](https://en.wikipedia.org/wiki/Multitenancy). It is efficient for most users, but it requires additional configuration. [Learn how to set it up](../../tutorials/multiple-partitions/), or see the short sketch below.

**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that collections do not affect each other in any way, including performance-wise.
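A rough sketch of the payload-based partitioning approach described above, using the Python client. The collection name, point ID, vector values, and the `group_id` field are illustrative only; the full setup is covered in the linked tutorial:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# One shared collection (assumed to already exist with 4-dimensional vectors);
# every point is tagged with its tenant in the payload.
client.upsert(
    collection_name="shared_collection",
    points=[
        models.PointStruct(
            id=1,
            vector=[0.1, 0.2, 0.3, 0.4],
            payload={"group_id": "user_1"},
        ),
    ],
)

# Every query is restricted to a single tenant with a payload filter.
client.search(
    collection_name="shared_collection",
    query_vector=[0.1, 0.2, 0.3, 0.4],
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="group_id",
                match=models.MatchValue(value="user_1"),
            )
        ]
    ),
    limit=3,
)
```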
## Create a collection ```http PUT /collections/{collection_name} { "vectors": { "size": 300, "distance": "Cosine" } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "size": 300, "distance": "Cosine" } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 100, distance: "Cosine" }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; //The Rust client uses Qdrant's GRPC interface let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 100, distance: Distance::Cosine.into(), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.createCollectionAsync("{collection_name}", VectorParams.newBuilder().setDistance(Distance.Cosine).setSize(100).build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine } ); ``` In addition to the required options, you can also specify custom values for the following collection options: * `hnsw_config` - see [indexing](../indexing/#vector-index) for details. * `wal_config` - Write-Ahead-Log related configuration. See more details about [WAL](../storage/#versioning) * `optimizers_config` - see [optimizer](../optimizer/) for details. * `shard_number` - which defines how many shards the collection should have. See [distributed deployment](../../guides/distributed_deployment/#sharding) section for details. * `on_disk_payload` - defines where to store payload data. If `true` - payload will be stored on disk only. Might be useful for limiting the RAM usage in case of large payload. * `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details. Default parameters for the optional collection parameters are defined in [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml). See [schema definitions](https://api.qdrant.tech/api-reference/collections/create-collection) and a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) for more information about collection and vector parameters. *Available as of v1.2.0* Vectors all live in RAM for very quick access. The `on_disk` parameter can be set in the vector configuration. If true, all vectors will live on disk. 
This will enable the use of [memmaps](../../concepts/storage/#configuring-memmap-storage), which is suitable for ingesting a large amount of data. ### Create collection from another collection *Available as of v1.0.0* It is possible to initialize a collection from another existing collection. This might be useful for experimenting quickly with different configurations for the same data set. Make sure the vectors have the same `size` and `distance` function when setting up the vectors configuration in the new collection. If you used the previous sample code, `"size": 300` and `"distance": "Cosine"`. ```http PUT /collections/{collection_name} { "vectors": { "size": 100, "distance": "Cosine" }, "init_from": { "collection": "{from_collection_name}" } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "size": 300, "distance": "Cosine" }, "init_from": { "collection": {from_collection_name} } }' ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE), init_from=models.InitFrom(collection="{from_collection_name}"), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 100, distance: "Cosine" }, init_from: { collection: "{from_collection_name}" }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 100, distance: Distance::Cosine.into(), ..Default::default() })), }), init_from_collection: Some("{from_collection_name}".to_string()), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(100) .setDistance(Distance.Cosine) .build())) .setInitFromCollection("{from_collection_name}") .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine }, initFromCollection: "{from_collection_name}" ); ``` ### Collection with multiple vectors *Available as of v0.10.0* It is possible to have multiple vectors per record. This feature allows for multiple vector storages per collection. To distinguish vectors in one record, they should have a unique name defined when creating the collection. 
Each named vector in this mode has its distance and size: ```http PUT /collections/{collection_name} { "vectors": { "image": { "size": 4, "distance": "Dot" }, "text": { "size": 8, "distance": "Cosine" } } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "image": { "size": 4, "distance": "Dot" }, "text": { "size": 8, "distance": "Cosine" } } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_collection( collection_name="{collection_name}", vectors_config={ "image": models.VectorParams(size=4, distance=models.Distance.DOT), "text": models.VectorParams(size=8, distance=models.Distance.COSINE), }, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { image: { size: 4, distance: "Dot" }, text: { size: 8, distance: "Cosine" }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, VectorParams, VectorParamsMap, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::ParamsMap(VectorParamsMap { map: [ ( "image".to_string(), VectorParams { size: 4, distance: Distance::Dot.into(), ..Default::default() }, ), ( "text".to_string(), VectorParams { size: 8, distance: Distance::Cosine.into(), ..Default::default() }, ), ] .into(), })), }), ..Default::default() }) .await?; ``` ```java import java.util.Map; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( "{collection_name}", Map.of( "image", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(), "text", VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParamsMap { Map = { ["image"] = new VectorParams { Size = 4, Distance = Distance.Dot }, ["text"] = new VectorParams { Size = 8, Distance = Distance.Cosine }, } } ); ``` For rare use cases, it is possible to create a collection without any vector storage. *Available as of v1.1.1* For each named vector you can optionally specify [`hnsw_config`](../indexing/#vector-index) or [`quantization_config`](../../guides/quantization/#setting-up-quantization-in-qdrant) to deviate from the collection configuration. This can be useful to fine-tune search performance on a vector level. *Available as of v1.2.0* Vectors all live in RAM for very quick access. On a per-vector basis you can set `on_disk` to true to store all vectors on disk at all times. This will enable the use of [memmaps](../../concepts/storage/#configuring-memmap-storage), which is suitable for ingesting a large amount of data. ### Vector datatypes *Available as of v1.9.0* Some embedding providers may provide embeddings in a pre-quantized format. 
One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings). Qdrant has direct support for uint8 embeddings, which you can also use in combination with binary quantization. To create a collection with uint8 embeddings, you can use the following configuration: ```http PUT /collections/{collection_name} { "vectors": { "size": 1024, "distance": "Cosine", "datatype": "uint8" } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "size": 1024, "distance": "Cosine", "datatype": "uint8" } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams( size=1024, distance=models.Distance.COSINE, datatype=models.Datatype.UINT8, ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { image: { size: 1024, distance: "Cosine", datatype: "uint8" }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Datatype, Distance, VectorParams, VectorsConfig}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; qdrant_client .create_collection(&CreateCollection { collection_name: "{collection_name}".into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 1024, distance: Distance::Cosine.into(), datatype: Some(Datatype::Uint8.into()), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.grpc.Collections.Datatype; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync("{collection_name}", VectorParams.newBuilder() .setSize(1024) .setDistance(Distance.Cosine) .setDatatype(Datatype.Uint8) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 1024, Distance = Distance.Cosine, Datatype = Datatype.Uint8 } ); ``` Vectors with `uint8` datatype are stored in a more compact format, which can save memory and improve search speed at the cost of some precision. If you choose to use the `uint8` datatype, elements of the vector will be stored as unsigned 8-bit integers, which can take values **from 0 to 255**. ### Collection with sparse vectors *Available as of v1.7.0* Qdrant supports sparse vectors as a first-class citizen. Sparse vectors are useful for text search, where each word is represented as a separate dimension. Collections can contain sparse vectors as additional [named vectors](#collection-with-multiple-vectors) along side regular dense vectors in a single point. Unlike dense vectors, sparse vectors must be named. And additionally, sparse vectors and dense vectors must have different names within a collection. 
```http PUT /collections/{collection_name} { "sparse_vectors": { "text": { }, } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "sparse_vectors": { "text": { } } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_collection( collection_name="{collection_name}", sparse_vectors_config={ "text": models.SparseVectorParams(), }, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { sparse_vectors: { text: { }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{CreateCollection, SparseVectorConfig, SparseVectorParams}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), sparse_vectors_config: Some(SparseVectorConfig { map: [( "text".to_string(), SparseVectorParams { ..Default::default() }, )] .into(), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap("text", SparseVectorParams.getDefaultInstance())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", sparseVectorsConfig: ("text", new SparseVectorParams()) ); ``` Outside of a unique name, there are no required configuration parameters for sparse vectors. The distance function for sparse vectors is always `Dot` and does not need to be specified. However, there are optional parameters to tune the underlying [sparse vector index](../indexing/#sparse-vector-index). 
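To illustrate how such a sparse vector collection might be used afterwards, here is a rough Python sketch. The point ID, indices, and values are arbitrary example data, and the exact client API may differ slightly between versions:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Upsert a point whose sparse "text" vector stores only non-zero dimensions.
client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "text": models.SparseVector(
                    indices=[6, 57, 1024],
                    values=[0.71, 0.34, 0.82],
                ),
            },
        )
    ],
)

# Query the sparse vector by name; scoring is always a dot product.
client.search(
    collection_name="{collection_name}",
    query_vector=models.NamedSparseVector(
        name="text",
        vector=models.SparseVector(indices=[6, 57], values=[1.0, 1.0]),
    ),
    limit=3,
)
```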
### Check collection existence *Available as of v1.8.0* ```http GET http://localhost:6333/collections/{collection_name}/exists ``` ```bash curl -X GET http://localhost:6333/collections/{collection_name}/exists ``` ```python client.collection_exists(collection_name="{collection_name}") ``` ```typescript client.collectionExists("{collection_name}"); ``` ```rust client.collection_exists("{collection_name}").await?; ``` ```java client.collectionExistsAsync("{collection_name}").get(); ``` ```csharp await client.CollectionExistsAsync("{collection_name}"); ``` ### Delete collection ```http DELETE http://localhost:6333/collections/{collection_name} ``` ```bash curl -X DELETE http://localhost:6333/collections/{collection_name} ``` ```python client.delete_collection(collection_name="{collection_name}") ``` ```typescript client.deleteCollection("{collection_name}"); ``` ```rust client.delete_collection("{collection_name}").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.deleteCollectionAsync("{collection_name}").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.DeleteCollectionAsync("{collection_name}"); ``` ### Update collection parameters Dynamic parameter updates may be helpful, for example, for more efficient initial loading of vectors. For example, you can disable indexing during the upload process, and enable it immediately after the upload is finished. As a result, you will not waste extra computation resources on rebuilding the index. The following command enables indexing for segments that have more than 10000 kB of vectors stored: ```http PATCH /collections/{collection_name} { "optimizers_config": { "indexing_threshold": 10000 } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "optimizers_config": { "indexing_threshold": 10000 } }' ``` ```python client.update_collection( collection_name="{collection_name}", optimizer_config=models.OptimizersConfigDiff(indexing_threshold=10000), ) ``` ```typescript client.updateCollection("{collection_name}", { optimizers_config: { indexing_threshold: 10000, }, }); ``` ```rust use qdrant_client::qdrant::OptimizersConfigDiff; client .update_collection( "{collection_name}", Some(&OptimizersConfigDiff { indexing_threshold: Some(10000), ..Default::default() }), None, None, None, None, None, ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.UpdateCollection; client.updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName("{collection_name}") .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setIndexingThreshold(10000).build()) .build()); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpdateCollectionAsync( collectionName: "{collection_name}", optimizersConfig: new OptimizersConfigDiff { IndexingThreshold = 10000 } ); ``` The following parameters can be updated: * `optimizers_config` - see [optimizer](../optimizer/) for details. * `hnsw_config` - see [indexing](../indexing/#vector-index) for details. * `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details. 
* `vectors` - vector-specific configuration, including individual `hnsw_config`, `quantization_config` and `on_disk` settings. * `params` - other collection parameters, including `write_consistency_factor` and `on_disk_payload`. Full API specification is available in [schema definitions](https://api.qdrant.tech/api-reference/collections/update-collection). Calls to this endpoint may be blocking as it waits for existing optimizers to finish. We recommended against using this in a production database as it may introduce huge overhead due to the rebuilding of the index. #### Update vector parameters *Available as of v1.4.0* <aside role="status">To update vector parameters using the collection update API, you must always specify a vector name. If your collection does not have named vectors, use an empty (<code>""</code>) name.</aside> Qdrant 1.4 adds support for updating more collection parameters at runtime. HNSW index, quantization and disk configurations can now be changed without recreating a collection. Segments (with index and quantized data) will automatically be rebuilt in the background to match updated parameters. To put vector data on disk for a collection that **does not have** named vectors, use `""` as name: ```http PATCH /collections/{collection_name} { "vectors": { "": { "on_disk": true } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "": { "on_disk": true } } }' ``` To put vector data on disk for a collection that **does have** named vectors: Note: To create a vector name, follow the procedure from our [Points](/documentation/concepts/points/#create-vector-name). ```http PATCH /collections/{collection_name} { "vectors": { "my_vector": { "on_disk": true } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "my_vector": { "on_disk": true } } }' ``` In the following example the HNSW index and quantization parameters are updated, both for the whole collection, and for `my_vector` specifically: ```http PATCH /collections/{collection_name} { "vectors": { "my_vector": { "hnsw_config": { "m": 32, "ef_construct": 123 }, "quantization_config": { "product": { "compression": "x32", "always_ram": true } }, "on_disk": true } }, "hnsw_config": { "ef_construct": 123 }, "quantization_config": { "scalar": { "type": "int8", "quantile": 0.8, "always_ram": false } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "my_vector": { "hnsw_config": { "m": 32, "ef_construct": 123 }, "quantization_config": { "product": { "compression": "x32", "always_ram": true } }, "on_disk": true } }, "hnsw_config": { "ef_construct": 123 }, "quantization_config": { "scalar": { "type": "int8", "quantile": 0.8, "always_ram": false } } }' ``` ```python client.update_collection( collection_name="{collection_name}", vectors_config={ "my_vector": models.VectorParamsDiff( hnsw_config=models.HnswConfigDiff( m=32, ef_construct=123, ), quantization_config=models.ProductQuantization( product=models.ProductQuantizationConfig( compression=models.CompressionRatio.X32, always_ram=True, ), ), on_disk=True, ), }, hnsw_config=models.HnswConfigDiff( ef_construct=123, ), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, quantile=0.8, always_ram=False, ), ), ) ``` ```typescript 
client.updateCollection("{collection_name}", { vectors: { my_vector: { hnsw_config: { m: 32, ef_construct: 123, }, quantization_config: { product: { compression: "x32", always_ram: true, }, }, on_disk: true, }, }, hnsw_config: { ef_construct: 123, }, quantization_config: { scalar: { type: "int8", quantile: 0.8, always_ram: true, }, }, }); ``` ```rust use qdrant_client::client::QdrantClient; use qdrant_client::qdrant::{ quantization_config_diff::Quantization, vectors_config_diff::Config, HnswConfigDiff, QuantizationConfigDiff, QuantizationType, ScalarQuantization, VectorParamsDiff, VectorsConfigDiff, }; client .update_collection( "{collection_name}", None, None, None, Some(&HnswConfigDiff { ef_construct: Some(123), ..Default::default() }), Some(&VectorsConfigDiff { config: Some(Config::ParamsMap( qdrant_client::qdrant::VectorParamsDiffMap { map: HashMap::from([( ("my_vector".into()), VectorParamsDiff { hnsw_config: Some(HnswConfigDiff { m: Some(32), ef_construct: Some(123), ..Default::default() }), ..Default::default() }, )]), }, )), }), Some(&QuantizationConfigDiff { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8 as i32, quantile: Some(0.8), always_ram: Some(true), ..Default::default() })), }), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.UpdateCollection; import io.qdrant.client.grpc.Collections.VectorParamsDiff; import io.qdrant.client.grpc.Collections.VectorParamsDiffMap; import io.qdrant.client.grpc.Collections.VectorsConfigDiff; client .updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName("{collection_name}") .setHnswConfig(HnswConfigDiff.newBuilder().setEfConstruct(123).build()) .setVectorsConfig( VectorsConfigDiff.newBuilder() .setParamsMap( VectorParamsDiffMap.newBuilder() .putMap( "my_vector", VectorParamsDiff.newBuilder() .setHnswConfig( HnswConfigDiff.newBuilder() .setM(3) .setEfConstruct(123) .build()) .build()))) .setQuantizationConfig( QuantizationConfigDiff.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setQuantile(0.8f) .setAlwaysRam(true) .build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpdateCollectionAsync( collectionName: "{collection_name}", hnswConfig: new HnswConfigDiff { EfConstruct = 123 }, vectorsConfig: new VectorParamsDiffMap { Map = { { "my_vector", new VectorParamsDiff { HnswConfig = new HnswConfigDiff { M = 3, EfConstruct = 123 } } } } }, quantizationConfig: new QuantizationConfigDiff { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, Quantile = 0.8f, AlwaysRam = true } } ); ``` ## Collection info Qdrant allows determining the configuration parameters of an existing collection to better understand how the points are distributed and indexed. 
```http GET /collections/{collection_name} ``` ```bash curl -X GET http://localhost:6333/collections/{collection_name} ``` ```python client.get_collection(collection_name="{collection_name}") ``` ```typescript client.getCollection("{collection_name}"); ``` ```rust client.collection_info("{collection_name}").await?; ``` ```java client.getCollectionInfoAsync("{collection_name}").get(); ``` ```csharp await client.GetCollectionInfoAsync("{collection_name}"); ``` <details> <summary>Expected result</summary> ```json { "result": { "status": "green", "optimizer_status": "ok", "vectors_count": 1068786, "indexed_vectors_count": 1024232, "points_count": 1068786, "segments_count": 31, "config": { "params": { "vectors": { "size": 384, "distance": "Cosine" }, "shard_number": 1, "replication_factor": 1, "write_consistency_factor": 1, "on_disk_payload": false }, "hnsw_config": { "m": 16, "ef_construct": 100, "full_scan_threshold": 10000, "max_indexing_threads": 0 }, "optimizer_config": { "deleted_threshold": 0.2, "vacuum_min_vector_number": 1000, "default_segment_number": 0, "max_segment_size": null, "memmap_threshold": null, "indexing_threshold": 20000, "flush_interval_sec": 5, "max_optimization_threads": 1 }, "wal_config": { "wal_capacity_mb": 32, "wal_segments_ahead": 0 } }, "payload_schema": {} }, "status": "ok", "time": 0.00010143 } ``` </details> <br/> If you insert the vectors into the collection, the `status` field may become `yellow` whilst it is optimizing. It will become `green` once all the points are successfully processed. The following color statuses are possible: - 🟢 `green`: collection is ready - 🟡 `yellow`: collection is optimizing - ⚫ `grey`: collection is pending optimization ([help](#grey-collection-status)) - 🔴 `red`: an error occurred which the engine could not recover from ### Grey collection status _Available as of v1.9.0_ A collection may have the grey ⚫ status or show "optimizations pending, awaiting update operation" as optimization status. This state is normally caused by restarting a Qdrant instance while optimizations were ongoing. It means the collection has optimizations pending, but they are paused. You must send any update operation to trigger and start the optimizations again. 
For example: ```http PATCH /collections/{collection_name} { "optimizers_config": {} } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ "optimizers_config": {} }' ``` ```python client.update_collection( collection_name="{collection_name}", optimizer_config=models.OptimizersConfigDiff(), ) ``` ```typescript client.updateCollection("{collection_name}", { optimizers_config: {}, }); ``` ```rust use qdrant_client::qdrant::OptimizersConfigDiff; client .update_collection( "{collection_name}", Some(&OptimizersConfigDiff::default()), None, None, None, None, None, ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.UpdateCollection; client.updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName("{collection_name}") .setOptimizersConfig( OptimizersConfigDiff.getDefaultInstance()) .build()); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpdateCollectionAsync( collectionName: "{collection_name}", optimizersConfig: new OptimizersConfigDiff { } ); ``` ### Approximate point and vector counts You may be interested in the count attributes: - `points_count` - total number of objects (vectors and their payloads) stored in the collection - `vectors_count` - total number of vectors in a collection, useful if you have multiple vectors per point - `indexed_vectors_count` - total number of vectors stored in the HNSW or sparse index. Qdrant does not store all the vectors in the index, but only if an index segment might be created for a given configuration. The above counts are not exact, but should be considered approximate. Depending on how you use Qdrant these may give very different numbers than what you may expect. It's therefore important **not** to rely on them. More specifically, these numbers represent the count of points and vectors in Qdrant's internal storage. Internally, Qdrant may temporarily duplicate points as part of automatic optimizations. It may keep changed or deleted points for a bit. And it may delay indexing of new points. All of that is for optimization reasons. Updates you do are therefore not directly reflected in these numbers. If you see a wildly different count of points, it will likely resolve itself once a new round of automatic optimizations has completed. To clarify: these numbers don't represent the exact amount of points or vectors you have inserted, nor does it represent the exact number of distinguishable points or vectors you can query. If you want to know exact counts, refer to the [count API](../points/#counting-points). _Note: these numbers may be removed in a future version of Qdrant._ ### Indexing vectors in HNSW In some cases, you might be surprised the value of `indexed_vectors_count` is lower than `vectors_count`. This is an intended behaviour and depends on the [optimizer configuration](../optimizer/). A new index segment is built if the size of non-indexed vectors is higher than the value of `indexing_threshold`(in kB). If your collection is very small or the dimensionality of the vectors is low, there might be no HNSW segment created and `indexed_vectors_count` might be equal to `0`. It is possible to reduce the `indexing_threshold` for an existing collection by [updating collection parameters](#update-collection-parameters). 
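If you need reliable numbers rather than the approximate counters described above, the [count API](../points/#counting-points) computes them on request. A minimal Python sketch of the difference (the collection name is a placeholder):

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Approximate counters from the collection info; they may lag behind updates.
info = client.get_collection(collection_name="{collection_name}")
print(info.points_count, info.vectors_count, info.indexed_vectors_count)

# Exact number of points, computed on request.
result = client.count(collection_name="{collection_name}", exact=True)
print(result.count)
```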
## Collection aliases In a production environment, it is sometimes necessary to switch different versions of vectors seamlessly. For example, when upgrading to a new version of the neural network. There is no way to stop the service and rebuild the collection with new vectors in these situations. Aliases are additional names for existing collections. All queries to the collection can also be done identically, using an alias instead of the collection name. Thus, it is possible to build a second collection in the background and then switch alias from the old to the new collection. Since all changes of aliases happen atomically, no concurrent requests will be affected during the switch. ### Create alias ```http POST /collections/aliases { "actions": [ { "create_alias": { "collection_name": "example_collection", "alias_name": "production_collection" } } ] } ``` ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ "actions": [ { "create_alias": { "collection_name": "example_collection", "alias_name": "production_collection" } } ] }' ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.CreateAliasOperation( create_alias=models.CreateAlias( collection_name="example_collection", alias_name="production_collection" ) ) ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { create_alias: { collection_name: "example_collection", alias_name: "production_collection", }, }, ], }); ``` ```rust client.create_alias("example_collection", "production_collection").await?; ``` ```java client.createAliasAsync("production_collection", "example_collection").get(); ``` ```csharp await client.CreateAliasAsync(aliasName: "production_collection", collectionName: "example_collection"); ``` ### Remove alias ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ "actions": [ { "delete_alias": { "alias_name": "production_collection" } } ] }' ``` ```http POST /collections/aliases { "actions": [ { "delete_alias": { "alias_name": "production_collection" } } ] } ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.DeleteAliasOperation( delete_alias=models.DeleteAlias(alias_name="production_collection") ), ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { delete_alias: { alias_name: "production_collection", }, }, ], }); ``` ```rust client.delete_alias("production_collection").await?; ``` ```java client.deleteAliasAsync("production_collection").get(); ``` ```csharp await client.DeleteAliasAsync("production_collection"); ``` ### Switch collection Multiple alias actions are performed atomically. 
For example, you can switch underlying collection with the following command: ```http POST /collections/aliases { "actions": [ { "delete_alias": { "alias_name": "production_collection" } }, { "create_alias": { "collection_name": "example_collection", "alias_name": "production_collection" } } ] } ``` ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ "actions": [ { "delete_alias": { "alias_name": "production_collection" } }, { "create_alias": { "collection_name": "example_collection", "alias_name": "production_collection" } } ] }' ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.DeleteAliasOperation( delete_alias=models.DeleteAlias(alias_name="production_collection") ), models.CreateAliasOperation( create_alias=models.CreateAlias( collection_name="example_collection", alias_name="production_collection" ) ), ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { delete_alias: { alias_name: "production_collection", }, }, { create_alias: { collection_name: "example_collection", alias_name: "production_collection", }, }, ], }); ``` ```rust client.delete_alias("production_collection").await?; client.create_alias("example_collection", "production_collection").await?; ``` ```java client.deleteAliasAsync("production_collection").get(); client.createAliasAsync("production_collection", "example_collection").get(); ``` ```csharp await client.DeleteAliasAsync("production_collection"); await client.CreateAliasAsync(aliasName: "production_collection", collectionName: "example_collection"); ``` ### List collection aliases ```http GET /collections/{collection_name}/aliases ``` ```bash curl -X GET http://localhost:6333/collections/{collection_name}/aliases ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") client.get_collection_aliases(collection_name="{collection_name}") ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.getCollectionAliases("{collection_name}"); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_collection_aliases("{collection_name}").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listCollectionAliasesAsync("{collection_name}").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ListCollectionAliasesAsync("{collection_name}"); ``` ### List all aliases ```http GET /aliases ``` ```bash curl -X GET http://localhost:6333/aliases ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") client.get_aliases() ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.getAliases(); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_aliases().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listAliasesAsync().get(); ``` ```csharp using Qdrant.Client; var client = new 
QdrantClient("localhost", 6334); await client.ListAliasesAsync(); ``` ### List all collections ```http GET /collections ``` ```bash curl -X GET http://localhost:6333/collections ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") client.get_collections() ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.getCollections(); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_collections().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listCollectionsAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ListCollectionsAsync(); ```
qdrant-landing/content/documentation/concepts/explore.md
--- title: Explore weight: 55 aliases: - ../explore --- # Explore the data After mastering the concepts in [search](../search/), you can start exploring your data in other ways. Qdrant provides a stack of APIs that allow you to find similar vectors in a different fashion, as well as to find the most dissimilar ones. These are useful tools for recommendation systems, data exploration, and data cleaning. ## Recommendation API In addition to the regular search, Qdrant also allows you to search based on multiple positive and negative examples. The API is called ***recommend***, and the examples can be point IDs, so that you can leverage the already encoded objects; and, as of v1.6, you can also use raw vectors as input, so that you can create your vectors on the fly without uploading them as points. REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/recommend-points) ```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718, [0.2, 0.3, 0.4, 0.5]], "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "strategy": "average_vector", "limit": 3 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.recommend( collection_name="{collection_name}", positive=[100, 231], negative=[718, [0.2, 0.3, 0.4, 0.5]], strategy=models.RecommendStrategy.AVERAGE_VECTOR, query_filter=models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue( value="London", ), ) ] ), limit=3, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.recommend("{collection_name}", { positive: [100, 231], negative: [718, [0.2, 0.3, 0.4, 0.5]], strategy: "average_vector", filter: { must: [ { key: "city", match: { value: "London", }, }, ], }, limit: 3, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, RecommendPoints, RecommendStrategy}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .recommend(&RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], negative_vectors: vec![vec![0.2, 0.3, 0.4, 0.5].into()], strategy: Some(RecommendStrategy::AverageVector.into()), filter: Some(Filter::must([Condition::matches( "city", "London".to_string(), )])), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.RecommendPoints; import io.qdrant.client.grpc.Points.RecommendStrategy; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .recommendAsync( RecommendPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .addAllNegativeVectors(List.of(vector(0.2f, 0.3f, 0.4f, 0.5f))) .setStrategy(RecommendStrategy.AverageVector) .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London"))) .setLimit(3) .build()) .get(); ```
```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.RecommendAsync( "{collection_name}", positive: new ulong[] { 100, 231 }, negative: new ulong[] { 718 }, filter: MatchKeyword("city", "London"), limit: 3 ); ``` Example result of this API would be ```json { "result": [ { "id": 10, "score": 0.81 }, { "id": 14, "score": 0.75 }, { "id": 11, "score": 0.73 } ], "status": "ok", "time": 0.001 } ``` The algorithm used to get the recommendations is selected from the available `strategy` options. Each of them has its own strengths and weaknesses, so experiment and choose the one that works best for your case. ### Average vector strategy The default and first strategy added to Qdrant is called `average_vector`. It preprocesses the input examples to create a single vector that is used for the search. Since the preprocessing step happens very fast, the performance of this strategy is on-par with regular search. The intuition behind this kind of recommendation is that each vector component represents an independent feature of the data, so, by averaging the examples, we should get a good recommendation. The way to produce the searching vector is by first averaging all the positive and negative examples separately, and then combining them into a single vector using the following formula: ```rust avg_positive + avg_positive - avg_negative ``` In the case of not having any negative examples, the search vector will simply be equal to `avg_positive`. This is the default strategy that's going to be set implicitly, but you can explicitly define it by setting `"strategy": "average_vector"` in the recommendation request. ### Best score strategy *Available as of v1.6.0* A new strategy introduced in v1.6, is called `best_score`. It is based on the idea that the best way to find similar vectors is to find the ones that are closer to a positive example, while avoiding the ones that are closer to a negative one. The way it works is that each candidate is measured against every example, then we select the best positive and best negative scores. The final score is chosen with this step formula: ```rust let score = if best_positive_score > best_negative_score { best_positive_score; } else { -(best_negative_score * best_negative_score); }; ``` <aside role="alert"> The performance of <code>best_score</code> strategy will be linearly impacted by the amount of examples. </aside> Since we are computing similarities to every example at each step of the search, the performance of this strategy will be linearly impacted by the amount of examples. This means that the more examples you provide, the slower the search will be. However, this strategy can be very powerful and should be more embedding-agnostic. <aside role="status"> Accuracy may be impacted with this strategy. To improve it, increasing the <code>ef</code> search parameter to something above 32 will already be much better than the default 16, e.g: <code>"params": { "ef": 64 }</code> </aside> To use this algorithm, you need to set `"strategy": "best_score"` in the recommendation request. #### Using only negative examples A beneficial side-effect of `best_score` strategy is that you can use it with only negative examples. This will allow you to find the most dissimilar vectors to the ones you provide. This can be useful for finding outliers in your data, or for finding the most dissimilar vectors to a given one. 
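For instance, a negative-only request might look like the following (a minimal sketch with the Python client; the point id and collection name are placeholders, and `positive` is simply left empty):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# With best_score and no positive examples, the points most dissimilar
# to point 718 are returned first.
client.recommend(
    collection_name="{collection_name}",
    positive=[],
    negative=[718],
    strategy=models.RecommendStrategy.BEST_SCORE,
    limit=10,
)
```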
Combining negative-only examples with filtering can be a powerful tool for data exploration and cleaning. ### Multiple vectors *Available as of v0.10.0* If the collection was created with multiple vectors, the name of the vector should be specified in the recommendation request: ```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718], "using": "image", "limit": 10 } ``` ```python client.recommend( collection_name="{collection_name}", positive=[100, 231], negative=[718], using="image", limit=10, ) ``` ```typescript client.recommend("{collection_name}", { positive: [100, 231], negative: [718], using: "image", limit: 10, }); ``` ```rust use qdrant_client::qdrant::RecommendPoints; client .recommend(&RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], using: Some("image".to_string()), limit: 10, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.RecommendPoints; client .recommendAsync( RecommendPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .setUsing("image") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.RecommendAsync( collectionName: "{collection_name}", positive: new ulong[] { 100, 231 }, negative: new ulong[] { 718 }, usingVector: "image", limit: 10 ); ``` Parameter `using` specifies which stored vectors to use for the recommendation. ### Lookup vectors from another collection *Available as of v0.11.6* If you have collections with vectors of the same dimensionality, and you want to look for recommendations in one collection based on the vectors of another collection, you can use the `lookup_from` parameter. It might be useful, e.g. in the item-to-user recommendations scenario. Where user and item embeddings, although having the same vector parameters (distance type and dimensionality), are usually stored in different collections. 
```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718], "using": "image", "limit": 10, "lookup_from": { "collection":"{external_collection_name}", "vector":"{external_vector_name}" } } ``` ```python client.recommend( collection_name="{collection_name}", positive=[100, 231], negative=[718], using="image", limit=10, lookup_from=models.LookupLocation( collection="{external_collection_name}", vector="{external_vector_name}" ), ) ``` ```typescript client.recommend("{collection_name}", { positive: [100, 231], negative: [718], using: "image", limit: 10, lookup_from: { "collection" : "{external_collection_name}", "vector" : "{external_vector_name}" }, }); ``` ```rust use qdrant_client::qdrant::{LookupLocation, RecommendPoints}; client .recommend(&RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], using: Some("image".to_string()), limit: 10, lookup_from: Some(LookupLocation { collection_name: "{external_collection_name}".to_string(), vector_name: Some("{external_vector_name}".to_string()), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.LookupLocation; import io.qdrant.client.grpc.Points.RecommendPoints; client .recommendAsync( RecommendPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .setUsing("image") .setLimit(10) .setLookupFrom( LookupLocation.newBuilder() .setCollectionName("{external_collection_name}") .setVectorName("{external_vector_name}") .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.RecommendAsync( collectionName: "{collection_name}", positive: new ulong[] { 100, 231 }, negative: new ulong[] { 718 }, usingVector: "image", limit: 10, lookupFrom: new LookupLocation { CollectionName = "{external_collection_name}", VectorName = "{external_vector_name}", } ); ``` Vectors are retrieved from the external collection by ids provided in the `positive` and `negative` lists. These vectors then used to perform the recommendation in the current collection, comparing against the "using" or default vector. ## Batch recommendation API *Available as of v0.10.0* Similar to the batch search API in terms of usage and advantages, it enables the batching of recommendation requests. 
```http POST /collections/{collection_name}/points/recommend/batch { "searches": [ { "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "negative": [718], "positive": [100, 231], "limit": 10 }, { "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "negative": [300], "positive": [200, 67], "limit": 10 } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") filter_ = models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue( value="London", ), ) ] ) recommend_queries = [ models.RecommendRequest( positive=[100, 231], negative=[718], filter=filter_, limit=3 ), models.RecommendRequest(positive=[200, 67], negative=[300], filter=filter_, limit=3), ] client.recommend_batch(collection_name="{collection_name}", requests=recommend_queries) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); const filter = { must: [ { key: "city", match: { value: "London", }, }, ], }; const searches = [ { positive: [100, 231], negative: [718], filter, limit: 3, }, { positive: [200, 67], negative: [300], filter, limit: 3, }, ]; client.recommend_batch("{collection_name}", { searches, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, RecommendBatchPoints, RecommendPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; let filter = Filter::must([Condition::matches("city", "London".to_string())]); let recommend_queries = vec![ RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], filter: Some(filter.clone()), limit: 3, ..Default::default() }, RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![200.into(), 67.into()], negative: vec![300.into()], filter: Some(filter), limit: 3, ..Default::default() }, ]; client .recommend_batch(&RecommendBatchPoints { collection_name: "{collection_name}".to_string(), recommend_points: recommend_queries, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.RecommendPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); Filter filter = Filter.newBuilder().addMust(matchKeyword("city", "London")).build(); List<RecommendPoints> recommendQueries = List.of( RecommendPoints.newBuilder() .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .setFilter(filter) .setLimit(3) .build(), RecommendPoints.newBuilder() .addAllPositive(List.of(id(200), id(67))) .addAllNegative(List.of(id(300))) .setFilter(filter) .setLimit(3) .build()); client.recommendBatchAsync("{collection_name}", recommendQueries, null).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); var filter = MatchKeyword("city", "london"); await client.RecommendBatchAsync( collectionName: "{collection_name}", recommendSearches: [ new() { CollectionName = "{collection_name}", Positive = { new PointId[] { 100, 231 } }, Negative = { new PointId[] { 718 } }, Limit = 3, Filter = filter, }, 
new() { CollectionName = "{collection_name}", Positive = { new PointId[] { 200, 67 } }, Negative = { new PointId[] { 300 } }, Limit = 3, Filter = filter, } ] ); ``` The result of this API contains one array per recommendation requests. ```json { "result": [ [ { "id": 10, "score": 0.81 }, { "id": 14, "score": 0.75 }, { "id": 11, "score": 0.73 } ], [ { "id": 1, "score": 0.92 }, { "id": 3, "score": 0.89 }, { "id": 9, "score": 0.75 } ] ], "status": "ok", "time": 0.001 } ``` ## Discovery API *Available as of v1.7* REST API Schema definition available [here](https://api.qdrant.tech/api-reference/search/discover-points) In this API, Qdrant introduces the concept of `context`, which is used for splitting the space. Context is a set of positive-negative pairs, and each pair divides the space into positive and negative zones. In that mode, the search operation prefers points based on how many positive zones they belong to (or how much they avoid negative zones). The interface for providing context is similar to the recommendation API (ids or raw vectors). Still, in this case, they need to be provided in the form of positive-negative pairs. Discovery API lets you do two new types of search: - **Discovery search**: Uses the context (the pairs of positive-negative vectors) and a target to return the points more similar to the target, but constrained by the context. - **Context search**: Using only the context pairs, get the points that live in the best zone, where loss is minimized The way positive and negative examples should be arranged in the context pairs is completely up to you. So you can have the flexibility of trying out different permutation techniques based on your model and data. <aside role="alert">The speed of search is linearly related to the amount of examples you provide in the query.</aside> ### Discovery search This type of search works specially well for combining multimodal, vector-constrained searches. Qdrant already has extensive support for filters, which constrain the search based on its payload, but using discovery search, you can also constrain the vector space in which the search is performed. ![Discovery search](/docs/discovery-search.png) The formula for the discovery score can be expressed as: $$ \text{rank}(v^+, v^-) = \begin{cases} 1, &\quad s(v^+) \geq s(v^-) \\\\ -1, &\quad s(v^+) < s(v^-) \end{cases} $$ where $v^+$ represents a positive example, $v^-$ represents a negative example, and $s(v)$ is the similarity score of a vector $v$ to the target vector. The discovery score is then computed as: $$ \text{discovery score} = \text{sigmoid}(s(v_t))+ \sum \text{rank}(v_i^+, v_i^-), $$ where $s(v)$ is the similarity function, $v_t$ is the target vector, and again $v_i^+$ and $v_i^-$ are the positive and negative examples, respectively. The sigmoid function is used to normalize the score between 0 and 1 and the sum of ranks is used to penalize vectors that are closer to the negative examples than to the positive ones. In other words, the sum of individual ranks determines how many positive zones a point is in, while the closeness hierarchy comes second. 
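To make the formula concrete, here is a small self-contained sketch of the scoring (an illustration only, not the engine's implementation; the similarity values are made up):

```python
import math

def rank(sim_positive: float, sim_negative: float) -> int:
    # +1 if the candidate is closer to the positive example of the pair, -1 otherwise
    return 1 if sim_positive >= sim_negative else -1

def discovery_score(sim_to_target: float, pairs: list[tuple[float, float]]) -> float:
    # The sigmoid of the target similarity only breaks ties between candidates
    # that fall into the same number of positive zones.
    sigmoid = 1.0 / (1.0 + math.exp(-sim_to_target))
    return sigmoid + sum(rank(p, n) for p, n in pairs)

# Winning both pairs beats a higher target similarity that wins only one pair:
print(discovery_score(0.1, [(0.9, 0.2), (0.7, 0.4)]))  # ~2.52
print(discovery_score(0.8, [(0.9, 0.2), (0.3, 0.6)]))  # ~0.69
```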
Example: ```http POST /collections/{collection_name}/points/discover { "target": [0.2, 0.1, 0.9, 0.7], "context": [ { "positive": 100, "negative": 718 }, { "positive": 200, "negative": 300 } ], "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") discover_queries = [ models.DiscoverRequest( target=[0.2, 0.1, 0.9, 0.7], context=[ models.ContextExamplePair( positive=100, negative=718, ), models.ContextExamplePair( positive=200, negative=300, ), ], limit=10, ), ] ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.discover("{collection_name}", { target: [0.2, 0.1, 0.9, 0.7], context: [ { positive: 100, negative: 718, }, { positive: 200, negative: 300, }, ], limit: 10, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ target_vector::Target, vector_example::Example, ContextExamplePair, DiscoverPoints, TargetVector, VectorExample, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .discover(&DiscoverPoints { collection_name: "{collection_name}".to_string(), target: Some(TargetVector { target: Some(Target::Single(VectorExample { example: Some(Example::Vector(vec![0.2, 0.1, 0.9, 0.7].into())), })), }), context: vec![ ContextExamplePair { positive: Some(VectorExample { example: Some(Example::Id(100.into())), }), negative: Some(VectorExample { example: Some(Example::Id(718.into())), }), }, ContextExamplePair { positive: Some(VectorExample { example: Some(Example::Id(200.into())), }), negative: Some(VectorExample { example: Some(Example::Id(300.into())), }), }, ], limit: 10, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.ContextExamplePair; import io.qdrant.client.grpc.Points.DiscoverPoints; import io.qdrant.client.grpc.Points.TargetVector; import io.qdrant.client.grpc.Points.VectorExample; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .discoverAsync( DiscoverPoints.newBuilder() .setCollectionName("{collection_name}") .setTarget( TargetVector.newBuilder() .setSingle( VectorExample.newBuilder() .setVector(vector(0.2f, 0.1f, 0.9f, 0.7f)) .build())) .addAllContext( List.of( ContextExamplePair.newBuilder() .setPositive(VectorExample.newBuilder().setId(id(100))) .setNegative(VectorExample.newBuilder().setId(id(718))) .build(), ContextExamplePair.newBuilder() .setPositive(VectorExample.newBuilder().setId(id(200))) .setNegative(VectorExample.newBuilder().setId(id(300))) .build())) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.DiscoverAsync( collectionName: "{collection_name}", target: new TargetVector { Single = new VectorExample { Vector = new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, } }, context: [ new() { Positive = new VectorExample { Id = 100 }, Negative = new VectorExample { Id = 718 } }, new() { Positive = new VectorExample { Id = 200 }, Negative = new VectorExample { Id = 300 } } ], limit: 10 ); ``` <aside role="status"> Notes about discovery search: * When providing ids as examples, they will be excluded from the results. 
* Score is always in descending order (larger is better), regardless of the metric used. * Since the space is hard-constrained by the context, accuracy is normal to drop when using default settings. To mitigate this, increasing the `ef` search parameter to something above 64 will already be much better than the default 16, e.g: `"params": { "ef": 128 }` </aside> ### Context search Conversely, in the absence of a target, a rigid integer-by-integer function doesn't provide much guidance for the search when utilizing a proximity graph like HNSW. Instead, context search employs a function derived from the [triplet-loss](/articles/triplet-loss/) concept, which is usually applied during model training. For context search, this function is adapted to steer the search towards areas with fewer negative examples. ![Context search](/docs/context-search.png) We can directly associate the score function to a loss function, where 0.0 is the maximum score a point can have, which means it is only in positive areas. As soon as a point exists closer to a negative example, its loss will simply be the difference of the positive and negative similarities. $$ \text{context score} = \sum \min(s(v^+_i) - s(v^-_i), 0.0) $$ Where $v^+_i$ and $v^-_i$ are the positive and negative examples of each pair, and $s(v)$ is the similarity function. Using this kind of search, you can expect the output to not necessarily be around a single point, but rather, to be any point that isn’t closer to a negative example, which creates a constrained diverse result. So, even when the API is not called [`recommend`](#recommendation-api), recommendation systems can also use this approach and adapt it for their specific use-cases. Example: ```http POST /collections/{collection_name}/points/discover { "context": [ { "positive": 100, "negative": 718 }, { "positive": 200, "negative": 300 } ], "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") discover_queries = [ models.DiscoverRequest( context=[ models.ContextExamplePair( positive=100, negative=718, ), models.ContextExamplePair( positive=200, negative=300, ), ], limit=10, ), ] ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.discover("{collection_name}", { context: [ { positive: 100, negative: 718, }, { positive: 200, negative: 300, }, ], limit: 10, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vector_example::Example, ContextExamplePair, DiscoverPoints, VectorExample}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .discover(&DiscoverPoints { collection_name: "{collection_name}".to_string(), context: vec![ ContextExamplePair { positive: Some(VectorExample { example: Some(Example::Id(100.into())), }), negative: Some(VectorExample { example: Some(Example::Id(718.into())), }), }, ContextExamplePair { positive: Some(VectorExample { example: Some(Example::Id(200.into())), }), negative: Some(VectorExample { example: Some(Example::Id(300.into())), }), }, ], limit: 10, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.ContextExamplePair; import io.qdrant.client.grpc.Points.DiscoverPoints; import io.qdrant.client.grpc.Points.VectorExample; QdrantClient client = new 
QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .discoverAsync( DiscoverPoints.newBuilder() .setCollectionName("{collection_name}") .addAllContext( List.of( ContextExamplePair.newBuilder() .setPositive(VectorExample.newBuilder().setId(id(100))) .setNegative(VectorExample.newBuilder().setId(id(718))) .build(), ContextExamplePair.newBuilder() .setPositive(VectorExample.newBuilder().setId(id(200))) .setNegative(VectorExample.newBuilder().setId(id(300))) .build())) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.DiscoverAsync( collectionName: "{collection_name}", context: [ new() { Positive = new VectorExample { Id = 100 }, Negative = new VectorExample { Id = 718 } }, new() { Positive = new VectorExample { Id = 200 }, Negative = new VectorExample { Id = 300 } } ], limit: 10 ); ``` <aside role="status"> Notes about context search: * When providing ids as examples, they will be excluded from the results. * Score is always in descending order (larger is better), regardless of the metric used. * Best possible score is `0.0`, and it is normal that many points get this score. </aside>
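As with the discovery score, the context loss above can be sketched in a few lines (an illustration with made-up similarity values, not Qdrant's internals):

```python
def context_score(pairs: list[tuple[float, float]]) -> float:
    # Each pair contributes min(s(v+) - s(v-), 0.0): zero while the candidate is
    # closer to the positive example, and a penalty once the negative one wins.
    return sum(min(pos - neg, 0.0) for pos, neg in pairs)

print(context_score([(0.9, 0.2), (0.7, 0.4)]))  # 0.0  -> inside every positive zone
print(context_score([(0.9, 0.2), (0.3, 0.6)]))  # -0.3 -> penalized by the second pair
```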
qdrant-landing/content/documentation/concepts/filtering.md
--- title: Filtering weight: 60 aliases: - ../filtering --- # Filtering With Qdrant, you can set conditions when searching or retrieving points. For example, you can impose conditions on both the [payload](../payload/) and the `id` of the point. Setting additional conditions is important when it is impossible to express all the features of the object in the embedding. Examples include a variety of business requirements: stock availability, user location, or desired price range. ## Filtering clauses Qdrant allows you to combine conditions in clauses. Clauses are different logical operations, such as `OR`, `AND`, and `NOT`. Clauses can be recursively nested into each other so that you can reproduce an arbitrary boolean expression. Let's take a look at the clauses implemented in Qdrant. Suppose we have a set of points with the following payload: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 2, "city": "London", "color": "red" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 4, "city": "Berlin", "color": "red" }, { "id": 5, "city": "Moscow", "color": "green" }, { "id": 6, "city": "Moscow", "color": "blue" } ] ``` ### Must Example: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [ { "key": "city", "match": { "value": "London" } }, { "key": "color", "match": { "value": "red" } } ] } ... } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue(value="London"), ), models.FieldCondition( key="color", match=models.MatchValue(value="red"), ), ] ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.scroll("{collection_name}", { filter: { must: [ { key: "city", match: { value: "London" }, }, { key: "color", match: { value: "red" }, }, ], }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, ScrollPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([ Condition::matches("city", "london".to_string()), Condition::matches("color", "red".to_string()), ])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addAllMust( List.of(matchKeyword("city", "London"), matchKeyword("color", "red"))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); // & operator combines two conditions in an AND conjunction(must) await client.ScrollAsync( collectionName: "{collection_name}", filter: MatchKeyword("city", "London") & MatchKeyword("color", "red") ); ``` Filtered points would be: ```json [{ "id": 2, "city": "London", "color": "red" }] ``` When using `must`, the clause becomes `true` only if every condition listed inside 
`must` is satisfied. In this sense, `must` is equivalent to the operator `AND`. ### Should Example: ```http POST /collections/{collection_name}/points/scroll { "filter": { "should": [ { "key": "city", "match": { "value": "London" } }, { "key": "color", "match": { "value": "red" } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( should=[ models.FieldCondition( key="city", match=models.MatchValue(value="London"), ), models.FieldCondition( key="color", match=models.MatchValue(value="red"), ), ] ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { should: [ { key: "city", match: { value: "London" }, }, { key: "color", match: { value: "red" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::should([ Condition::matches("city", "london".to_string()), Condition::matches("color", "red".to_string()), ])), ..Default::default() }) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; import java.util.List; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addAllShould( List.of(matchKeyword("city", "London"), matchKeyword("color", "red"))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); // | operator combines two conditions in an OR disjunction(should) await client.ScrollAsync( collectionName: "{collection_name}", filter: MatchKeyword("city", "London") | MatchKeyword("color", "red") ); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 2, "city": "London", "color": "red" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 4, "city": "Berlin", "color": "red" } ] ``` When using `should`, the clause becomes `true` if at least one condition listed inside `should` is satisfied. In this sense, `should` is equivalent to the operator `OR`. 
### Must Not Example: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must_not": [ { "key": "city", "match": { "value": "London" } }, { "key": "color", "match": { "value": "red" } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must_not=[ models.FieldCondition(key="city", match=models.MatchValue(value="London")), models.FieldCondition(key="color", match=models.MatchValue(value="red")), ] ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must_not: [ { key: "city", match: { value: "London" }, }, { key: "color", match: { value: "red" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must_not([ Condition::matches("city", "London".to_string()), Condition::matches("color", "red".to_string()), ])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addAllMustNot( List.of(matchKeyword("city", "London"), matchKeyword("color", "red"))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); // The ! operator negates the condition (must not) await client.ScrollAsync( collectionName: "{collection_name}", filter: !(MatchKeyword("city", "London") & MatchKeyword("color", "red")) ); ``` Filtered points would be: ```json [ { "id": 5, "city": "Moscow", "color": "green" }, { "id": 6, "city": "Moscow", "color": "blue" } ] ``` When using `must_not`, the clause becomes `true` only if none of the conditions listed inside `must_not` is satisfied. In this sense, `must_not` is equivalent to the expression `(NOT A) AND (NOT B) AND (NOT C)`.
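To summarize the three clauses on the example payload, here is a plain-Python sketch of the boolean semantics (an illustration of the logic only, not how Qdrant evaluates filters internally):

```python
points = [
    {"id": 1, "city": "London", "color": "green"},
    {"id": 2, "city": "London", "color": "red"},
    {"id": 3, "city": "London", "color": "blue"},
    {"id": 4, "city": "Berlin", "color": "red"},
    {"id": 5, "city": "Moscow", "color": "green"},
    {"id": 6, "city": "Moscow", "color": "blue"},
]

conditions = [lambda p: p["city"] == "London", lambda p: p["color"] == "red"]

must = [p["id"] for p in points if all(c(p) for c in conditions)]          # AND
should = [p["id"] for p in points if any(c(p) for c in conditions)]        # OR
must_not = [p["id"] for p in points if not any(c(p) for c in conditions)]  # NOR

print(must)      # [2]
print(should)    # [1, 2, 3, 4]
print(must_not)  # [5, 6]
```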
### Clauses combination It is also possible to use several clauses simultaneously: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ], "must_not": [ { "key": "color", "match": { "value": "red" } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.FieldCondition(key="city", match=models.MatchValue(value="London")), ], must_not=[ models.FieldCondition(key="color", match=models.MatchValue(value="red")), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { key: "city", match: { value: "London" }, }, ], must_not: [ { key: "color", match: { value: "red" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter { must: vec![Condition::matches("city", "London".to_string())], must_not: vec![Condition::matches("color", "red".to_string())], ..Default::default() }), ..Default::default() }) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMust(matchKeyword("city", "London")) .addMustNot(matchKeyword("color", "red")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: MatchKeyword("city", "London") & !MatchKeyword("color", "red") ); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 3, "city": "London", "color": "blue" } ] ``` In this case, the conditions are combined by `AND`. Also, the conditions could be recursively nested. 
Example: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must_not": [ { "must": [ { "key": "city", "match": { "value": "London" } }, { "key": "color", "match": { "value": "red" } } ] } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must_not=[ models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue(value="London") ), models.FieldCondition( key="color", match=models.MatchValue(value="red") ), ], ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must_not: [ { must: [ { key: "city", match: { value: "London" }, }, { key: "color", match: { value: "red" }, }, ], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must_not([Filter::must([ Condition::matches("city", "London".to_string()), Condition::matches("color", "red".to_string()), ]) .into()])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.filter; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMustNot( filter( Filter.newBuilder() .addAllMust( List.of( matchKeyword("city", "London"), matchKeyword("color", "red"))) .build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: new Filter { MustNot = { MatchKeyword("city", "London") & MatchKeyword("color", "red") } } ); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 4, "city": "Berlin", "color": "red" }, { "id": 5, "city": "Moscow", "color": "green" }, { "id": 6, "city": "Moscow", "color": "blue" } ] ``` ## Filtering conditions Different types of values in payload correspond to different kinds of queries that we can apply to them. Let's look at the existing condition variants and what types of data they apply to. ### Match ```json { "key": "color", "match": { "value": "red" } } ``` ```python models.FieldCondition( key="color", match=models.MatchValue(value="red"), ) ``` ```typescript { key: 'color', match: {value: 'red'} } ``` ```rust Condition::matches("color", "red".to_string()) ``` ```java matchKeyword("color", "red"); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchKeyword("color", "red"); ``` For the other types, the match condition will look exactly the same, except for the type used: ```json { "key": "count", "match": { "value": 0 } } ``` ```python models.FieldCondition( key="count", match=models.MatchValue(value=0), ) ``` ```typescript { key: 'count', match: {value: 0} } ``` ```rust Condition::matches("count", 0) ``` ```java import static io.qdrant.client.ConditionFactory.match; match("count", 0); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match("count", 0); ``` The simplest kind of condition is one that checks if the stored value equals the given one. If several values are stored, at least one of them should match the condition. 
You can apply it to [keyword](../payload/#keyword), [integer](../payload/#integer) and [bool](../payload/#bool) payloads. ### Match Any *Available as of v1.1.0* In case you want to check if the stored value is one of multiple values, you can use the Match Any condition. Match Any works as a logical OR for the given values. It can also be described as a `IN` operator. You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads. Example: ```json { "key": "color", "match": { "any": ["black", "yellow"] } } ``` ```python models.FieldCondition( key="color", match=models.MatchAny(any=["black", "yellow"]), ) ``` ```typescript { key: 'color', match: {any: ['black', 'yellow']} } ``` ```rust Condition::matches("color", vec!["black".to_string(), "yellow".to_string()]) ``` ```java import static io.qdrant.client.ConditionFactory.matchKeywords; matchKeywords("color", List.of("black", "yellow")); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match("color", ["black", "yellow"]); ``` In this example, the condition will be satisfied if the stored value is either `black` or `yellow`. If the stored value is an array, it should have at least one value matching any of the given values. E.g. if the stored value is `["black", "green"]`, the condition will be satisfied, because `"black"` is in `["black", "yellow"]`. ### Match Except *Available as of v1.2.0* In case you want to check if the stored value is not one of multiple values, you can use the Match Except condition. Match Except works as a logical NOR for the given values. It can also be described as a `NOT IN` operator. You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads. Example: ```json { "key": "color", "match": { "except": ["black", "yellow"] } } ``` ```python models.FieldCondition( key="color", match=models.MatchExcept(**{"except": ["black", "yellow"]}), ) ``` ```typescript { key: 'color', match: {except: ['black', 'yellow']} } ``` ```rust Condition::matches( "color", !MatchValue::from(vec!["black".to_string(), "yellow".to_string()]), ) ``` ```java import static io.qdrant.client.ConditionFactory.matchExceptKeywords; matchExceptKeywords("color", List.of("black", "yellow")); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match("color", ["black", "yellow"]); ``` In this example, the condition will be satisfied if the stored value is neither `black` nor `yellow`. If the stored value is an array, it should have at least one value not matching any of the given values. E.g. if the stored value is `["black", "green"]`, the condition will be satisfied, because `"green"` does not match `"black"` nor `"yellow"`. ### Nested key *Available as of v1.1.0* Payloads being arbitrary JSON object, it is likely that you will need to filter on a nested field. For convenience, we use a syntax similar to what can be found in the [Jq](https://stedolan.github.io/jq/manual/#Basicfilters) project. 
Suppose we have a set of points with the following payload: ```json [ { "id": 1, "country": { "name": "Germany", "cities": [ { "name": "Berlin", "population": 3.7, "sightseeing": ["Brandenburg Gate", "Reichstag"] }, { "name": "Munich", "population": 1.5, "sightseeing": ["Marienplatz", "Olympiapark"] } ] } }, { "id": 2, "country": { "name": "Japan", "cities": [ { "name": "Tokyo", "population": 9.3, "sightseeing": ["Tokyo Tower", "Tokyo Skytree"] }, { "name": "Osaka", "population": 2.7, "sightseeing": ["Osaka Castle", "Universal Studios Japan"] } ] } } ] ``` You can search on a nested field using a dot notation. ```http POST /collections/{collection_name}/points/scroll { "filter": { "should": [ { "key": "country.name", "match": { "value": "Germany" } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( should=[ models.FieldCondition( key="country.name", match=models.MatchValue(value="Germany") ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { should: [ { key: "country.name", match: { value: "Germany" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::should([Condition::matches( "country.name", "Germany".to_string(), )])), ..Default::default() }) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addShould(matchKeyword("country.name", "Germany")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync(collectionName: "{collection_name}", filter: MatchKeyword("country.name", "Germany")); ``` You can also search through arrays by projecting inner values using the `[]` syntax. 
```http POST /collections/{collection_name}/points/scroll { "filter": { "should": [ { "key": "country.cities[].population", "range": { "gte": 9.0 } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( should=[ models.FieldCondition( key="country.cities[].population", range=models.Range( gt=None, gte=9.0, lt=None, lte=None, ), ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { should: [ { key: "country.cities[].population", range: { gt: null, gte: 9.0, lt: null, lte: null, }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, Range, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::should([Condition::range( "country.cities[].population", Range { gte: Some(9.0), ..Default::default() }, )])), ..Default::default() }) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.range; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.Range; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addShould( range( "country.cities[].population", Range.newBuilder().setGte(9.0).build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: Range("country.cities[].population", new Qdrant.Client.Grpc.Range { Gte = 9.0 }) ); ``` This query would only output the point with id 2, as only Japan has a city with a population of 9.0 or more. And the leaf nested field can also be an array. ```http POST /collections/{collection_name}/points/scroll { "filter": { "should": [ { "key": "country.cities[].sightseeing", "match": { "value": "Osaka Castle" } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( should=[ models.FieldCondition( key="country.cities[].sightseeing", match=models.MatchValue(value="Osaka Castle"), ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { should: [ { key: "country.cities[].sightseeing", match: { value: "Osaka Castle" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::should([Condition::matches( "country.cities[].sightseeing", "Osaka Castle".to_string(), )])), ..Default::default() }) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addShould(matchKeyword("country.cities[].sightseeing", "Osaka Castle")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: MatchKeyword("country.cities[].sightseeing", "Osaka Castle") ); ``` This query would only output the point with id 2, as only Japan has a city with "Osaka Castle" in its sightseeing list. ### Nested object filter *Available as of v1.2.0* By default, the conditions take into account the entire payload of a point.
For instance, given two points with the following payload: ```json [ { "id": 1, "dinosaur": "t-rex", "diet": [ { "food": "leaves", "likes": false}, { "food": "meat", "likes": true} ] }, { "id": 2, "dinosaur": "diplodocus", "diet": [ { "food": "leaves", "likes": true}, { "food": "meat", "likes": false} ] } ] ``` The following query would match both points: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [ { "key": "diet[].food", "match": { "value": "meat" } }, { "key": "diet[].likes", "match": { "value": true } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.FieldCondition( key="diet[].food", match=models.MatchValue(value="meat") ), models.FieldCondition( key="diet[].likes", match=models.MatchValue(value=True) ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { key: "diet[].food", match: { value: "meat" }, }, { key: "diet[].likes", match: { value: true }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([ Condition::matches("diet[].food", "meat".to_string()), Condition::matches("diet[].likes", true), ])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addAllMust( List.of(matchKeyword("diet[].food", "meat"), match("diet[].likes", true))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: MatchKeyword("diet[].food", "meat") & Match("diet[].likes", true) ); ``` This happens because both points are matching the two conditions: - the "t-rex" matches food=meat on `diet[1].food` and likes=true on `diet[1].likes` - the "diplodocus" matches food=meat on `diet[1].food` and likes=true on `diet[0].likes` To retrieve only the points which are matching the conditions on an array element basis, that is the point with id 1 in this example, you would need to use a nested object filter. Nested object filters allow arrays of objects to be queried independently of each other. It is achieved by using the `nested` condition type formed by a payload key to focus on and a filter to apply. The key should point to an array of objects and can be used with or without the bracket notation ("data" or "data[]"). 
```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [{ "nested": { "key": "diet", "filter":{ "must": [ { "key": "food", "match": { "value": "meat" } }, { "key": "likes", "match": { "value": true } } ] } } }] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.NestedCondition( nested=models.Nested( key="diet", filter=models.Filter( must=[ models.FieldCondition( key="food", match=models.MatchValue(value="meat") ), models.FieldCondition( key="likes", match=models.MatchValue(value=True) ), ] ), ) ) ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { nested: { key: "diet", filter: { must: [ { key: "food", match: { value: "meat" }, }, { key: "likes", match: { value: true }, }, ], }, }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([NestedCondition { key: "diet".to_string(), filter: Some(Filter::must([ Condition::matches("food", "meat".to_string()), Condition::matches("likes", true), ])), } .into()])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ConditionFactory.nested; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMust( nested( "diet", Filter.newBuilder() .addAllMust( List.of( matchKeyword("food", "meat"), match("likes", true))) .build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: Nested("diet", MatchKeyword("food", "meat") & Match("likes", true)) ); ``` The matching logic is modified to be applied at the level of an array element within the payload. Nested filters work in the same way as if the nested filter was applied to a single element of the array at a time. Parent document is considered to match the condition if at least one element of the array matches the nested filter. **Limitations** The `has_id` condition is not supported within the nested object filter. If you need it, place it in an adjacent `must` clause. 
```http POST /collections/{collection_name}/points/scroll { "filter":{ "must":[ { "nested":{ "key":"diet", "filter":{ "must":[ { "key":"food", "match":{ "value":"meat" } }, { "key":"likes", "match":{ "value":true } } ] } } }, { "has_id":[ 1 ] } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.NestedCondition( nested=models.Nested( key="diet", filter=models.Filter( must=[ models.FieldCondition( key="food", match=models.MatchValue(value="meat") ), models.FieldCondition( key="likes", match=models.MatchValue(value=True) ), ] ), ) ), models.HasIdCondition(has_id=[1]), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { nested: { key: "diet", filter: { must: [ { key: "food", match: { value: "meat" }, }, { key: "likes", match: { value: true }, }, ], }, }, }, { has_id: [1], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([ NestedCondition { key: "diet".to_string(), filter: Some(Filter::must([ Condition::matches("food", "meat".to_string()), Condition::matches("likes", true), ])), } .into(), Condition::has_id([1]), ])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.hasId; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ConditionFactory.nested; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMust( nested( "diet", Filter.newBuilder() .addAllMust( List.of( matchKeyword("food", "meat"), match("likes", true))) .build())) .addMust(hasId(id(1))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: Nested("diet", MatchKeyword("food", "meat") & Match("likes", true)) & HasId(1) ); ``` ### Full Text Match *Available as of v0.10.0* A special case of the `match` condition is the `text` match condition. It allows you to search for a specific substring, token or phrase within the text field. Exact texts that will match the condition depend on full-text index configuration. Configuration is defined during the index creation and describe at [full-text index](../indexing/#full-text-index). If there is no full-text index for the field, the condition will work as exact substring match. ```json { "key": "description", "match": { "text": "good cheap" } } ``` ```python models.FieldCondition( key="description", match=models.MatchText(text="good cheap"), ) ``` ```typescript { key: 'description', match: {text: 'good cheap'} } ``` ```rust // If the match string contains a white-space, full text match is performed. // Otherwise a keyword match is performed. 
Condition::matches("description", "good cheap".to_string()) ``` ```java import static io.qdrant.client.ConditionFactory.matchText; matchText("description", "good cheap"); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchText("description", "good cheap"); ``` If the query has several words, then the condition will be satisfied only if all of them are present in the text. ### Range ```json { "key": "price", "range": { "gt": null, "gte": 100.0, "lt": null, "lte": 450.0 } } ``` ```python models.FieldCondition( key="price", range=models.Range( gt=None, gte=100.0, lt=None, lte=450.0, ), ) ``` ```typescript { key: 'price', range: { gt: null, gte: 100.0, lt: null, lte: 450.0 } } ``` ```rust Condition::range( "price", Range { gt: None, gte: Some(100.0), lt: None, lte: Some(450.0), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.range; import io.qdrant.client.grpc.Points.Range; range("price", Range.newBuilder().setGte(100.0).setLte(450).build()); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Range("price", new Qdrant.Client.Grpc.Range { Gte = 100.0, Lte = 450 }); ``` The `range` condition sets the range of possible values for stored payload values. If several values are stored, at least one of them should match the condition. Comparisons that can be used: - `gt` - greater than - `gte` - greater than or equal - `lt` - less than - `lte` - less than or equal Can be applied to [float](../payload/#float) and [integer](../payload/#integer) payloads. ### Datetime Range The datetime range is a unique range condition, used for [datetime](../payload/#datetime) payloads, which supports RFC 3339 formats. You do not need to convert dates to UNIX timestaps. During comparison, timestamps are parsed and converted to UTC. _Available as of v1.8.0_ ```json { "key": "date", "range": { "gt": "2023-02-08T10:49:00Z", "gte": null, "lt": null, "lte": "2024-01-31 10:14:31Z" } } ``` ```python models.FieldCondition( key="date", range=models.DatetimeRange( gt="2023-02-08T10:49:00Z", gte=None, lt=None, lte="2024-01-31T10:14:31Z", ), ) ``` ```typescript { key: 'date', range: { gt: '2023-02-08T10:49:00Z', gte: null, lt: null, lte: '2024-01-31T10:14:31Z' } } ``` ```rust Condition::datetime_range( "date", DatetimeRange { gt: Some(Timestamp::date_time(2023, 2, 8, 10, 49, 0).unwrap()), gte: None, lt: None, lte: Some(Timestamp::date_time(2024, 1, 31, 10, 14, 31).unwrap()), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.datetimeRange; import com.google.protobuf.Timestamp; import io.qdrant.client.grpc.Points.DatetimeRange; import java.time.Instant; long gt = Instant.parse("2023-02-08T10:49:00Z").getEpochSecond(); long lte = Instant.parse("2024-01-31T10:14:31Z").getEpochSecond(); datetimeRange("date", DatetimeRange.newBuilder() .setGt(Timestamp.newBuilder().setSeconds(gt)) .setLte(Timestamp.newBuilder().setSeconds(lte)) .build()); ``` ```csharp using Qdrant.Client.Grpc; Conditions.DatetimeRange( field: "date", gt: new DateTime(2023, 2, 8, 10, 49, 0, DateTimeKind.Utc), lte: new DateTime(2024, 1, 31, 10, 14, 31, DateTimeKind.Utc) ); ``` ### Geo #### Geo Bounding Box ```json { "key": "location", "geo_bounding_box": { "bottom_right": { "lon": 13.455868, "lat": 52.495862 }, "top_left": { "lon": 13.403683, "lat": 52.520711 } } } ``` ```python models.FieldCondition( key="location", geo_bounding_box=models.GeoBoundingBox( bottom_right=models.GeoPoint( lon=13.455868, lat=52.495862, ), top_left=models.GeoPoint( lon=13.403683, lat=52.520711, ), ), ) ``` ```typescript { key: 'location', 
geo_bounding_box: { bottom_right: { lon: 13.455868, lat: 52.495862 }, top_left: { lon: 13.403683, lat: 52.520711 } } } ``` ```rust Condition::geo_bounding_box( "location", GeoBoundingBox { bottom_right: Some(GeoPoint { lon: 13.455868, lat: 52.495862, }), top_left: Some(GeoPoint { lon: 13.403683, lat: 52.520711, }), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoBoundingBox; geoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; GeoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868); ``` It matches with `location`s inside a rectangle with the coordinates of the upper left corner in `bottom_right` and the coordinates of the lower right corner in `top_left`. #### Geo Radius ```json { "key": "location", "geo_radius": { "center": { "lon": 13.403683, "lat": 52.520711 }, "radius": 1000.0 } } ``` ```python models.FieldCondition( key="location", geo_radius=models.GeoRadius( center=models.GeoPoint( lon=13.403683, lat=52.520711, ), radius=1000.0, ), ) ``` ```typescript { key: 'location', geo_radius: { center: { lon: 13.403683, lat: 52.520711 }, radius: 1000.0 } } ``` ```rust Condition::geo_radius( "location", GeoRadius { center: Some(GeoPoint { lon: 13.403683, lat: 52.520711, }), radius: 1000.0, }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoRadius; geoRadius("location", 52.520711, 13.403683, 1000.0f); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; GeoRadius("location", 52.520711, 13.403683, 1000.0f); ``` It matches with `location`s inside a circle with the `center` at the center and a radius of `radius` meters. If several values are stored, at least one of them should match the condition. These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo). #### Geo Polygon Geo Polygons search is useful for when you want to find points inside an irregularly shaped area, for example a country boundary or a forest boundary. A polygon always has an exterior ring and may optionally include interior rings. A lake with an island would be an example of an interior ring. If you wanted to find points in the water but not on the island, you would make an interior ring for the island. When defining a ring, you must pick either a clockwise or counterclockwise ordering for your points. The first and last point of the polygon must be the same. Currently, we only support unprojected global coordinates (decimal degrees longitude and latitude) and we are datum agnostic. 
```json { "key": "location", "geo_polygon": { "exterior": { "points": [ { "lon": -70.0, "lat": -70.0 }, { "lon": 60.0, "lat": -70.0 }, { "lon": 60.0, "lat": 60.0 }, { "lon": -70.0, "lat": 60.0 }, { "lon": -70.0, "lat": -70.0 } ] }, "interiors": [ { "points": [ { "lon": -65.0, "lat": -65.0 }, { "lon": 0.0, "lat": -65.0 }, { "lon": 0.0, "lat": 0.0 }, { "lon": -65.0, "lat": 0.0 }, { "lon": -65.0, "lat": -65.0 } ] } ] } } ``` ```python models.FieldCondition( key="location", geo_polygon=models.GeoPolygon( exterior=models.GeoLineString( points=[ models.GeoPoint( lon=-70.0, lat=-70.0, ), models.GeoPoint( lon=60.0, lat=-70.0, ), models.GeoPoint( lon=60.0, lat=60.0, ), models.GeoPoint( lon=-70.0, lat=60.0, ), models.GeoPoint( lon=-70.0, lat=-70.0, ), ] ), interiors=[ models.GeoLineString( points=[ models.GeoPoint( lon=-65.0, lat=-65.0, ), models.GeoPoint( lon=0.0, lat=-65.0, ), models.GeoPoint( lon=0.0, lat=0.0, ), models.GeoPoint( lon=-65.0, lat=0.0, ), models.GeoPoint( lon=-65.0, lat=-65.0, ), ] ) ], ), ) ``` ```typescript { key: 'location', geo_polygon: { exterior: { points: [ { lon: -70.0, lat: -70.0 }, { lon: 60.0, lat: -70.0 }, { lon: 60.0, lat: 60.0 }, { lon: -70.0, lat: 60.0 }, { lon: -70.0, lat: -70.0 } ] }, interiors: { points: [ { lon: -65.0, lat: -65.0 }, { lon: 0.0, lat: -65.0 }, { lon: 0.0, lat: 0.0 }, { lon: -65.0, lat: 0.0 }, { lon: -65.0, lat: -65.0 } ] } } } ``` ```rust Condition::geo_polygon( "location", GeoPolygon { exterior: Some(GeoLineString { points: vec![ GeoPoint { lon: -70.0, lat: -70.0, }, GeoPoint { lon: 60.0, lat: -70.0, }, GeoPoint { lon: 60.0, lat: 60.0, }, GeoPoint { lon: -70.0, lat: 60.0, }, GeoPoint { lon: -70.0, lat: -70.0, }, ], }), interiors: vec![GeoLineString { points: vec![ GeoPoint { lon: -65.0, lat: -65.0, }, GeoPoint { lon: 0.0, lat: -65.0, }, GeoPoint { lon: 0.0, lat: 0.0 }, GeoPoint { lon: -65.0, lat: 0.0, }, GeoPoint { lon: -65.0, lat: -65.0, }, ], }], }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoPolygon; import io.qdrant.client.grpc.Points.GeoLineString; import io.qdrant.client.grpc.Points.GeoPoint; geoPolygon( "location", GeoLineString.newBuilder() .addAllPoints( List.of( GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build(), GeoPoint.newBuilder().setLon(60.0).setLat(-70.0).build(), GeoPoint.newBuilder().setLon(60.0).setLat(60.0).build(), GeoPoint.newBuilder().setLon(-70.0).setLat(60.0).build(), GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build())) .build(), List.of( GeoLineString.newBuilder() .addAllPoints( List.of( GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build(), GeoPoint.newBuilder().setLon(0.0).setLat(-65.0).build(), GeoPoint.newBuilder().setLon(0.0).setLat(0.0).build(), GeoPoint.newBuilder().setLon(-65.0).setLat(0.0).build(), GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build())) .build())); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; GeoPolygon( field: "location", exterior: new GeoLineString { Points = { new GeoPoint { Lat = -70.0, Lon = -70.0 }, new GeoPoint { Lat = 60.0, Lon = -70.0 }, new GeoPoint { Lat = 60.0, Lon = 60.0 }, new GeoPoint { Lat = -70.0, Lon = 60.0 }, new GeoPoint { Lat = -70.0, Lon = -70.0 } } }, interiors: [ new() { Points = { new GeoPoint { Lat = -65.0, Lon = -65.0 }, new GeoPoint { Lat = 0.0, Lon = -65.0 }, new GeoPoint { Lat = 0.0, Lon = 0.0 }, new GeoPoint { Lat = -65.0, Lon = 0.0 }, new GeoPoint { Lat = -65.0, Lon = -65.0 } } } ] ); ``` A match is considered any point location inside or on the boundaries of the given polygon's 
exterior but not inside any interiors. If several location values are stored for a point, then any of them matching will include that point as a candidate in the resultset. These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo). ### Values count In addition to the direct value comparison, it is also possible to filter by the amount of values. For example, given the data: ```json [ { "id": 1, "name": "product A", "comments": ["Very good!", "Excellent"] }, { "id": 2, "name": "product B", "comments": ["meh", "expected more", "ok"] } ] ``` We can perform the search only among the items with more than two comments: ```json { "key": "comments", "values_count": { "gt": 2 } } ``` ```python models.FieldCondition( key="comments", values_count=models.ValuesCount(gt=2), ) ``` ```typescript { key: 'comments', values_count: {gt: 2} } ``` ```rust Condition::values_count( "comments", ValuesCount { gt: Some(2), ..Default::default() }, ) ``` ```java import static io.qdrant.client.ConditionFactory.valuesCount; import io.qdrant.client.grpc.Points.ValuesCount; valuesCount("comments", ValuesCount.newBuilder().setGt(2).build()); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; ValuesCount("comments", new ValuesCount { Gt = 2 }); ``` The result would be: ```json [{ "id": 2, "name": "product B", "comments": ["meh", "expected more", "ok"] }] ``` If stored value is not an array - it is assumed that the amount of values is equals to 1. ### Is Empty Sometimes it is also useful to filter out records that are missing some value. The `IsEmpty` condition may help you with that: ```json { "is_empty": { "key": "reports" } } ``` ```python models.IsEmptyCondition( is_empty=models.PayloadField(key="reports"), ) ``` ```typescript { is_empty: { key: "reports"; } } ``` ```rust Condition::is_empty("reports") ``` ```java import static io.qdrant.client.ConditionFactory.isEmpty; isEmpty("reports"); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; IsEmpty("reports"); ``` This condition will match all records where the field `reports` either does not exist, or has `null` or `[]` value. <aside role="status">The <b>IsEmpty</b> is often useful together with the logical negation <b>must_not</b>. In this case all non-empty values will be selected.</aside> ### Is Null It is not possible to test for `NULL` values with the <b>match</b> condition. We have to use `IsNull` condition instead: ```json { "is_null": { "key": "reports" } } ``` ```python models.IsNullCondition( is_null=models.PayloadField(key="reports"), ) ``` ```typescript { is_null: { key: "reports"; } } ``` ```rust Condition::is_null("reports") ``` ```java import static io.qdrant.client.ConditionFactory.isNull; isNull("reports"); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; IsNull("reports"); ``` This condition will match all records where the field `reports` exists and has `NULL` value. ### Has id This type of query is not related to payload, but can be very useful in some situations. For example, the user could mark some specific search results as irrelevant, or we want to search only among the specified points. ```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [ { "has_id": [1,3,5,7,9,11] } ] } ... 
} ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.HasIdCondition(has_id=[1, 3, 5, 7, 9, 11]), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { has_id: [1, 3, 5, 7, 9, 11], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([Condition::has_id([1, 3, 5, 7, 9, 11])])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.hasId; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMust(hasId(List.of(id(1), id(3), id(5), id(7), id(9), id(11)))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync(collectionName: "{collection_name}", filter: HasId([1, 3, 5, 7, 9, 11])); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 5, "city": "Moscow", "color": "green" } ] ```
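A common variation is to exclude specific points instead, for example results a user has already marked as irrelevant. The `has_id` condition can be combined with `must_not` for that. Below is a minimal sketch using the Python client; the IDs are placeholders:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Return everything except the points with the listed IDs
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must_not=[
            models.HasIdCondition(has_id=[2, 4]),
        ],
    ),
)
```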
qdrant-landing/content/documentation/concepts/indexing.md
--- title: Indexing weight: 90 aliases: - ../indexing --- # Indexing A key feature of Qdrant is the effective combination of vector and traditional indexes. It is essential to have this because for vector search to work effectively with filters, having vector index only is not enough. In simpler terms, a vector index speeds up vector search, and payload indexes speed up filtering. The indexes in the segments exist independently, but the parameters of the indexes themselves are configured for the whole collection. Not all segments automatically have indexes. Their necessity is determined by the [optimizer](../optimizer/) settings and depends, as a rule, on the number of stored points. ## Payload Index Payload index in Qdrant is similar to the index in conventional document-oriented databases. This index is built for a specific field and type, and is used for quick point requests by the corresponding filtering condition. The index is also used to accurately estimate the filter cardinality, which helps the [query planning](../search/#query-planning) choose a search strategy. Creating an index requires additional computational resources and memory, so choosing fields to be indexed is essential. Qdrant does not make this choice but grants it to the user. To mark a field as indexable, you can use the following: ```http PUT /collections/{collection_name}/index { "field_name": "name_of_the_field_to_index", "field_schema": "keyword" } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") client.create_payload_index( collection_name="{collection_name}", field_name="name_of_the_field_to_index", field_schema="keyword", ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createPayloadIndex("{collection_name}", { field_name: "name_of_the_field_to_index", field_schema: "keyword", }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::FieldType}; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_field_index( "{collection_name}", "name_of_the_field_to_index", FieldType::Keyword, None, None, ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadSchemaType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "name_of_the_field_to_index", PayloadSchemaType.Keyword, null, null, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync(collectionName: "{collection_name}", fieldName: "name_of_the_field_to_index"); ``` You can use dot notation to specify a nested field for indexing. Similar to specifying [nested filters](../filtering/#nested-key). Available field types are: * `keyword` - for [keyword](../payload/#keyword) payload, affects [Match](../filtering/#match) filtering conditions. * `integer` - for [integer](../payload/#integer) payload, affects [Match](../filtering/#match) and [Range](../filtering/#range) filtering conditions. * `float` - for [float](../payload/#float) payload, affects [Range](../filtering/#range) filtering conditions. * `bool` - for [bool](../payload/#bool) payload, affects [Match](../filtering/#match) filtering conditions (available as of v1.4.0). 
* `geo` - for [geo](../payload/#geo) payload, affects [Geo Bounding Box](../filtering/#geo-bounding-box) and [Geo Radius](../filtering/#geo-radius) filtering conditions. * `datetime` - for [datetime](../payload/#datetime) payload, affects [Range](../filtering/#range) filtering conditions (available as of v1.8.0). * `text` - a special kind of index, available for [keyword](../payload/#keyword) / string payloads, affects [Full Text search](../filtering/#full-text-match) filtering conditions. Payload index may occupy some additional memory, so it is recommended to only use index for those fields that are used in filtering conditions. If you need to filter by many fields and the memory limits does not allow to index all of them, it is recommended to choose the field that limits the search result the most. As a rule, the more different values a payload value has, the more efficiently the index will be used. ### Full-text index *Available as of v0.10.0* Qdrant supports full-text search for string payload. Full-text index allows you to filter points by the presence of a word or a phrase in the payload field. Full-text index configuration is a bit more complex than other indexes, as you can specify the tokenization parameters. Tokenization is the process of splitting a string into tokens, which are then indexed in the inverted index. To create a full-text index, you can use the following: ```http PUT /collections/{collection_name}/index { "field_name": "name_of_the_field_to_index", "field_schema": { "type": "text", "tokenizer": "word", "min_token_len": 2, "max_token_len": 20, "lowercase": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_payload_index( collection_name="{collection_name}", field_name="name_of_the_field_to_index", field_schema=models.TextIndexParams( type="text", tokenizer=models.TokenizerType.WORD, min_token_len=2, max_token_len=15, lowercase=True, ), ) ``` ```typescript import { QdrantClient, Schemas } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createPayloadIndex("{collection_name}", { field_name: "name_of_the_field_to_index", field_schema: { type: "text", tokenizer: "word", min_token_len: 2, max_token_len: 15, lowercase: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ payload_index_params::IndexParams, FieldType, PayloadIndexParams, TextIndexParams, TokenizerType, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_field_index( "{collection_name}", "name_of_the_field_to_index", FieldType::Text, Some(&PayloadIndexParams { index_params: Some(IndexParams::TextIndexParams(TextIndexParams { tokenizer: TokenizerType::Word as i32, min_token_len: Some(2), max_token_len: Some(10), lowercase: Some(true), })), }), None, ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.TextIndexParams; import io.qdrant.client.grpc.Collections.TokenizerType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "name_of_the_field_to_index", PayloadSchemaType.Text, PayloadIndexParams.newBuilder() .setTextIndexParams( TextIndexParams.newBuilder() .setTokenizer(TokenizerType.Word) 
                        .setMinTokenLen(2)
                        .setMaxTokenLen(10)
                        .setLowercase(true)
                        .build())
                .build(),
            null,
            null,
            null)
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreatePayloadIndexAsync(
	collectionName: "{collection_name}",
	fieldName: "name_of_the_field_to_index",
	schemaType: PayloadSchemaType.Text,
	indexParams: new PayloadIndexParams
	{
		TextIndexParams = new TextIndexParams
		{
			Tokenizer = TokenizerType.Word,
			MinTokenLen = 2,
			MaxTokenLen = 10,
			Lowercase = true
		}
	}
);
```

Available tokenizers are:

* `word` - splits the string into words, separated by spaces, punctuation marks, and special characters.
* `whitespace` - splits the string into words, separated by spaces.
* `prefix` - splits the string into words, separated by spaces, punctuation marks, and special characters, and then creates a prefix index for each word. For example: `hello` will be indexed as `h`, `he`, `hel`, `hell`, `hello`.
* `multilingual` - a special type of tokenizer based on the [charabia](https://github.com/meilisearch/charabia) package. It allows proper tokenization and lemmatization for multiple languages, including those with non-Latin alphabets and non-space delimiters. See the [charabia documentation](https://github.com/meilisearch/charabia) for the full list of supported languages and normalization options. In the default build configuration, Qdrant does not include support for all languages, as that would increase the size of the resulting binary. Chinese, Japanese, and Korean are not enabled by default, but can be enabled by building Qdrant from source with the `--features multiling-chinese,multiling-japanese,multiling-korean` flags.

See [Full Text match](../filtering/#full-text-match) for examples of querying with a full-text index.

### Parameterized index

*Available as of v1.8.0*

We've added a parameterized variant to the `integer` index, which allows you to fine-tune indexing and search performance.

Both the regular and parameterized `integer` indexes use the following flags:

- `lookup`: enables support for direct lookup using [Match](/documentation/concepts/filtering/#match) filters.
- `range`: enables support for [Range](/documentation/concepts/filtering/#range) filters.

The regular `integer` index assumes both `lookup` and `range` are `true`. In contrast, to configure a parameterized index, you would set only one of these flags to `true`:

| `lookup` | `range` | Result                      |
|----------|---------|-----------------------------|
| `true`   | `true`  | Regular integer index       |
| `true`   | `false` | Parameterized integer index |
| `false`  | `true`  | Parameterized integer index |
| `false`  | `false` | No integer index            |

The parameterized index can enhance performance in collections with millions of points. We encourage you to try it out. If it does not enhance performance in your use case, you can always restore the regular `integer` index.

Note: If you set `"lookup": true` with a range filter, that may lead to significant performance issues.
For example, the following code sets up a parameterized integer index which supports only range filters: ```http PUT /collections/{collection_name}/index { "field_name": "name_of_the_field_to_index", "field_schema": { "type": "integer", "lookup": false, "range": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_payload_index( collection_name="{collection_name}", field_name="name_of_the_field_to_index", field_schema=models.IntegerIndexParams( type=models.IntegerIndexType.INTEGER, lookup=False, range=True, ), ) ``` ```typescript import { QdrantClient, Schemas } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createPayloadIndex("{collection_name}", { field_name: "name_of_the_field_to_index", field_schema: { type: "integer", lookup: false, range: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ payload_index_params::IndexParams, FieldType, PayloadIndexParams, IntegerIndexParams, TokenizerType, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_field_index( "{collection_name}", "name_of_the_field_to_index", FieldType::Integer, Some(&PayloadIndexParams { index_params: Some(IndexParams::IntegerIndexParams(IntegerIndexParams { lookup: false, range: true, })), }), None, ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.IntegerIndexParams; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "name_of_the_field_to_index", PayloadSchemaType.Integer, PayloadIndexParams.newBuilder() .setIntegerIndexParams( IntegerIndexParams.newBuilder().setLookup(false).setRange(true).build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync( collectionName: "{collection_name}", fieldName: "name_of_the_field_to_index", schemaType: PayloadSchemaType.Integer, indexParams: new PayloadIndexParams { IntegerIndexParams = new() { Lookup = false, Range = true } } ); ``` ## Vector Index A vector index is a data structure built on vectors through a specific mathematical model. Through the vector index, we can efficiently query several vectors similar to the target vector. Qdrant currently only uses HNSW as a dense vector index. [HNSW](https://arxiv.org/abs/1603.09320) (Hierarchical Navigable Small World Graph) is a graph-based indexing algorithm. It builds a multi-layer navigation structure for an image according to certain rules. In this structure, the upper layers are more sparse and the distances between nodes are farther. The lower layers are denser and the distances between nodes are closer. The search starts from the uppermost layer, finds the node closest to the target in this layer, and then enters the next layer to begin another search. After multiple iterations, it can quickly approach the target position. In order to improve performance, HNSW limits the maximum degree of nodes on each layer of the graph to `m`. In addition, you can use `ef_construct` (when building index) or `ef` (when searching targets) to specify a search range. 
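For illustration, the search-time `ef` value can be overridden per request via search parameters. Below is a minimal sketch using the Python client; the `hnsw_ef` value, query vector, and limit are placeholders to adapt to your setup:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.search(
    collection_name="{collection_name}",
    query_vector=[0.2, 0.1, 0.9, 0.7],  # placeholder query vector
    search_params=models.SearchParams(
        hnsw_ef=128,  # larger value - more accurate, but slower search
    ),
    limit=3,
)
```

The build-time parameters `m` and `ef_construct`, in contrast, are set in the service configuration or per collection, as shown next.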
The corresponding parameters could be configured in the configuration file:

```yaml
storage:
  # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
  hnsw_index:
    # Number of edges per node in the index graph.
    # The larger the value, the more accurate the search and the more space required.
    m: 16
    # Number of neighbours to consider during the index building.
    # The larger the value, the more accurate the search and the more time required to build the index.
    ef_construct: 100
    # Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
    # If the payload chunk is smaller than `full_scan_threshold_kb`, additional indexing won't be used -
    # in this case full-scan search should be preferred by the query planner and additional indexing is not required.
    # Note: 1Kb = 1 vector of size 256
    full_scan_threshold: 10000
```

The same parameters can also be set during the creation of a [collection](../collections/). The `ef` parameter is configured during [the search](../search/) and by default is equal to `ef_construct`.

HNSW was chosen for several reasons. First, HNSW is well-compatible with the modification that allows Qdrant to use filters during a search. Second, it is one of the most accurate and fastest algorithms, according to [public benchmarks](https://github.com/erikbern/ann-benchmarks).

*Available as of v1.1.1*

The HNSW parameters can also be configured on a collection and named vector level by setting [`hnsw_config`](../indexing/#vector-index) to fine-tune search performance.

## Sparse Vector Index

*Available as of v1.7.0*

Sparse vectors in Qdrant are indexed with a special data structure, which is optimized for vectors that have a high proportion of zeroes. In some ways, this indexing method is similar to the inverted index used in text search engines.

- A sparse vector index in Qdrant is exact, meaning it does not use any approximation algorithms.
- All sparse vectors added to the collection are immediately indexed in the mutable version of a sparse index.

With Qdrant, you can benefit from a more compact and efficient immutable sparse index, which is constructed during the same optimization process as the dense vector index.

This approach is particularly useful for collections storing both dense and sparse vectors.
To configure a sparse vector index, create a collection with the following parameters:

```http
PUT /collections/{collection_name}
{
    "sparse_vectors": {
        "text": {
            "index": {
                "on_disk": false
            }
        }
    }
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    sparse_vectors_config={
        "text": models.SparseVectorParams(
            index=models.SparseIndexParams(
                on_disk=False,
            ),
        ),
    },
)
```

```typescript
import { QdrantClient, Schemas } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  sparse_vectors: {
    "splade-model-name": {
      index: {
        on_disk: false,
      },
    },
  },
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{CreateCollection, SparseIndexConfig, SparseVectorConfig, SparseVectorParams},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name: "{collection_name}".to_string(),
        sparse_vectors_config: Some(SparseVectorConfig {
            map: [(
                "splade-model-name".to_string(),
                SparseVectorParams {
                    index: Some(SparseIndexConfig {
                        on_disk: Some(false),
                        ..Default::default()
                    }),
                    ..Default::default()
                },
            )]
            .into_iter()
            .collect(),
        }),
        ..Default::default()
    })
    .await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createCollectionAsync(
        Collections.CreateCollection.newBuilder()
            .setCollectionName("{collection_name}")
            .setSparseVectorsConfig(
                Collections.SparseVectorConfig.newBuilder()
                    .putMap(
                        "splade-model-name",
                        Collections.SparseVectorParams.newBuilder()
                            .setIndex(
                                Collections.SparseIndexConfig.newBuilder()
                                    .setOnDisk(false)
                                    .build())
                            .build())
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
    collectionName: "{collection_name}",
    sparseVectorsConfig: ("splade-model-name", new SparseVectorParams
    {
        Index = new SparseIndexConfig
        {
            OnDisk = false,
        }
    })
);
```

The following parameters may affect performance:

- `on_disk: true` - The index is stored on disk, which lets you save memory. This may slow down search performance.
- `on_disk: false` - The index is still persisted on disk, but it is also loaded into memory for faster search.

Unlike a dense vector index, a sparse vector index does not require a pre-defined vector size. It automatically adjusts to the size of the vectors added to the collection.

**Note:** A sparse vector index only supports dot-product similarity searches. It does not support other distance metrics.

## Filtrable Index

Separately, a payload index and a vector index cannot solve the problem of filtered search completely. In the case of weak filters, you can use the HNSW index as it is. In the case of stringent filters, you can use the payload index and a complete rescore. However, for cases in the middle, this approach does not work well.

On the one hand, we cannot apply a full scan on too many vectors. On the other hand, the HNSW graph starts to fall apart when using too strict filters.

![HNSW fail](/docs/precision_by_m.png)

![hnsw graph](/docs/graph.gif)

You can find more information on why this happens in our [blog post](https://blog.vasnetsov.com/posts/categorical-hnsw/).
Qdrant solves this problem by extending the HNSW graph with additional edges based on the stored payload values. Extra edges allow you to efficiently search for nearby vectors using the HNSW index and apply filters as you search in the graph. This approach minimizes the overhead on condition checks since you only need to calculate the conditions for a small fraction of the points involved in the search.
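In practice, this means a single request can combine a vector query with payload conditions and still benefit from the graph index, provided a payload index exists for the filtered field. Below is a minimal sketch using the Python client; the field name, query vector, and values are placeholders:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# A payload index on the filtered field allows Qdrant to build
# the additional payload-based links in the HNSW graph
client.create_payload_index(
    collection_name="{collection_name}",
    field_name="city",
    field_schema="keyword",
)

# Filtered vector search in a single request
client.search(
    collection_name="{collection_name}",
    query_vector=[0.2, 0.1, 0.9, 0.7],  # placeholder query vector
    query_filter=models.Filter(
        must=[
            models.FieldCondition(key="city", match=models.MatchValue(value="London")),
        ]
    ),
    limit=3,
)
```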
qdrant-landing/content/documentation/concepts/optimizer.md
--- title: Optimizer weight: 70 aliases: - ../optimizer --- # Optimizer It is much more efficient to apply changes in batches than perform each change individually, as many other databases do. Qdrant here is no exception. Since Qdrant operates with data structures that are not always easy to change, it is sometimes necessary to rebuild those structures completely. Storage optimization in Qdrant occurs at the segment level (see [storage](../storage/)). In this case, the segment to be optimized remains readable for the time of the rebuild. ![Segment optimization](/docs/optimization.svg) The availability is achieved by wrapping the segment into a proxy that transparently handles data changes. Changed data is placed in the copy-on-write segment, which has priority for retrieval and subsequent updates. ## Vacuum Optimizer The simplest example of a case where you need to rebuild a segment repository is to remove points. Like many other databases, Qdrant does not delete entries immediately after a query. Instead, it marks records as deleted and ignores them for future queries. This strategy allows us to minimize disk access - one of the slowest operations. However, a side effect of this strategy is that, over time, deleted records accumulate, occupy memory and slow down the system. To avoid these adverse effects, Vacuum Optimizer is used. It is used if the segment has accumulated too many deleted records. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # The minimal fraction of deleted vectors in a segment, required to perform segment optimization deleted_threshold: 0.2 # The minimal number of vectors in a segment, required to perform segment optimization vacuum_min_vector_number: 1000 ``` ## Merge Optimizer The service may require the creation of temporary segments. Such segments, for example, are created as copy-on-write segments during optimization itself. It is also essential to have at least one small segment that Qdrant will use to store frequently updated data. On the other hand, too many small segments lead to suboptimal search performance. There is the Merge Optimizer, which combines the smallest segments into one large segment. It is used if too many segments are created. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # If the number of segments exceeds this value, the optimizer will merge the smallest segments. max_segment_number: 5 ``` ## Indexing Optimizer Qdrant allows you to choose the type of indexes and data storage methods used depending on the number of records. So, for example, if the number of points is less than 10000, using any index would be less efficient than a brute force scan. The Indexing Optimizer is used to implement the enabling of indexes and memmap storage when the minimal amount of records is reached. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # Maximum size (in kilobytes) of vectors to store in-memory per segment. # Segments larger than this threshold will be stored as read-only memmaped file. # Memmap storage is disabled by default, to enable it, set this threshold to a reasonable value. # To disable memmap storage, set this to `0`. 
# Note: 1Kb = 1 vector of size 256 memmap_threshold_kb: 200000 # Maximum size (in kilobytes) of vectors allowed for plain index, exceeding this threshold will enable vector indexing # Default value is 20,000, based on <https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md>. # To disable vector indexing, set to `0`. # Note: 1kB = 1 vector of size 256. indexing_threshold_kb: 20000 ``` In addition to the configuration file, you can also set optimizer parameters separately for each [collection](../collections/). Dynamic parameter updates may be useful, for example, for more efficient initial loading of points. You can disable indexing during the upload process with these settings and enable it immediately after it is finished. As a result, you will not waste extra computation resources on rebuilding the index.
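As a sketch of that workflow with the Python client (the exact parameter name and the threshold value to restore are assumptions to verify against your client version and configuration):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Disable index building while points are being uploaded
client.update_collection(
    collection_name="{collection_name}",
    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=0),
)

# ... upload points here ...

# Re-enable indexing so the optimizer builds the index in the background
client.update_collection(
    collection_name="{collection_name}",
    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
)
```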