---
title: Setup Hybrid Cloud
weight: 1
---
# Creating a Hybrid Cloud Environment
This guide shows you how to set up a **Qdrant cluster** in your **Hybrid Cloud Environment**.
To learn how Hybrid Cloud works, [read the overview document](/documentation/hybrid-cloud/).
## Prerequisites
- **Kubernetes cluster:** To create a Hybrid Cloud Environment, you need a [standard compliant](https://www.cncf.io/training/certification/software-conformance/) Kubernetes cluster. You can run this cluster in any cloud, on-premises, or edge environment, with distributions that range from AWS EKS to VMware vSphere.
- **Storage:** For storage, you need to set up the Kubernetes cluster with a Container Storage Interface (CSI) driver that provides block storage. For vertical scaling, the CSI driver needs to support volume expansion. For backups and restores, the driver needs to support CSI snapshots and restores.
<aside role="status">Network storage systems like NFS or object storage systems such as S3 are not supported.</aside>
- **Permissions:** To install the Qdrant Kubernetes Operator you need to have `cluster-admin` access in your Kubernetes cluster.
- **Connection:** The Qdrant Kubernetes Operator in your cluster needs to be able to connect to Qdrant Cloud. It will create an outgoing connection to `cloud.qdrant.io` on port `443`.
- **Locations:** By default, the Qdrant Cloud Agent and Operator pull Helm charts and container images from `registry.cloud.qdrant.io`. The Qdrant database container image is pulled from `docker.io`.
> **Note:** You can also mirror these images and charts into your own registry and pull them from there.
### CLI tools
During the onboarding, you will need to deploy the Qdrant Kubernetes Operator and Agent using Helm. Make sure you have the following tools installed:
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [helm](https://helm.sh/docs/intro/install/)
You will need access to the Kubernetes cluster, with `kubectl` and `helm` configured to connect to it. Please refer to the documentation of your Kubernetes distribution for more information.
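For example, you can verify that both tools are installed and that your current context can reach the cluster with sufficient permissions (a quick sanity check, not an exhaustive test):

```shell
# Check that the CLI tools are installed and can talk to the cluster
kubectl version
helm version

# Verify that the current context has cluster-admin-level permissions
kubectl auth can-i '*' '*' --all-namespaces
```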
### Required artifacts
Container images:
- `docker.io/qdrant/qdrant`
- `registry.cloud.qdrant.io/qdrant/qdrant-cloud-agent`
- `registry.cloud.qdrant.io/qdrant/qdrant-operator`
- `registry.cloud.qdrant.io/qdrant/cluster-manager`
- `registry.cloud.qdrant.io/qdrant/prometheus`
- `registry.cloud.qdrant.io/qdrant/prometheus-config-reloader`
- `registry.cloud.qdrant.io/qdrant/kube-state-metrics`
Open Containers Initiative (OCI) Helm charts:
- `registry.cloud.qdrant.io/qdrant-charts/qdrant-cloud-agent`
- `registry.cloud.qdrant.io/qdrant-charts/qdrant-operator`
- `registry.cloud.qdrant.io/qdrant-charts/qdrant-cluster-manager`
- `registry.cloud.qdrant.io/qdrant-charts/prometheus`
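If you choose to mirror these artifacts into your own registry, a minimal sketch could look like the following. The version tag `v1.x.y` and the target registry `registry.example.com` are placeholders, and pulling from `registry.cloud.qdrant.io` requires the registry credentials you receive from Qdrant Cloud.

```shell
# Mirror one container image (adjust the tag to the version you need)
docker pull registry.cloud.qdrant.io/qdrant/qdrant-operator:v1.x.y
docker tag registry.cloud.qdrant.io/qdrant/qdrant-operator:v1.x.y registry.example.com/qdrant/qdrant-operator:v1.x.y
docker push registry.example.com/qdrant/qdrant-operator:v1.x.y

# Mirror one OCI Helm chart
helm pull oci://registry.cloud.qdrant.io/qdrant-charts/qdrant-operator
helm push qdrant-operator-<version>.tgz oci://registry.example.com/qdrant-charts
```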
## Installation
1. To set up Hybrid Cloud, open the Qdrant Cloud Console at [cloud.qdrant.io](https://cloud.qdrant.io). On the dashboard, select **Hybrid Cloud**.
2. Before creating your first Hybrid Cloud Environment, you have to provide billing information and accept the Hybrid Cloud license agreement. The installation wizard will guide you through the process.
> **Note:** You will only be charged for the Qdrant cluster you create in a Hybrid Cloud Environment, but not for the environment itself.
3. Now you can specify the following:
- **Name:** A name for the Hybrid Cloud Environment
- **Kubernetes Namespace:** The Kubernetes namespace for the operator and agent. Once you select a namespace, you can't change it.
You can also configure the StorageClass and VolumeSnapshotClass to use for the Qdrant databases, if you want to deviate from the default settings of your cluster.
4. You can then enter the YAML configuration for your Kubernetes operator. Qdrant supports a specific list of configuration options, as described in the [Qdrant Operator configuration](/documentation/hybrid-cloud/operator-configuration/) section.
5. (Optional) If you have special requirements for any of the following, activate the **Show advanced configuration** option:
- If you use a proxy to connect from your infrastructure to the Qdrant Cloud API, you can specify the proxy URL, credentials, and certificates.
- Container registry URL for Qdrant Operator and Agent images. The default is <https://registry.cloud.qdrant.io/qdrant/>.
- Helm chart repository URL for the Qdrant Operator and Agent. The default is <oci://registry.cloud.qdrant.io/qdrant-charts>.
- Log level for the operator and agent
6. Once complete, click **Create**.
> **Note:** All settings but the Kubernetes namespace can be changed later.
### Generate Installation Command
After creating your Hybrid Cloud Environment, select **Generate Installation Command** to generate a script that you can run in your Kubernetes cluster. It performs the initial installation of the Kubernetes operator and agent and will:
- Create the Kubernetes namespace, if not present
- Set up the necessary secrets with credentials to access the Qdrant container registry and the Qdrant Cloud API
- Sign in to the Helm registry at `registry.cloud.qdrant.io`
- Install the Qdrant Cloud Agent and Kubernetes Operator charts
You need this command only for the initial installation. After that, you can update the agent and operator using the Qdrant Cloud Console.
> **Note:** If you generate the installation command a second time, it will re-generate the included secrets, and you will have to apply the command again to update them.
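After running the generated command, you can verify that the components are up; replace `the-qdrant-namespace` with the namespace you chose:

```shell
# The operator and agent pods should reach the Running state
kubectl -n the-qdrant-namespace get pods

# The Helm releases installed by the script should be listed as deployed
helm -n the-qdrant-namespace list
```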
## Deleting a Hybrid Cloud Environment
To delete a Hybrid Cloud Environment, first delete all Qdrant database clusters in it. Then you can delete the environment itself.
To clean up your Kubernetes cluster, after deleting the Hybrid Cloud Environment, you can use the following command:
```shell
helm -n the-qdrant-namespace delete qdrant-cloud-agent
helm -n the-qdrant-namespace delete qdrant-prometheus
helm -n the-qdrant-namespace delete qdrant-operator
kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-cloud-agent -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-prometheus -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-operator -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-cloud-agent -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-prometheus -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-operator -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n the-qdrant-namespace patch HelmRepository.cd.qdrant.io qdrant-cloud -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl delete namespace the-qdrant-namespace
kubectl get crd -o name | grep qdrant | xargs -n 1 kubectl delete
```
---
title: Configure the Qdrant Operator
weight: 3
---
# Configuring Qdrant Operator: Advanced Options
The Qdrant Operator has several configuration options, which can be configured in the advanced section of your Hybrid Cloud Environment.
The following YAML shows all configuration options with their default values:
```yaml
# Retention for the backup history of Qdrant clusters
backupHistoryRetentionDays: 2
# Timeout configuration for the Qdrant operator operations
operationTimeout: 7200 # 2 hours
handlerTimeout: 21600 # 6 hours
backupTimeout: 12600 # 3.5 hours
# Incremental backoff configuration for the Qdrant operator operations
backOff:
  minDelay: 5
  maxDelay: 300
  increment: 5
# node_selector: {}
# tolerations: []
# Default ingress configuration for a Qdrant cluster
ingress:
  enabled: false
  provider: KubernetesIngress # or NginxIngress
  # kubernetesIngress:
  #   ingressClassName: ""
# Default storage configuration for a Qdrant cluster
#storage:
  # Default VolumeSnapshotClass for a Qdrant cluster
  # snapshot_class: "csi-snapclass"
  # Default StorageClass for a Qdrant cluster, uses cluster default StorageClass if not set
  # default_storage_class_names:
  #   # StorageClass for DB volumes
  #   db: ""
  #   # StorageClass for snapshot volumes
  #   snapshots: ""
# Default scheduling configuration for a Qdrant cluster
#scheduling:
  # default_topology_spread_constraints: []
  # default_pod_disruption_budget: {}
qdrant:
  # Default security context for Qdrant cluster
  # securityContext:
  #   enabled: false
  #   user: ""
  #   fsGroup: ""
  #   group: ""
  # Default Qdrant image configuration
  # image:
  #   pull_secret: ""
  #   pull_policy: IfNotPresent
  #   repository: qdrant/qdrant
  # Default Qdrant log_level
  # log_level: INFO
  # Default network policies to create for a qdrant cluster
  networkPolicies:
    ingress:
      - ports:
          - protocol: TCP
            port: 6333
          - protocol: TCP
            port: 6334
    # Allow DNS resolution from qdrant pods at Kubernetes internal DNS server
    egress:
      - to:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
        ports:
          - protocol: UDP
            port: 53
```
---
title: Networking, Logging & Monitoring
weight: 4
---
# Configuring Networking, Logging & Monitoring in Qdrant Hybrid Cloud
## Configure network policies
For security reasons, each database cluster is secured with network policies. By default, database pods only allow egress traffic between each other and accept ingress traffic on ports 6333 (REST) and 6334 (gRPC) from within the Kubernetes cluster.
You can modify the default network policies in the Hybrid Cloud environment configuration:
```yaml
qdrant:
  networkPolicies:
    ingress:
      - from:
          - ipBlock:
              cidr: 192.168.0.0/22
          - podSelector:
              matchLabels:
                app: client-app
            namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: client-namespace
          - podSelector:
              matchLabels:
                app: traefik
            namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
        ports:
          - port: 6333
            protocol: TCP
          - port: 6334
            protocol: TCP
```
## Logging
You can access the logs with kubectl or the Kubernetes log management tool of your choice. For example:
```bash
kubectl -n qdrant-namespace logs -l app=qdrant,cluster-id=9a9f48c7-bb90-4fb2-816f-418a46a74b24
```
**Configuring log levels:** You can configure log levels for the databases individually in the configuration section of the Qdrant Cluster detail page. The log level for the **Qdrant Cloud Agent** and **Operator** can be set in the [Hybrid Cloud Environment configuration](/documentation/hybrid-cloud/operator-configuration/).
## Monitoring
The Qdrant Cloud console gives you access to basic metrics about CPU, memory, and disk usage of your Qdrant clusters. You can also access the Prometheus metrics endpoint of your Qdrant databases. Finally, you can use a Kubernetes workload monitoring tool of your choice to monitor your Qdrant clusters.
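For example, assuming a cluster like the one shown in the logging example above, you can port-forward its service and read the Prometheus metrics endpoint locally (add an `api-key` header to the request if your cluster requires authentication):

```shell
kubectl -n qdrant-namespace port-forward service/qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24 6333:6333

# In a second terminal:
curl http://localhost:6333/metrics
```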
---
title: Deployment Platforms
weight: 5
---
# Qdrant Hybrid Cloud: Hosting Platforms & Deployment Options
This page provides an overview of how to deploy Qdrant Hybrid Cloud on various managed Kubernetes platforms.
For a general list of prerequisites and installation steps, see our [Hybrid Cloud setup guide](/documentation/hybrid-cloud/hybrid-cloud-setup/).
![Akamai](/documentation/cloud/cloud-providers/akamai.jpg)
## Akamai (Linode)
[The Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/) is a managed container orchestration engine built on top of Kubernetes. LKE enables you to quickly deploy and manage your containerized applications without needing to build (and maintain) your own Kubernetes cluster. All LKE instances are equipped with a fully managed control plane at no additional cost.
First, consult Linode's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on LKE**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Linode Kubernetes Engine
- [Getting Started with LKE](https://www.linode.com/docs/products/compute/kubernetes/get-started/)
- [LKE Guides](https://www.linode.com/docs/products/compute/kubernetes/guides/)
- [LKE API Reference](https://www.linode.com/docs/api/)
At the time of writing, Linode [does not support CSI Volume Snapshots](https://github.com/linode/linode-blockstorage-csi-driver/issues/107).
![AWS](/documentation/cloud/cloud-providers/aws.jpg)
## Amazon Web Services (AWS)
[Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/) is a managed service to run Kubernetes in the AWS cloud and on-premises data centers which can then be paired with Qdrant's hybrid cloud. With Amazon EKS, you can take advantage of all the performance, scale, reliability, and availability of AWS infrastructure, as well as integrations with AWS networking and security services.
First, consult AWS' managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on AWS**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Amazon Elastic Kubernetes Service
- [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/)
- [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
- [Amazon EKS API Reference](https://docs.aws.amazon.com/eks/latest/APIReference/Welcome.html)
Your EKS cluster needs the EKS EBS CSI driver or a similar storage driver:
- [Amazon EBS CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)
To allow vertical scaling, you need a StorageClass with volume expansion enabled:
- [Amazon EBS CSI Volume Resizing](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/examples/kubernetes/resizing/README.md)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ebs-sc
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
To allow backups and restores, your EKS cluster needs the CSI snapshot controller:
- [Amazon EBS CSI Snapshot Controller](https://docs.aws.amazon.com/eks/latest/userguide/csi-snapshot-controller.html)
And you need to create a VolumeSnapshotClass:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: ebs.csi.aws.com
```
![Civo](/documentation/cloud/cloud-providers/civo.jpg)
## Civo
[Civo Kubernetes](https://www.civo.com/kubernetes) is a robust, scalable, and managed Kubernetes service. Civo supplies a CNCF-compliant Kubernetes cluster and makes it easy to provide standard Kubernetes applications and containerized workloads. User-defined Kubernetes clusters can be created as self-service without complications using the Civo Portal.
First, consult Civo's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Civo**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Civo Kubernetes
- [Getting Started with Civo Kubernetes](https://www.civo.com/docs/kubernetes)
- [Civo Tutorials](https://www.civo.com/learn)
- [Frequently Asked Questions on Civo](https://www.civo.com/docs/faq)
To allow backups and restores, you need to create a VolumeSnapshotClass:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: csi.civo.com
```
![Digital Ocean](/documentation/cloud/cloud-providers/digital-ocean.jpg)
## Digital Ocean
[DigitalOcean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.
First, consult Digital Ocean's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on DigitalOcean**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on DigitalOcean Kubernetes
- [Getting Started with DOKS](https://docs.digitalocean.com/products/kubernetes/getting-started/quickstart/)
- [DOKS - How To Guides](https://docs.digitalocean.com/products/kubernetes/how-to/)
- [DOKS - Reference Manual](https://docs.digitalocean.com/products/kubernetes/reference/)
![Gcore](/documentation/cloud/cloud-providers/gcore.svg)
## Gcore
[Gcore Managed Kubernetes](https://gcore.com/cloud/managed-kubernetes) is a managed container orchestration engine built on top of Kubernetes. Gcore enables you to quickly deploy and manage your containerized applications without needing to build (and maintain) your own Kubernetes cluster. All Gcore instances are equipped with a fully managed control plane at no additional cost.
First, consult Gcore's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Gcore**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Gcore Kubernetes Engine
- [Getting Started with Kubernetes on Gcore](https://gcore.com/docs/cloud/kubernetes/about-gcore-kubernetes)
![Google Cloud Platform](/documentation/cloud/cloud-providers/gcp.jpg)
## Google Cloud Platform (GCP)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) is a managed Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure. GKE provides the operational power of Kubernetes while managing many of the underlying components, such as the control plane and nodes, for you.
First, consult GCP's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on GCP**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on the Google Kubernetes Engine
- [Getting Started with GKE](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- [GKE Tutorials](https://cloud.google.com/kubernetes-engine/docs/tutorials)
- [GKE Documentation](https://cloud.google.com/kubernetes-engine/docs/)
To allow backups and restores, your GKE cluster needs the CSI VolumeSnapshot controller and class:
- [Google GKE Volume Snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots)
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: pd.csi.storage.gke.io
```
![Microsoft Azure](/documentation/cloud/cloud-providers/azure.jpg)
## Microsoft Azure
With [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-in/products/kubernetes-service), you can start developing and deploying cloud-native apps in Azure, data centres, or at the edge. Get unified management and governance for on-premises, edge, and multi-cloud Kubernetes clusters. Interoperate with Azure security, identity, cost management, and migration services.
First, consult Azure's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Azure**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Azure Kubernetes Service
- [Getting Started with AKS](https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-start-here)
- [AKS Documentation](https://learn.microsoft.com/en-in/azure/aks/)
- [Best Practices with AKS](https://learn.microsoft.com/en-in/azure/aks/best-practices)
To allow backups and restores, your AKS cluster needs the CSI VolumeSnapshot controller and class:
- [Azure AKS Volume Snapshots](https://learn.microsoft.com/en-us/azure/aks/azure-disk-csi#create-a-volume-snapshot)
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: disk.csi.azure.com
```
![Oracle Cloud Infrastructure](/documentation/cloud/cloud-providers/oracle.jpg)
## Oracle Cloud Infrastructure
[Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://www.oracle.com/in/cloud/cloud-native/container-engine-kubernetes/) is a managed Kubernetes solution that enables you to deploy Kubernetes clusters while ensuring stable operations for both the control plane and the worker nodes through automatic scaling, upgrades, and security patching. Additionally, OKE offers a completely serverless Kubernetes experience with virtual nodes.
First, consult OCI's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on OCI**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on OCI Container Engine
- [Getting Started with OCI](https://docs.oracle.com/en-us/iaas/Content/ContEng/home.htm)
- [Frequently Asked Questions on OCI](https://www.oracle.com/in/cloud/cloud-native/container-engine-kubernetes/faq/)
- [OCI Product Updates](https://docs.oracle.com/en-us/iaas/releasenotes/services/conteng/)
To allow backups and restores, your OCI cluster needs the CSI VolumeSnapshot controller and class:
- [Prerequisites for Creating Volume Snapshots](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingpersistentvolumeclaim_topic-Provisioning_PVCs_on_BV.htm#contengcreatingpersistentvolumeclaim_topic-Provisioning_PVCs_on_BV-PV_From_Snapshot_CSI__section_volume-snapshot-prerequisites)
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: blockvolume.csi.oraclecloud.com
```
![OVHcloud](/documentation/cloud/cloud-providers/ovh.jpg)
## OVHcloud
[Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/) is powered by OVH Public Cloud Instances from OVHcloud, a leading European cloud provider, and comes with OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility.
First, consult OVHcloud's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on OVHcloud**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Service Managed Kubernetes by OVHcloud
- [Getting Started with OVH Managed Kubernetes](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s-getting-started)
- [OVH Managed Kubernetes Documentation](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s)
- [OVH Managed Kubernetes Tutorials](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s-tutorials)
![Red Hat](/documentation/cloud/cloud-providers/redhat.jpg)
## Red Hat OpenShift
[Red Hat OpenShift Kubernetes Engine](https://www.redhat.com/en/technologies/cloud-computing/openshift/kubernetes-engine) provides you with the basic functionality of Red Hat OpenShift. It offers a subset of the features that Red Hat OpenShift Container Platform offers, like full access to an enterprise-ready Kubernetes environment and an extensive compatibility test matrix with many of the software elements that you might use in your data centre.
First, consult Red Hat's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Red Hat OpenShift**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on OpenShift Kubernetes Engine
- [Getting Started with Red Hat OpenShift Kubernetes](https://docs.openshift.com/container-platform/4.15/getting_started/kubernetes-overview.html)
- [Red Hat OpenShift Kubernetes Documentation](https://docs.openshift.com/container-platform/4.15/welcome/index.html)
- [Installing on Container Platforms](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/installing/index)
Qdrant databases need a persistent storage solution. See [Openshift Storage Overview](https://docs.openshift.com/container-platform/4.15/storage/index.html).
To allow vertical scaling, you need a StorageClass with [volume expansion enabled](https://docs.openshift.com/container-platform/4.15/storage/expanding-persistent-volumes.html).
To allow backups and restores, your OpenShift cluster needs the [CSI snapshot controller](https://docs.openshift.com/container-platform/4.15/storage/container_storage_interface/persistent-storage-csi-snapshots.html), and you need to create a VolumeSnapshotClass.
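As with the managed platforms above, the VolumeSnapshotClass must reference the CSI driver of the storage solution you use with OpenShift; the driver name below is a placeholder you need to replace. A minimal sketch:

```shell
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: <your-csi-driver-name>
EOF
```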
![Scaleway](/documentation/cloud/cloud-providers/scaleway.jpg)
## Scaleway
[Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) and [Kosmos](https://www.scaleway.com/en/kubernetes-kosmos/) are managed Kubernetes services from [Scaleway](https://www.scaleway.com/en/). They abstract away the complexities of managing and operating a Kubernetes cluster. The primary difference is that Kapsule clusters are composed solely of Scaleway Instances, whereas a Kosmos cluster is a managed multi-cloud Kubernetes engine that allows you to connect instances from any cloud provider to a single managed control plane.
First, consult Scaleway's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Scaleway**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Scaleway Kubernetes
- [Getting Started with Scaleway Kubernetes](https://www.scaleway.com/en/docs/containers/kubernetes/quickstart/#how-to-add-a-scaleway-pool-to-a-kubernetes-cluster)
- [Scaleway Kubernetes Documentation](https://www.scaleway.com/en/docs/containers/kubernetes/)
- [Frequently Asked Questions on Scaleway Kubernetes](https://www.scaleway.com/en/docs/faq/kubernetes/)
![STACKIT](/documentation/cloud/cloud-providers/stackit.jpg)
## STACKIT
[STACKIT Kubernetes Engine (SKE)](https://www.stackit.de/en/product/kubernetes/) is a robust, scalable, and managed Kubernetes service. SKE supplies a CNCF-compliant Kubernetes cluster and makes it easy to provide standard Kubernetes applications and containerized workloads. User-defined Kubernetes clusters can be created as self-service without complications using the STACKIT Portal.
First, consult STACKIT's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on STACKIT**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on STACKIT Kubernetes Engine
- [Getting Started with SKE](https://docs.stackit.cloud/stackit/en/getting-started-ske-10125565.html)
- [SKE Tutorials](https://docs.stackit.cloud/stackit/en/tutorials-ske-66683162.html)
- [Frequently Asked Questions on SKE](https://docs.stackit.cloud/stackit/en/faq-known-issues-of-ske-28476393.html)
To allow backups and restores, you need to create a VolumeSnapshotClass:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
deletionPolicy: Delete
driver: cinder.csi.openstack.org
```
![Vultr](/documentation/cloud/cloud-providers/vultr.jpg)
## Vultr
[Vultr Kubernetes Engine (VKE)](https://www.vultr.com/kubernetes/) is a fully-managed product offering with predictable pricing that makes Kubernetes easy to use. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS.
First, consult Vultr's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Vultr**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
### More on Vultr Kubernetes Engine
- [VKE Guide](https://docs.vultr.com/vultr-kubernetes-engine)
- [VKE Documentation](https://docs.vultr.com/)
- [Frequently Asked Questions on VKE](https://docs.vultr.com/vultr-kubernetes-engine#frequently-asked-questions)
At the time of writing, Vultr does not support CSI Volume Snapshots.
![Kubernetes](/documentation/cloud/cloud-providers/kubernetes.jpg)
## Generic Kubernetes Support (on-premises, cloud, edge)
Qdrant Hybrid Cloud works with any Kubernetes cluster that meets the [standard compliance](https://www.cncf.io/training/certification/software-conformance/) requirements.
This includes for example:
- [VMWare Tanzu](https://tanzu.vmware.com/kubernetes-grid)
- [Red Hat OpenShift](https://www.openshift.com/)
- [SUSE Rancher](https://www.rancher.com/)
- [Canonical Kubernetes](https://ubuntu.com/kubernetes)
- [RKE](https://rancher.com/docs/rke/latest/en/)
- [RKE2](https://docs.rke2.io/)
- [K3s](https://k3s.io/)
Qdrant databases need persistent block storage. Most storage solutions provide a CSI driver that can be used with Kubernetes. See [CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html) for more information.
To allow vertical scaling, you need a StorageClass with volume expansion enabled. See [Volume Expansion](https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion) for more information.
To allow backups and restores, your CSI driver needs to support volume snapshots, and your cluster needs the CSI VolumeSnapshot controller and a VolumeSnapshotClass. See [CSI Volume Snapshots](https://kubernetes-csi.github.io/docs/snapshot-controller.html) for more information.
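A quick way to check whether your cluster already meets these storage requirements (the last two commands assume the snapshot CRDs are installed if snapshots are supported):

```shell
# List StorageClasses and whether they allow volume expansion
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,EXPANSION:.allowVolumeExpansion

# Check that the snapshot CRDs and at least one VolumeSnapshotClass exist
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
kubectl get volumesnapshotclass
```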
## Next Steps
Once you've got a Kubernetes cluster deployed on a platform of your choosing, you can begin setting up Qdrant Hybrid Cloud. Head to our Qdrant Hybrid Cloud [setup guide](/documentation/hybrid-cloud/hybrid-cloud-setup/) for instructions.
---
title: Create a Cluster
weight: 2
---
# Creating a Qdrant Cluster in Hybrid Cloud
Once you have created a Hybrid Cloud Environment, you can create a Qdrant cluster in that environment. Use the same process to [Create a cluster](/documentation/cloud/create-cluster/). Make sure to select your Hybrid Cloud Environment as the target.
Note that in the "Kubernetes Configuration" section you can additionally configure:
* Node selectors for the Qdrant database pods
* Tolerations for the Qdrant database pods
* Additional labels for the Qdrant database pods
* A service type and annotations for the Qdrant database service
These settings can also be changed after the cluster is created on the cluster detail page.
### Authentication to your Qdrant clusters
In Hybrid Cloud the authentication information is provided by Kubernetes secrets.
You can configure authentication for your Qdrant clusters in the "Configuration" section of the Qdrant Cluster detail page. There you can configure the Kubernetes secret name and key to be used as an API key and/or read-only API key.
One way to create a secret is with kubectl:
```shell
kubectl create secret generic qdrant-api-key --from-literal=api-key=your-secret-api-key --namespace the-qdrant-namespace
```
The resulting secret will look like this:
```yaml
apiVersion: v1
data:
  api-key: ...
kind: Secret
metadata:
  name: qdrant-api-key
  namespace: the-qdrant-namespace
type: Opaque
```
With this command the secret name would be `qdrant-api-key` and the key would be `api-key`.
If you want to retrieve the secret again, you can also use `kubectl`:
```shell
kubectl get secret qdrant-api-key -o jsonpath="{.data.api-key}" --namespace the-qdrant-namespace | base64 --decode
```
### Exposing Qdrant clusters to your client applications
You can expose your Qdrant clusters to your client applications using Kubernetes services and ingresses. By default, a `ClusterIP` service is created for each Qdrant cluster.
Within your Kubernetes cluster, you can access the Qdrant cluster using the service name and port:
```
http://qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24.qdrant-namespace.svc:6333
```
This endpoint is also visible on the cluster detail page.
If you want to access the database from your local developer machine, you can use `kubectl port-forward` to forward the service port to your local machine:
```
kubectl --namespace your-qdrant-namespace port-forward service/qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24 6333:6333
```
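With the port-forward in place, you can check connectivity from your local machine; if you configured an API key secret as described above, pass its value in the `api-key` header:

```shell
curl http://localhost:6333/collections \
  --header 'api-key: your-secret-api-key'
```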
You can also expose the database outside the Kubernetes cluster with a `LoadBalancer` service (if supported in your Kubernetes environment), a `NodePort` service, or an ingress.
The service type and necessary annotations can be configured in the "Kubernetes Configuration" section during cluster creation, or on the cluster detail page.
Especially if you create a LoadBalancer service, you may need to provide annotations for the load balancer configuration. Please refer to the documentation of your cloud provider for more details.
Examples:
* [AWS EKS LoadBalancer annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/)
* [Azure AKS Public LoadBalancer annotations](https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard)
* [Azure AKS Internal LoadBalancer annotations](https://learn.microsoft.com/en-us/azure/aks/internal-lb)
* [GCP GKE LoadBalancer annotations](https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters)
You could also create a LoadBalancer service manually, like this:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24-lb
  namespace: qdrant-namespace
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 6333
  - name: grpc
    port: 6334
  selector:
    app: qdrant
    cluster-id: 9a9f48c7-bb90-4fb2-816f-418a46a74b24
```
An ingress could look like this:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24
  namespace: qdrant-namespace
spec:
  rules:
  - host: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24.your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24
            port:
              number: 6333
```
Please refer to the Kubernetes, ingress controller, and cloud provider documentation for more details.
If you expose the database like this, the endpoint is also reflected on the cluster detail page, and the Qdrant database dashboard link will point to it.
### Configuring TLS
If you want to configure TLS for accessing your Qdrant database in Hybrid Cloud, there are two options:
* You can offload TLS at the ingress or load balancer level.
* You can configure TLS directly in the Qdrant database.
If you want to configure TLS directly in the Qdrant database, you can reference a secret containing the TLS certificate and key in the "Configuration" section of the Qdrant Cluster detail page.
To create such a secret, you can use `kubectl`:
```shell
kubectl create secret tls qdrant-tls --cert=mydomain.com.crt --key=mydomain.com.key --namespace the-qdrant-namespace
```
The resulting secret will look like this:
```yaml
apiVersion: v1
data:
  tls.crt: ...
  tls.key: ...
kind: Secret
metadata:
  name: qdrant-tls
  namespace: the-qdrant-namespace
type: kubernetes.io/tls
```
With this command the secret name to enter into the UI would be `qdrant-tls` and the keys would be `tls.crt` and `tls.key`.
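Once TLS is active in the database, you can verify the certificate from a machine that can reach the endpoint; `your-qdrant-endpoint` is a placeholder for the hostname you exposed:

```shell
curl https://your-qdrant-endpoint:6333 --cacert mydomain.com.crt
```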
---
title: Hybrid Cloud
weight: 9
---
# Qdrant Hybrid Cloud
Seamlessly deploy and manage your vector database across diverse environments, ensuring performance, security, and cost efficiency for AI-driven applications.
[Qdrant Hybrid Cloud](/hybrid-cloud/) integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service.
You can use [Qdrant Cloud's UI](/documentation/cloud/create-cluster/) to create and manage your database clusters, while they still remain within your infrastructure. **All Qdrant databases will operate solely within your network, using your storage and compute resources. All user data will stay securely within your environment and won't be accessible by the Qdrant Cloud platform, or anyone else outside your organization.**
Qdrant Hybrid Cloud ensures data privacy, deployment flexibility, low latency, and delivers cost savings, elevating standards for vector search and AI applications.
**How it works:** Qdrant Hybrid Cloud relies on Kubernetes and works with any standard compliant Kubernetes distribution. When you onboard a Kubernetes cluster as a Hybrid Cloud Environment, you can deploy the Qdrant Kubernetes Operator and Cloud Agent into this cluster. These will manage Qdrant databases within your Kubernetes cluster and establish an outgoing connection to Qdrant Cloud to transport telemetry and receive management instructions. You can then benefit from the same cloud management features and telemetry that are available with any managed Qdrant Cloud cluster.
<aside role="status">Qdrant Cloud does not connect to the API of your Kubernetes cluster, cloud provider, or any other platform APIs.</aside>
**Setup instructions:** To begin using Qdrant Hybrid Cloud, [read our installation guide](/documentation/hybrid-cloud/hybrid-cloud-setup/).
## Hybrid Cloud architecture
The Hybrid Cloud onboarding will install a Kubernetes Operator and Cloud Agent into your Kubernetes cluster.
The Cloud Agent will establish an outgoing connection to `cloud.qdrant.io` on port `443` to transport telemetry and receive management instructions. It will also interact with the Kubernetes API through a ServiceAccount to create, read, update and delete the necessary Qdrant CRs (Custom Resources) based on the configuration setup in the Qdrant Cloud Console.
The Qdrant Kubernetes Operator will manage the Qdrant databases within your Kubernetes cluster. Based on the Qdrant CRs, it will interact with the Kubernetes API through a ServiceAccount to create and manage the necessary resources to deploy and run Qdrant databases, such as Pods, Services, ConfigMaps, and Secrets.
Both components' access is limited to the Kubernetes namespace that you chose during the onboarding process.
After the initial onboarding, the lifecycle of these components will be controlled by the Qdrant Cloud platform via the built-in Helm controller.
You don't need to expose your Kubernetes Cluster to the Qdrant Cloud platform, you don't need to open any ports for incoming traffic, and you don't need to provide any Kubernetes or cloud provider credentials to the Qdrant Cloud platform.
![hybrid-cloud-architecture](/blog/hybrid-cloud/hybrid-cloud-architecture.png)
---
title: Multitenancy
weight: 12
aliases:
- ../tutorials/multiple-partitions
- /tutorials/multiple-partitions/
---
# Configure Multitenancy
**How many collections should you create?** In most cases, you should use a single collection with payload-based partitioning. This approach is called multitenancy. It is efficient for most users, but it requires additional configuration. This document shows you how to set it up.
**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.
## Partition by payload
When an instance is shared between multiple users, you may need to partition vectors by user. This is done so that each user can only access their own vectors and can't see the vectors of other users.
> ### NOTE
>
> The key doesn't necessarily need to be named `group_id`. You can choose a name that best suits your data structure and naming conventions.
1. Add a `group_id` field to each vector in the collection.
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"payload": {"group_id": "user_1"},
"vector": [0.9, 0.1, 0.1]
},
{
"id": 2,
"payload": {"group_id": "user_1"},
"vector": [0.1, 0.9, 0.1]
},
{
"id": 3,
"payload": {"group_id": "user_2"},
"vector": [0.1, 0.1, 0.9]
},
]
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
payload={"group_id": "user_1"},
vector=[0.9, 0.1, 0.1],
),
models.PointStruct(
id=2,
payload={"group_id": "user_1"},
vector=[0.1, 0.9, 0.1],
),
models.PointStruct(
id=3,
payload={"group_id": "user_2"},
vector=[0.1, 0.1, 0.9],
),
],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.upsert("{collection_name}", {
points: [
{
id: 1,
payload: { group_id: "user_1" },
vector: [0.9, 0.1, 0.1],
},
{
id: 2,
payload: { group_id: "user_1" },
vector: [0.1, 0.9, 0.1],
},
{
id: 3,
payload: { group_id: "user_2" },
vector: [0.1, 0.1, 0.9],
},
],
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.upsert_points(UpsertPointsBuilder::new(
"{collection_name}",
vec![
PointStruct::new(1, vec![0.9, 0.1, 0.1], [("group_id", "user_1".into())]),
PointStruct::new(2, vec![0.1, 0.9, 0.1], [("group_id", "user_1".into())]),
PointStruct::new(3, vec![0.1, 0.1, 0.9], [("group_id", "user_2".into())]),
],
))
.await?;
```
```java
import java.util.List;
import java.util.Map;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(0.9f, 0.1f, 0.1f))
.putAllPayload(Map.of("group_id", value("user_1")))
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(vectors(0.1f, 0.9f, 0.1f))
.putAllPayload(Map.of("group_id", value("user_1")))
.build(),
PointStruct.newBuilder()
.setId(id(3))
.setVectors(vectors(0.1f, 0.1f, 0.9f))
.putAllPayload(Map.of("group_id", value("user_2")))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new[] { 0.9f, 0.1f, 0.1f },
Payload = { ["group_id"] = "user_1" }
},
new()
{
Id = 2,
Vectors = new[] { 0.1f, 0.9f, 0.1f },
Payload = { ["group_id"] = "user_1" }
},
new()
{
Id = 3,
Vectors = new[] { 0.1f, 0.1f, 0.9f },
Payload = { ["group_id"] = "user_2" }
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectors(0.9, 0.1, 0.1),
Payload: qdrant.NewValueMap(map[string]any{"group_id": "user_1"}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectors(0.1, 0.9, 0.1),
Payload: qdrant.NewValueMap(map[string]any{"group_id": "user_1"}),
},
{
Id: qdrant.NewIDNum(3),
Vectors: qdrant.NewVectors(0.1, 0.1, 0.9),
Payload: qdrant.NewValueMap(map[string]any{"group_id": "user_2"}),
},
},
})
```
2. Use a filter along with `group_id` to filter vectors for each user.
```http
POST /collections/{collection_name}/points/query
{
"query": [0.1, 0.1, 0.9],
"filter": {
"must": [
{
"key": "group_id",
"match": {
"value": "user_1"
}
}
]
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.1, 0.1, 0.9],
query_filter=models.Filter(
must=[
models.FieldCondition(
key="group_id",
match=models.MatchValue(
value="user_1",
),
)
]
),
limit=10,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.1, 0.1, 0.9],
filter: {
must: [{ key: "group_id", match: { value: "user_1" } }],
},
limit: 10,
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.1, 0.1, 0.9])
.limit(10)
.filter(Filter::must([Condition::matches(
"group_id",
"user_1".to_string(),
)])),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.ConditionFactory.matchKeyword;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder().addMust(matchKeyword("group_id", "user_1")).build())
.setQuery(nearest(0.1f, 0.1f, 0.9f))
.setLimit(10)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.1f, 0.1f, 0.9f },
filter: MatchKeyword("group_id", "user_1"),
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.1, 0.1, 0.9),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("group_id", "user_1"),
},
},
})
```
## Calibrate performance
The speed of indexation may become a bottleneck in this case, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead.
By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.
To implement this approach, you should:
1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
2. Set `m` in the HNSW config to 0. This disables building the global index for the whole collection.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"hnsw_config": {
"payload_m": 16,
"m": 0
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
hnsw_config=models.HnswConfigDiff(
payload_m=16,
m=0,
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
hnsw_config: {
payload_m: 16,
m: 0,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.hnsw_config(HnswConfigDiffBuilder::default().payload_m(16).m(0)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.HnswConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setHnswConfig(HnswConfigDiff.newBuilder().setPayloadM(16).setM(0).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
hnswConfig: new HnswConfigDiff { PayloadM = 16, M = 0 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
HnswConfig: &qdrant.HnswConfigDiff{
PayloadM: qdrant.PtrOf(uint64(16)),
M: qdrant.PtrOf(uint64(0)),
},
})
```
3. Create a keyword payload index for the `group_id` field.
<aside role="alert">
The <code>is_tenant</code> parameter is available as of v1.11.0. Previous versions should use default options for keyword index creation.
</aside>
```http
PUT /collections/{collection_name}/index
{
"field_name": "group_id",
"field_schema": {
"type": "keyword",
"is_tenant": true
}
}
```
```python
client.create_payload_index(
collection_name="{collection_name}",
field_name="group_id",
field_schema=models.KeywordIndexParams(
type="keywprd",
is_tenant=True,
),
)
```
```typescript
client.createPayloadIndex("{collection_name}", {
field_name: "group_id",
field_schema: {
type: "keyword",
is_tenant: true,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateFieldIndexCollectionBuilder,
KeywordIndexParamsBuilder,
FieldType
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"group_id",
FieldType::Keyword,
).field_index_params(
KeywordIndexParamsBuilder::default()
.is_tenant(true)
)
).await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadIndexParams;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
import io.qdrant.client.grpc.Collections.KeywordIndexParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"group_id",
PayloadSchemaType.Keyword,
PayloadIndexParams.newBuilder()
.setKeywordIndexParams(
KeywordIndexParams.newBuilder()
.setIsTenant(true)
.build())
.build(),
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "group_id",
schemaType: PayloadSchemaType.Keyword,
indexParams: new PayloadIndexParams
{
KeywordIndexParams = new KeywordIndexParams
{
IsTenant = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName: "group_id",
FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
FieldIndexParams: qdrant.NewPayloadIndexParams(
&qdrant.KeywordIndexParams{
IsTenant: qdrant.PtrOf(true),
}),
})
```
The `is_tenant=true` parameter is optional, but specifying it provides the storage engine with additional information about how the collection is going to be used.
When specified, the storage structure is organized to co-locate vectors of the same tenant, which can significantly improve performance in some cases.
## Limitations
One downside to this approach is that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors.
---
title: Administration
weight: 10
aliases:
- ../administration
---
# Administration
Qdrant exposes administration tools that enable you to modify the behavior of a Qdrant instance at runtime without manually changing its configuration.
## Locking
A locking API enables users to restrict the possible operations on a Qdrant process.
It is important to mention that:
- The configuration is not persistent, so it is necessary to lock again after a restart.
- Locking applies to a single node only. In a distributed deployment, it is necessary to call the lock endpoint on every node you want to lock.
Lock request sample:
```http
POST /locks
{
"error_message": "write is forbidden",
"write": true
}
```
The `write` flag enables or disables the write lock.
If the write lock is set to `true`, Qdrant doesn't allow creating new collections or adding new data to the existing storage.
However, deletion and update operations are still allowed under the write lock.
This feature enables administrators to prevent a Qdrant process from using more disk space while still permitting users to search and delete unnecessary data.
You can optionally provide the error message that should be used for error responses to users.
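For example, with `curl` against a local instance (adjust host, port, and authentication to your deployment), you can set the write lock and release it again; reading the current lock options back via a GET on the same path is assumed here:

```bash
# Enable the write lock with a custom error message
curl -X POST http://localhost:6333/locks \
    --header 'Content-Type: application/json' \
    --data '{"error_message": "write is forbidden", "write": true}'

# Inspect the current lock options
curl http://localhost:6333/locks

# Release the write lock again
curl -X POST http://localhost:6333/locks \
    --header 'Content-Type: application/json' \
    --data '{"write": false}'
```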
## Recovery mode
*Available as of v1.2.0*
Recovery mode can help in situations where Qdrant fails to start repeatedly.
When starting in recovery mode, Qdrant only loads collection metadata to prevent
going out of memory. This allows you to resolve out of memory situations, for
example, by deleting a collection. After resolving the issue, Qdrant can be restarted
normally to continue operation.
In recovery mode, collection operations are limited to
[deleting](../../concepts/collections/#delete-collection) a
collection. That is because only collection metadata is loaded during recovery.
To enable recovery mode with the Qdrant Docker image you must set the
environment variable `QDRANT_ALLOW_RECOVERY_MODE=true`. The container will try
to start normally first, and restarts in recovery mode if initialisation fails
due to an out of memory error. This behavior is disabled by default.
If using a Qdrant binary, recovery mode can be enabled by setting a recovery
message in an environment variable, such as
`QDRANT__STORAGE__RECOVERY_MODE="My recovery message"`.
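A minimal sketch of starting the container with recovery mode allowed; the port mapping, storage mount, and image tag are whatever you normally use:

```bash
docker run -e QDRANT_ALLOW_RECOVERY_MODE=true \
    -p 6333:6333 \
    -v $(pwd)/qdrant_storage:/qdrant/storage \
    qdrant/qdrant:latest
```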
---
title: Troubleshooting
weight: 170
aliases:
- ../tutorials/common-errors
- /documentation/troubleshooting/
---
# Solving common errors
## Too many files open (OS error 24)
Each collection segment needs some files to be open. At some point you may encounter the following errors in your server log:
```text
Error: Too many files open (OS error 24)
```
In such a case, you may need to increase the limit of open files. You can do this, for example, when launching the Docker container:
```bash
docker run --ulimit nofile=10000:10000 qdrant/qdrant:latest
```
The command above will set both soft and hard limits to `10000`.
If you are not using Docker, the following command will change the limit for the current user session:
```bash
ulimit -n 10000
```
Please note that the command should be executed before you start the Qdrant server.
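To confirm which limit actually applies to a running Qdrant process on Linux, you can check, for example:

```bash
cat /proc/$(pidof qdrant)/limits | grep "open files"
```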
## Can't open Collections meta Wal
When starting a Qdrant instance as part of a distributed deployment, you may
come across an error message similar to this:
```bash
Can't open Collections meta Wal: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }
```
It means that Qdrant cannot start because a collection cannot be loaded. Its
associated [WAL](../../concepts/storage/#versioning) files are currently
unavailable, likely because the same files are already being used by another
Qdrant instance.
Each node must have its own separate storage directory, volume, or mount.
The formed cluster will take care of sharing all data with each node, putting it
all in the correct places for you. If using Kubernetes, each node must have
its own volume. If using Docker, each node must have its own storage mount
or volume. If using Qdrant directly, each node must have its own storage
directory.
---
title: Configuration
weight: 160
aliases:
- ../configuration
- /guides/configuration/
---
# Configuration
To change or correct Qdrant's behavior, default collection settings, and network interface parameters, you can use configuration files.
The default configuration file is located at [config/config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).
To change the default configuration, add a new configuration file and specify
the path with `--config-path path/to/custom_config.yaml`. If running in
production mode, you could also choose to overwrite `config/production.yaml`.
See [ordering](#order-and-priority) for details on how configurations are
loaded.
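For example, to start the `qdrant` binary with a custom configuration file:
```bash
# Load the default configurations first, then overlay the custom configuration
./qdrant --config-path path/to/custom_config.yaml
```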
The [Installation](../installation/) guide contains examples of how to set up Qdrant with a custom configuration for the different deployment methods.
## Order and priority
*Effective as of v1.2.1*
Multiple configurations may be loaded on startup. All of them are merged into a
single effective configuration that is used by Qdrant.
Configurations are loaded in the following order, if present:
1. Embedded base configuration ([source](https://github.com/qdrant/qdrant/blob/master/config/config.yaml))
2. File `config/config.yaml`
3. File `config/{RUN_MODE}.yaml` (such as `config/production.yaml`)
4. File `config/local.yaml`
5. Config provided with `--config-path PATH` (if set)
6. [Environment variables](#environment-variables)
This list is ordered from least to most significant. Properties in later configurations
overwrite those loaded before them. For example, a property set with
`--config-path` will overwrite the same property in any of the files.
Most of these files are included by default in the Docker container. But it is
likely that they are absent on your local machine if you run the `qdrant` binary
manually.
If file 2 or 3 is not found, a warning is shown on startup.
If file 5 is specified but not found, an error is shown on startup.
Other supported configuration file formats and extensions include: `.toml`, `.json`, `.ini`.
## Environment variables
It is possible to set configuration properties using environment variables.
Environment variables are always the most significant and cannot be overwritten
(see [ordering](#order-and-priority)).
All environment variables are prefixed with `QDRANT__`, and nested configuration
keys are separated with `__`.
These variables:
```bash
QDRANT__LOG_LEVEL=INFO
QDRANT__SERVICE__HTTP_PORT=6333
QDRANT__SERVICE__ENABLE_TLS=1
QDRANT__TLS__CERT=./tls/cert.pem
QDRANT__TLS__CERT_TTL=3600
```
result in this configuration:
```yaml
log_level: INFO
service:
http_port: 6333
enable_tls: true
tls:
cert: ./tls/cert.pem
cert_ttl: 3600
```
To run Qdrant locally with a different HTTP port you could use:
```bash
QDRANT__SERVICE__HTTP_PORT=1234 ./qdrant
```
## Configuration file example
```yaml
log_level: INFO
storage:
# Where to store all the data
storage_path: ./storage
# Where to store snapshots
snapshots_path: ./snapshots
snapshots_config:
# "local" or "s3" - where to store snapshots
snapshots_storage: local
# s3_config:
# bucket: ""
# region: ""
# access_key: ""
# secret_key: ""
# endpoint_url: ""
# Where to store temporary files
  # If null, temporary snapshots are stored in: storage/snapshots_temp/
temp_path: null
# If true - point's payload will not be stored in memory.
# It will be read from the disk every time it is requested.
# This setting saves RAM by (slightly) increasing the response time.
# Note: those payload values that are involved in filtering and are indexed - remain in RAM.
on_disk_payload: true
# Maximum number of concurrent updates to shard replicas
# If `null` - maximum concurrency is used.
update_concurrency: null
# Write-ahead-log related configuration
wal:
# Size of a single WAL segment
wal_capacity_mb: 32
# Number of WAL segments to create ahead of actual data requirement
wal_segments_ahead: 0
# Normal node - receives all updates and answers all queries
node_type: "Normal"
# Listener node - receives all updates, but does not answer search/read queries
# Useful for setting up a dedicated backup node
# node_type: "Listener"
performance:
# Number of parallel threads used for search operations. If 0 - auto selection.
max_search_threads: 0
# Max number of threads (jobs) for running optimizations across all collections, each thread runs one job.
# If 0 - have no limit and choose dynamically to saturate CPU.
# Note: each optimization job will also use `max_indexing_threads` threads by itself for index building.
max_optimization_threads: 0
# CPU budget, how many CPUs (threads) to allocate for an optimization job.
# If 0 - auto selection, keep 1 or more CPUs unallocated depending on CPU size
# If negative - subtract this number of CPUs from the available CPUs.
# If positive - use this exact number of CPUs.
optimizer_cpu_budget: 0
# Prevent DDoS of too many concurrent updates in distributed mode.
# One external update usually triggers multiple internal updates, which breaks internal
# timings. For example, the health check timing and consensus timing.
# If null - auto selection.
update_rate_limit: null
# Limit for number of incoming automatic shard transfers per collection on this node, does not affect user-requested transfers.
# The same value should be used on all nodes in a cluster.
# Default is to allow 1 transfer.
# If null - allow unlimited transfers.
#incoming_shard_transfers_limit: 1
# Limit for number of outgoing automatic shard transfers per collection on this node, does not affect user-requested transfers.
# The same value should be used on all nodes in a cluster.
# Default is to allow 1 transfer.
# If null - allow unlimited transfers.
#outgoing_shard_transfers_limit: 1
# Enable async scorer which uses io_uring when rescoring.
# Only supported on Linux, must be enabled in your kernel.
# See: <https://qdrant.tech/articles/io_uring/#and-what-about-qdrant>
#async_scorer: false
optimizers:
# The minimal fraction of deleted vectors in a segment, required to perform segment optimization
deleted_threshold: 0.2
# The minimal number of vectors in a segment, required to perform segment optimization
vacuum_min_vector_number: 1000
# Target amount of segments optimizer will try to keep.
# Real amount of segments may vary depending on multiple parameters:
# - Amount of stored points
# - Current write RPS
#
# It is recommended to select default number of segments as a factor of the number of search threads,
# so that each segment would be handled evenly by one of the threads.
# If `default_segment_number = 0`, will be automatically selected by the number of available CPUs
default_segment_number: 0
    # Do not create segments larger than this size (in KiloBytes).
# Large segments might require disproportionately long indexation times,
# therefore it makes sense to limit the size of segments.
#
    # If indexing speed is a higher priority for you - make this parameter lower.
# If search speed is more important - make this parameter higher.
# Note: 1Kb = 1 vector of size 256
# If not set, will be automatically selected considering the number of available CPUs.
max_segment_size_kb: null
# Maximum size (in KiloBytes) of vectors to store in-memory per segment.
# Segments larger than this threshold will be stored as read-only memmaped file.
# To enable memmap storage, lower the threshold
# Note: 1Kb = 1 vector of size 256
# To explicitly disable mmap optimization, set to `0`.
# If not set, will be disabled by default.
memmap_threshold_kb: null
# Maximum size (in KiloBytes) of vectors allowed for plain index.
# Default value based on https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md
# Note: 1Kb = 1 vector of size 256
# To explicitly disable vector indexing, set to `0`.
# If not set, the default value will be used.
indexing_threshold_kb: 20000
# Interval between forced flushes.
flush_interval_sec: 5
# Max number of threads (jobs) for running optimizations per shard.
# Note: each optimization job will also use `max_indexing_threads` threads by itself for index building.
# If null - have no limit and choose dynamically to saturate CPU.
# If 0 - no optimization threads, optimizations will be disabled.
max_optimization_threads: null
# This section has the same options as 'optimizers' above. All values specified here will overwrite the collections
# optimizers configs regardless of the config above and the options specified at collection creation.
#optimizers_overwrite:
# deleted_threshold: 0.2
# vacuum_min_vector_number: 1000
# default_segment_number: 0
# max_segment_size_kb: null
# memmap_threshold_kb: null
# indexing_threshold_kb: 20000
# flush_interval_sec: 5
# max_optimization_threads: null
# Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
hnsw_index:
# Number of edges per node in the index graph. Larger the value - more accurate the search, more space required.
m: 16
# Number of neighbours to consider during the index building. Larger the value - more accurate the search, more time required to build index.
ef_construct: 100
# Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
# If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
# in this case full-scan search should be preferred by query planner and additional indexing is not required.
# Note: 1Kb = 1 vector of size 256
full_scan_threshold_kb: 10000
# Number of parallel threads used for background index building.
# If 0 - automatically select.
# Best to keep between 8 and 16 to prevent likelihood of building broken/inefficient HNSW graphs.
# On small CPUs, less threads are used.
max_indexing_threads: 0
# Store HNSW index on disk. If set to false, index will be stored in RAM. Default: false
on_disk: false
# Custom M param for hnsw graph built for payload index. If not set, default M will be used.
payload_m: null
# Default shard transfer method to use if none is defined.
# If null - don't have a shard transfer preference, choose automatically.
# If stream_records, snapshot or wal_delta - prefer this specific method.
# More info: https://qdrant.tech/documentation/guides/distributed_deployment/#shard-transfer-method
shard_transfer_method: null
# Default parameters for collections
collection:
# Number of replicas of each shard that network tries to maintain
replication_factor: 1
# How many replicas should apply the operation for us to consider it successful
write_consistency_factor: 1
# Default parameters for vectors.
vectors:
# Whether vectors should be stored in memory or on disk.
on_disk: null
# shard_number_per_node: 1
# Default quantization configuration.
# More info: https://qdrant.tech/documentation/guides/quantization
quantization: null
service:
# Maximum size of POST data in a single request in megabytes
max_request_size_mb: 32
# Number of parallel workers used for serving the api. If 0 - equal to the number of available cores.
# If missing - Same as storage.max_search_threads
max_workers: 0
# Host to bind the service on
host: 0.0.0.0
# HTTP(S) port to bind the service on
http_port: 6333
# gRPC port to bind the service on.
# If `null` - gRPC is disabled. Default: null
# Comment to disable gRPC:
grpc_port: 6334
# Enable CORS headers in REST API.
# If enabled, browsers would be allowed to query REST endpoints regardless of query origin.
# More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
# Default: true
enable_cors: true
# Enable HTTPS for the REST and gRPC API
enable_tls: false
# Check user HTTPS client certificate against CA file specified in tls config
verify_https_client_certificate: false
# Set an api-key.
# If set, all requests must include a header with the api-key.
# example header: `api-key: <API-KEY>`
#
# If you enable this you should also enable TLS.
# (Either above or via an external service like nginx.)
# Sending an api-key over an unencrypted channel is insecure.
#
# Uncomment to enable.
# api_key: your_secret_api_key_here
# Set an api-key for read-only operations.
# If set, all requests must include a header with the api-key.
# example header: `api-key: <API-KEY>`
#
# If you enable this you should also enable TLS.
# (Either above or via an external service like nginx.)
# Sending an api-key over an unencrypted channel is insecure.
#
# Uncomment to enable.
# read_only_api_key: your_secret_read_only_api_key_here
# Uncomment to enable JWT Role Based Access Control (RBAC).
# If enabled, you can generate JWT tokens with fine-grained rules for access control.
# Use generated token instead of API key.
#
# jwt_rbac: true
cluster:
# Use `enabled: true` to run Qdrant in distributed deployment mode
enabled: false
# Configuration of the inter-cluster communication
p2p:
# Port for internal communication between peers
port: 6335
# Use TLS for communication between peers
enable_tls: false
# Configuration related to distributed consensus algorithm
consensus:
# How frequently peers should ping each other.
# Setting this parameter to lower value will allow consensus
# to detect disconnected nodes earlier, but too frequent
# tick period may create significant network and CPU overhead.
# We encourage you NOT to change this parameter unless you know what you are doing.
tick_period_ms: 100
# Set to true to prevent service from sending usage statistics to the developers.
# Read more: https://qdrant.tech/documentation/guides/telemetry
telemetry_disabled: false
# TLS configuration.
# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
tls:
# Server certificate chain file
cert: ./tls/cert.pem
# Server private key file
key: ./tls/key.pem
# Certificate authority certificate file.
# This certificate will be used to validate the certificates
# presented by other nodes during inter-cluster communication.
#
# If verify_https_client_certificate is true, it will verify
# HTTPS client certificate
#
# Required if cluster.p2p.enable_tls is true.
ca_cert: ./tls/cacert.pem
# TTL in seconds to reload certificate from disk, useful for certificate rotations.
# Only works for HTTPS endpoints. Does not support gRPC (and intra-cluster communication).
# If `null` - TTL is disabled.
cert_ttl: 3600
```
## Validation
*Available since v1.1.1*
The configuration is validated on startup. If a configuration is loaded but
validation fails, a warning is logged. For example:
```text
WARN Settings configuration file has validation errors:
WARN - storage.optimizers.memmap_threshold: value 123 invalid, must be 1000 or larger
WARN - storage.hnsw_index.m: value 1 invalid, must be from 4 to 10000
```
The server will continue to operate, but any validation errors should be fixed as
soon as possible to prevent problematic behavior.
| documentation/guides/configuration.md |
---
title: Optimize Resources
weight: 11
aliases:
- ../tutorials/optimize
---
# Optimize Qdrant
Different use cases have different requirements for balancing between memory, speed, and precision.
Qdrant is designed to be flexible and customizable so you can tune it to your needs.
![Tradeoff](/docs/tradeoff.png)
Let's look deeper into each of those possible optimization scenarios.
## Prefer low memory footprint with high speed search
The main way to achieve high speed search with low memory footprint is to keep vectors on disk while at the same time minimizing the number of disk reads.
Vector quantization is one way to achieve this. Quantization converts vectors into a more compact representation, which can be stored in memory and used for search. With smaller vectors you can cache more in RAM and reduce the number of disk reads.
To configure in-memory quantization with on-disk original vectors, you need to create a collection with the following configuration:
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine",
"on_disk": true
},
"quantization_config": {
"scalar": {
"type": "int8",
"always_ram": true
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
always_ram=True,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
on_disk: true,
},
quantization_config: {
scalar: {
type: "int8",
always_ram: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.quantization_config(
ScalarQuantizationBuilder::default()
.r#type(QuantizationType::Int8.into())
.always_ram(true),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.QuantizationType;
import io.qdrant.client.grpc.Collections.ScalarQuantization;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.setOnDisk(true)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setScalar(
ScalarQuantization.newBuilder()
.setType(QuantizationType.Int8)
.setAlwaysRam(true)
.build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true },
quantizationConfig: new QuantizationConfig
{
Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
OnDisk: qdrant.PtrOf(true),
}),
QuantizationConfig: qdrant.NewQuantizationScalar(&qdrant.ScalarQuantization{
Type: qdrant.QuantizationType_Int8,
AlwaysRam: qdrant.PtrOf(true),
}),
})
```
`on_disk` ensures that the original vectors are stored on disk, while `always_ram` ensures that the quantized vectors are kept in RAM.
Optionally, you can disable rescoring with search `params`, which will reduce the number of disk reads even further, but may slightly decrease the precision.
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"params": {
"quantization": {
"rescore": false
}
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
search_params=models.SearchParams(
quantization=models.QuantizationSearchParams(rescore=False)
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
params: {
quantization: {
rescore: false,
},
},
});
```
```rust
use qdrant_client::qdrant::{
QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.params(
SearchParamsBuilder::default()
.quantization(QuantizationSearchParamsBuilder::default().rescore(false)),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QuantizationSearchParams;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.SearchParams;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setParams(
SearchParams.newBuilder()
.setQuantization(
QuantizationSearchParams.newBuilder().setRescore(false).build())
.build())
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
searchParams: new SearchParams
{
Quantization = new QuantizationSearchParams { Rescore = false }
},
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Params: &qdrant.SearchParams{
Quantization: &qdrant.QuantizationSearchParams{
			Rescore: qdrant.PtrOf(false),
},
},
})
```
## Prefer high precision with low memory footprint
If you need high precision but don't have enough RAM to store all vectors in memory, you can store both the original vectors and the HNSW index on disk.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine",
"on_disk": true
},
"hnsw_config": {
"on_disk": true
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
hnsw_config=models.HnswConfigDiff(on_disk=True),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
on_disk: true,
},
hnsw_config: {
on_disk: true,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, VectorParamsBuilder,
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true))
.hnsw_config(HnswConfigDiffBuilder::default().on_disk(true)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.HnswConfigDiff;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.setOnDisk(true)
.build())
.build())
.setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true},
hnswConfig: new HnswConfigDiff { OnDisk = true }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
OnDisk: qdrant.PtrOf(true),
}),
HnswConfig: &qdrant.HnswConfigDiff{
OnDisk: qdrant.PtrOf(true),
},
})
```
In this scenario, you can increase the precision of the search by increasing the `ef_construct` and `m` parameters of the HNSW index, even with limited RAM.
```json
...
"hnsw_config": {
"m": 64,
"ef_construct": 512,
"on_disk": true
}
...
```
Disk IOPS is a critical factor in this scenario; it determines how fast you can perform searches.
You can use [fio](https://gist.github.com/superboum/aaa45d305700a7873a8ebbab1abddf2b) to measure disk IOPS.
## Prefer high precision with high speed search
For high speed and high precision search it is critical to keep as much data in RAM as possible.
By default, Qdrant follows this approach, but you can tune it to your needs.
It is possible to achieve high search speed and tunable accuracy by applying quantization with re-scoring.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"quantization_config": {
"scalar": {
"type": "int8",
"always_ram": true
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
always_ram=True,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
quantization_config: {
scalar: {
type: "int8",
always_ram: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.quantization_config(
ScalarQuantizationBuilder::default()
.r#type(QuantizationType::Int8.into())
.always_ram(true),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.QuantizationType;
import io.qdrant.client.grpc.Collections.ScalarQuantization;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setScalar(
ScalarQuantization.newBuilder()
.setType(QuantizationType.Int8)
.setAlwaysRam(true)
.build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine},
quantizationConfig: new QuantizationConfig
{
Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
QuantizationConfig: qdrant.NewQuantizationScalar(&qdrant.ScalarQuantization{
Type: qdrant.QuantizationType_Int8,
AlwaysRam: qdrant.PtrOf(true),
}),
})
```
There are also some search-time parameters you can use to tune the search accuracy and speed:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"params": {
"hnsw_ef": 128,
"exact": false
},
"limit": 3
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
search_params=models.SearchParams(hnsw_ef=128, exact=False),
limit=3,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
params: {
hnsw_ef: 128,
exact: false,
},
limit: 3,
});
```
```rust
use qdrant_client::qdrant::{QueryPointsBuilder, SearchParamsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.SearchParams;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
searchParams: new SearchParams { HnswEf = 128, Exact = false },
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Params: &qdrant.SearchParams{
HnswEf: qdrant.PtrOf(uint64(128)),
Exact: qdrant.PtrOf(false),
},
})
```
- `hnsw_ef` - controls the number of neighbors to visit during search. The higher the value, the more accurate and slower the search will be. Recommended range is 32-512.
- `exact` - if set to `true`, will perform exact search, which will be slower, but more accurate. You can use it to compare results of the search with different `hnsw_ef` values versus the ground truth.
## Latency vs Throughput
There are two main approaches to measuring the speed of search:
- latency of the request - the time from the moment a request is submitted to the moment a response is received
- throughput - the number of requests per second the system can handle
These approaches are not mutually exclusive, but in some cases it might be preferable to optimize for one or the other.
To minimize latency, you can set up Qdrant to use as many cores as possible for a single request.
You can do this by setting the number of segments in the collection equal to the number of cores in the system. In this case, each segment will be processed in parallel, and the final result will be obtained faster.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"optimizers_config": {
"default_segment_number": 16
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
optimizers_config=models.OptimizersConfigDiff(default_segment_number=16),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
optimizers_config: {
default_segment_number: 16,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.optimizers_config(
OptimizersConfigDiffBuilder::default().default_segment_number(16),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setOptimizersConfig(
OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(16).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 16 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
OptimizersConfig: &qdrant.OptimizersConfigDiff{
DefaultSegmentNumber: qdrant.PtrOf(uint64(16)),
},
})
```
To maximize throughput, you can set up Qdrant to use as many cores as possible for processing multiple requests in parallel.
To do that, configure Qdrant to use a minimal number of segments, which is usually 2.
Large segments benefit from a bigger index and an overall smaller number of vector comparisons required to find the nearest neighbors, but they also require more time to build the index.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"optimizers_config": {
"default_segment_number": 2
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
optimizers_config=models.OptimizersConfigDiff(default_segment_number=2),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
optimizers_config: {
default_segment_number: 2,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.optimizers_config(
OptimizersConfigDiffBuilder::default().default_segment_number(2),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setOptimizersConfig(
OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(2).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 2 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
OptimizersConfig: &qdrant.OptimizersConfigDiff{
DefaultSegmentNumber: qdrant.PtrOf(uint64(2)),
},
})
```
 | documentation/guides/optimize.md |
---
title: Telemetry
weight: 150
aliases:
- ../telemetry
---
# Telemetry
Qdrant collects anonymized usage statistics from users in order to improve the engine.
You can [deactivate telemetry](#deactivate-telemetry) at any time, and any data that has already been collected can be [deleted on request](#request-information-deletion).
## Why do we collect telemetry?
We want to make Qdrant fast and reliable. To do this, we need to understand how it performs in real-world scenarios.
We do a lot of benchmarking internally, but it is impossible to cover all possible use cases, hardware, and configurations.
In order to identify bottlenecks and improve Qdrant, we need to collect information about how it is used.
Additionally, Qdrant uses a bunch of internal heuristics to optimize the performance.
To better set up parameters for these heuristics, we need to collect timings and counters of various pieces of code.
With this information, we can make Qdrant faster for everyone.
## What information is collected?
There are 3 types of information that we collect:
* System information - general information about the system, such as CPU, RAM, and disk type. As well as the configuration of the Qdrant instance.
* Performance - information about timings and counters of various pieces of code.
* Critical error reports - information about critical errors, such as backtraces, that occurred in Qdrant. This information allows us to identify problems that nobody has reported to us yet.
### We **never** collect the following information:
- User's IP address
- Any data that can be used to identify the user or the user's organization
- Any data, stored in the collections
- Any names of the collections
- Any URLs
## How do we anonymize data?
We understand that some users may be concerned about the privacy of their data.
That is why we make an extra effort to ensure your privacy.
There are several different techniques that we use to anonymize the data:
- We use a random UUID to identify instances. This UUID is generated on each startup and is not stored anywhere. There are no other ways to distinguish between different instances.
- We round all big numbers, so that the last digits are always 0. For example, if the number is 123456789, we will store 123456000.
- We replace all names with irreversibly hashed values. So no collection or field names will leak into the telemetry.
- All URLs are hashed as well.
You can see the exact anonymized data that is collected by accessing the [telemetry API](https://api.qdrant.tech/master/api-reference/service/telemetry) with the `anonymize=true` parameter.
For example, <http://localhost:6333/telemetry?details_level=6&anonymize=true>
## Deactivate telemetry
You can deactivate telemetry by:
- setting the `QDRANT__TELEMETRY_DISABLED` environment variable to `true`
- setting the config option `telemetry_disabled` to `true` in the `config/production.yaml` or `config/config.yaml` files
- using the CLI option `--disable-telemetry`
Any of these options will prevent Qdrant from sending any telemetry data.
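For example, the environment variable and CLI options could be used as follows when starting the binary:
```bash
# Disable telemetry via an environment variable
QDRANT__TELEMETRY_DISABLED=true ./qdrant

# ...or via the command-line option
./qdrant --disable-telemetry
```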
If you decide to deactivate telemetry, we kindly ask you to share your feedback with us in the [Discord community](https://qdrant.to/discord) or GitHub [discussions](https://github.com/qdrant/qdrant/discussions).
## Request information deletion
We provide an email address so that users can request the complete removal of their data from all of our tools.
To do so, send an email to privacy@qdrant.com containing the unique identifier generated for your Qdrant installation.
You can find this identifier in the telemetry API response (`"id"` field), or in the logs of your Qdrant instance.
Any questions regarding the management of the data we collect can also be sent to this email address.
| documentation/guides/telemetry.md |
---
title: Distributed Deployment
weight: 100
aliases:
- ../distributed_deployment
- /guides/distributed_deployment
---
# Distributed deployment
Since version v0.8.0, Qdrant supports a distributed deployment mode.
In this mode, multiple Qdrant services communicate with each other to distribute data across the peers, which extends storage capabilities and increases stability.
## How many Qdrant nodes should I run?
The ideal number of Qdrant nodes depends on how much you value cost-saving, resilience, and performance/scalability in relation to each other.
- **Prioritizing cost-saving**: If cost is most important to you, run a single Qdrant node. This is not recommended for production environments. Drawbacks:
- Resilience: Users will experience downtime during node restarts, and recovery is not possible unless you have backups or snapshots.
- Performance: Limited to the resources of a single server.
- **Prioritizing resilience**: If resilience is most important to you, run a Qdrant cluster with three or more nodes and two or more shard replicas. Clusters with three or more nodes and replication can perform all operations even while one node is down. Additionally, they gain performance benefits from load-balancing and they can recover from the permanent loss of one node without the need for backups or snapshots (but backups are still strongly recommended). This is most recommended for production environments. Drawbacks:
- Cost: Larger clusters are more costly than smaller clusters, which is the only drawback of this configuration.
- **Balancing cost, resilience, and performance**: Running a two-node Qdrant cluster with replicated shards allows the cluster to respond to most read/write requests even when one node is down, such as during maintenance events. Having two nodes also means greater performance than a single-node cluster while still being cheaper than a three-node cluster. Drawbacks:
- Resilience (uptime): The cluster cannot perform operations on collections when one node is down. Those operations require >50% of nodes to be running, so this is only possible in a 3+ node cluster. Since creating, editing, and deleting collections are usually rare operations, many users find this drawback to be negligible.
- Resilience (data integrity): If the data on one of the two nodes is permanently lost or corrupted, it cannot be recovered aside from snapshots or backups. Only 3+ node clusters can recover from the permanent loss of a single node since recovery operations require >50% of the cluster to be healthy.
- Cost: Replicating your shards requires storing two copies of your data.
- Performance: The maximum performance of a Qdrant cluster increases as you add more nodes.
In summary, single-node clusters are best for non-production workloads, replicated 3+ node clusters are the gold standard, and replicated 2-node clusters strike a good balance.
## Enabling distributed mode in self-hosted Qdrant
To enable distributed deployment, enable cluster mode in the [configuration](../configuration/) or using the environment variable `QDRANT__CLUSTER__ENABLED=true`.
```yaml
cluster:
# Use `enabled: true` to run Qdrant in distributed deployment mode
enabled: true
# Configuration of the inter-cluster communication
p2p:
# Port for internal communication between peers
port: 6335
# Configuration related to distributed consensus algorithm
consensus:
# How frequently peers should ping each other.
# Setting this parameter to lower value will allow consensus
# to detect disconnected node earlier, but too frequent
# tick period may create significant network and CPU overhead.
# We encourage you NOT to change this parameter unless you know what you are doing.
tick_period_ms: 100
```
By default, Qdrant will use port `6335` for its internal communication.
All peers should be accessible on this port from within the cluster, but make sure to isolate this port from outside access, as it might be used to perform write operations.
Additionally, you must provide the `--uri` flag to the first peer so it can tell other nodes how it should be reached:
```bash
./qdrant --uri 'http://qdrant_node_1:6335'
```
Subsequent peers in a cluster must know at least one node of the existing cluster to synchronize through it with the rest of the cluster.
To do this, they need to be provided with a bootstrap URL:
```bash
./qdrant --bootstrap 'http://qdrant_node_1:6335'
```
The URI of each new peer will be calculated automatically from the IP address of its request.
But it is also possible to provide it explicitly using the `--uri` argument (see the example below).
```text
USAGE:
qdrant [OPTIONS]
OPTIONS:
--bootstrap <URI>
Uri of the peer to bootstrap from in case of multi-peer deployment. If not specified -
this peer will be considered as a first in a new deployment
--uri <URI>
Uri of this peer. Other peers should be able to reach it by this uri.
This value has to be supplied if this is the first peer in a new deployment.
In case this is not the first peer and it bootstraps the value is optional. If not
supplied then qdrant will take internal grpc port from config and derive the IP address
of this peer on bootstrap peer (receiving side)
```
After a successful synchronization you can observe the state of the cluster through the [REST API](https://api.qdrant.tech/master/api-reference/distributed/cluster-status):
```http
GET /cluster
```
Example result:
```json
{
"result": {
"status": "enabled",
"peer_id": 11532566549086892000,
"peers": {
"9834046559507417430": {
"uri": "http://172.18.0.3:6335/"
},
"11532566549086892528": {
"uri": "http://qdrant_node_1:6335/"
}
},
"raft_info": {
"term": 1,
"commit": 4,
"pending_operations": 1,
"leader": 11532566549086892000,
"role": "Leader"
}
},
"status": "ok",
"time": 5.731e-06
}
```
Note that enabling distributed mode does not automatically replicate your data. See the section on [making use of a new distributed Qdrant cluster](#making-use-of-a-new-distributed-qdrant-cluster) for the next steps.
## Enabling distributed mode in Qdrant Cloud
For best results, first ensure your cluster is running Qdrant v1.7.4 or higher. Older versions of Qdrant do support distributed mode, but improvements in v1.7.4 make distributed clusters more resilient during outages.
In the [Qdrant Cloud console](https://cloud.qdrant.io/), click "Scale Up" to increase your cluster size to >1. Qdrant Cloud configures the distributed mode settings automatically.
After the scale-up process completes, you will have a new empty node running alongside your existing node(s). To replicate data into this new empty node, see the next section.
## Making use of a new distributed Qdrant cluster
When you enable distributed mode and scale up to two or more nodes, your data does not move to the new node automatically; it starts out empty. To make use of your new empty node, do one of the following:
* Create a new replicated collection by setting the [replication_factor](#replication-factor) to 2 or more and setting the [number of shards](#choosing-the-right-number-of-shards) to a multiple of your number of nodes.
* If you have an existing collection which does not contain enough shards for each node, you must create a new collection as described in the previous bullet point.
* If you already have enough shards for each node and you merely need to replicate your data, follow the directions for [creating new shard replicas](#creating-new-shard-replicas).
* If you already have enough shards for each node and your data is already replicated, you can move data (without replicating it) onto the new node(s) by [moving shards](#moving-shards).
## Raft
Qdrant uses the [Raft](https://raft.github.io/) consensus protocol to maintain consistency regarding the cluster topology and the collections structure.
Operations on points, on the other hand, do not go through the consensus infrastructure.
Qdrant is not intended to have strong transaction guarantees, which allows it to perform point operations with low overhead.
In practice, it means that Qdrant does not guarantee atomic distributed updates but allows you to wait until the [operation is complete](../../concepts/points/#awaiting-result) to see the results of your writes.
Operations on collections, on the contrary, are part of the consensus which guarantees that all operations are durable and eventually executed by all nodes.
In practice, it means that a majority of nodes must agree on an operation before the service applies it.
As a consequence, if the cluster is in a transition state, either electing a new leader after a failure or starting up, collection update operations will be denied.
You may use the cluster [REST API](https://api.qdrant.tech/master/api-reference/distributed/cluster-status) to check the state of the consensus.
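Because point operations bypass consensus, you may want to wait until an update is fully applied before reading it back. Below is a minimal sketch using the REST API's `wait` parameter; the collection name, point ID, and vector values are placeholders:
```bash
# Upsert a point and block until the operation is applied on the receiving node
curl -X PUT 'http://localhost:6333/collections/{collection_name}/points?wait=true' \
  -H 'Content-Type: application/json' \
  -d '{
        "points": [
          { "id": 1, "vector": [0.2, 0.1, 0.9, 0.7], "payload": { "city": "Berlin" } }
        ]
      }'
```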
## Sharding
A Collection in Qdrant is made of one or more shards.
A shard is an independent store of points which is able to perform all operations provided by collections.
There are two methods of distributing points across shards:
- **Automatic sharding**: Points are distributed among shards by using a [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) algorithm, so that shards are managing non-intersecting subsets of points. This is the default behavior.
- **User-defined sharding**: _Available as of v1.7.0_ - Each point is uploaded to a specific shard, so that operations can hit only the shard or shards they need. Even with this distribution, shards still ensure having non-intersecting subsets of points. [See more...](#user-defined-sharding)
Each node knows where all parts of the collection are stored through the [consensus protocol](./#raft), so when you send a search request to one Qdrant node, it automatically queries all other nodes to obtain the full search result.
### Choosing the right number of shards
When you create a collection, Qdrant splits the collection into `shard_number` shards. If left unset, `shard_number` is set to the number of nodes in your cluster when the collection was created. The `shard_number` cannot be changed without recreating the collection.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 300,
"distance": "Cosine"
},
"shard_number": 6
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
shard_number=6,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 300,
distance: "Cosine",
},
shard_number: 6,
});
```
```rust
use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
.shard_number(6),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(300)
.setDistance(Distance.Cosine)
.build())
.build())
.setShardNumber(6)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
shardNumber: 6
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 300,
Distance: qdrant.Distance_Cosine,
}),
ShardNumber: qdrant.PtrOf(uint32(6)),
})
```
To ensure all nodes in your cluster are evenly utilized, the number of shards must be a multiple of the number of nodes you are currently running in your cluster.
> Aside: Advanced use cases such as multitenancy may require an uneven distribution of shards. See [Multitenancy](/articles/multitenancy/).
We recommend creating at least 2 shards per node to allow future expansion without having to re-shard. Re-sharding should be avoided since it requires creating a new collection. In-place re-sharding is planned for a future version of Qdrant.
If you anticipate a lot of growth, we recommend 12 shards since you can expand from 1 node up to 2, 3, 6, and 12 nodes without having to re-shard. Having more than 12 shards in a small cluster may not be worth the performance overhead.
Shards are evenly distributed across all existing nodes when a collection is first created, but Qdrant does not automatically rebalance shards if your cluster size or replication factor changes (since this is an expensive operation on large clusters). See the next section for how to move shards after scaling operations.
### Moving shards
*Available as of v0.9.0*
Qdrant allows moving shards between nodes in the cluster and removing nodes from the cluster. This functionality unlocks the ability to dynamically scale the cluster size without downtime. It also allows you to upgrade or migrate nodes without downtime.
Qdrant provides the information regarding the current shard distribution in the cluster with the [Collection Cluster info API](https://api.qdrant.tech/master/api-reference/distributed/collection-cluster-info).
Use the [Update collection cluster setup API](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster) to initiate the shard transfer:
```http
POST /collections/{collection_name}/cluster
{
"move_shard": {
"shard_id": 0,
"from_peer_id": 381894127,
"to_peer_id": 467122995
}
}
```
<aside role="status">You likely want to select a specific <a href="#shard-transfer-method">shard transfer method</a> to get desired performance and guarantees.</aside>
After the transfer is initiated, the service will process it based on the selected
[transfer method](#shard-transfer-method), keeping both shards in sync. Once the
transfer is completed, the old shard is deleted from the source node.
In case you want to downscale the cluster, you can move all shards away from a peer and then remove the peer using the [remove peer API](https://api.qdrant.tech/master/api-reference/distributed/remove-peer).
```http
DELETE /cluster/peer/{peer_id}
```
After that, Qdrant will exclude the node from the consensus, and the instance will be ready for shutdown.
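For illustration, the whole downscale procedure can be scripted against the HTTP API shown above. This is a minimal Python sketch, assuming hypothetical peer IDs, the response shape of the Collection Cluster Info API, and that it is run against a node other than the one being drained (so that node's shards appear under `remote_shards`):
```python
import requests

QDRANT_URL = "http://localhost:6333"
COLLECTION = "{collection_name}"
SOURCE_PEER_ID = 381894127  # hypothetical: the peer to drain and remove
TARGET_PEER_ID = 467122995  # hypothetical: the peer that receives the shards

# 1. Inspect the current shard distribution of the collection.
info = requests.get(f"{QDRANT_URL}/collections/{COLLECTION}/cluster").json()["result"]

# 2. Move every shard hosted on the source peer to the target peer.
for shard in info.get("remote_shards", []):
    if shard["peer_id"] != SOURCE_PEER_ID:
        continue
    requests.post(
        f"{QDRANT_URL}/collections/{COLLECTION}/cluster",
        json={
            "move_shard": {
                "shard_id": shard["shard_id"],
                "from_peer_id": SOURCE_PEER_ID,
                "to_peer_id": TARGET_PEER_ID,
            }
        },
    )

# 3. Transfers are asynchronous: poll the cluster info until "shard_transfers"
#    is empty, then remove the drained peer from the cluster.
requests.delete(f"{QDRANT_URL}/cluster/peer/{SOURCE_PEER_ID}")
```
Repeat the inspection and move steps for every collection in the cluster before removing the peer.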
### User-defined sharding
*Available as of v1.7.0*
Qdrant allows you to specify the shard for each point individually. This feature is useful if you want to control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations that do not require the whole collection to be scanned.
A clear use-case for this feature is managing a multi-tenant collection, where each tenant (be it a user or an organization) is assumed to be segregated, so their data can be stored in separate shards.
To enable user-defined sharding, set `sharding_method` to `custom` during collection creation:
```http
PUT /collections/{collection_name}
{
"shard_number": 1,
"sharding_method": "custom"
// ... other collection parameters
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
shard_number=1,
sharding_method=models.ShardingMethod.CUSTOM,
# ... other collection parameters
)
client.create_shard_key("{collection_name}", "{shard_key}")
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
shard_number: 1,
sharding_method: "custom",
// ... other collection parameters
});
client.createShardKey("{collection_name}", {
shard_key: "{shard_key}"
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, CreateShardKeyBuilder, CreateShardKeyRequestBuilder, Distance,
ShardingMethod, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
.shard_number(1)
.sharding_method(ShardingMethod::Custom.into()),
)
.await?;
client
.create_shard_key(
CreateShardKeyRequestBuilder::new("{collection_name}")
            .request(CreateShardKeyBuilder::default().shard_key("{shard_key}".to_string())),
)
.await?;
```
```java
import static io.qdrant.client.ShardKeyFactory.shardKey;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.ShardingMethod;
import io.qdrant.client.grpc.Collections.CreateShardKey;
import io.qdrant.client.grpc.Collections.CreateShardKeyRequest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
// ... other collection parameters
.setShardNumber(1)
.setShardingMethod(ShardingMethod.Custom)
.build())
.get();
client.createShardKeyAsync(CreateShardKeyRequest.newBuilder()
.setCollectionName("{collection_name}")
.setRequest(CreateShardKey.newBuilder()
.setShardKey(shardKey("{shard_key}"))
.build())
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
// ... other collection parameters
shardNumber: 1,
shardingMethod: ShardingMethod.Custom
);
await client.CreateShardKeyAsync(
"{collection_name}",
new CreateShardKey { ShardKey = new ShardKey { Keyword = "{shard_key}", } }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
// ... other collection parameters
ShardNumber: qdrant.PtrOf(uint32(1)),
ShardingMethod: qdrant.ShardingMethod_Custom.Enum(),
})
client.CreateShardKey(context.Background(), "{collection_name}", &qdrant.CreateShardKey{
ShardKey: qdrant.NewShardKey("{shard_key}"),
})
```
In this mode, the `shard_number` means the number of shards per shard key, where points will be distributed evenly. For example, if you have 10 shard keys and a collection config with these settings:
```json
{
"shard_number": 1,
"sharding_method": "custom",
"replication_factor": 2
}
```
Then you will have `1 * 10 * 2 = 20` total physical shards in the collection.
Physical shards require a large amount of resources, so make sure your custom sharding key has a low cardinality.
For large cardinality keys, it is recommended to use [partition by payload](/documentation/guides/multiple-partitions/#partition-by-payload) instead.
To specify the shard for each point, you need to provide the `shard_key` field in the upsert request:
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1111,
"vector": [0.1, 0.2, 0.3]
},
]
"shard_key": "user_1"
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1111,
vector=[0.1, 0.2, 0.3],
),
],
shard_key_selector="user_1",
)
```
```typescript
client.upsertPoints("{collection_name}", {
points: [
{
id: 1111,
vector: [0.1, 0.2, 0.3],
},
],
shard_key: "user_1",
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
use qdrant_client::Payload;
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![PointStruct::new(
111,
vec![0.1, 0.2, 0.3],
Payload::default(),
)],
)
.shard_key_selector("user_1".to_string()),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ShardKeySelectorFactory.shardKeySelector;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.UpsertPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
UpsertPoints.newBuilder()
.setCollectionName("{collection_name}")
.addAllPoints(
List.of(
PointStruct.newBuilder()
.setId(id(111))
.setVectors(vectors(0.1f, 0.2f, 0.3f))
.build()))
.setShardKeySelector(shardKeySelector("user_1"))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new() { Id = 111, Vectors = new[] { 0.1f, 0.2f, 0.3f } }
},
shardKeySelector: new ShardKeySelector { ShardKeys = { new List<ShardKey> { "user_1" } } }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(111),
Vectors: qdrant.NewVectors(0.1, 0.2, 0.3),
},
},
ShardKeySelector: &qdrant.ShardKeySelector{
ShardKeys: []*qdrant.ShardKey{
qdrant.NewShardKey("user_1"),
},
},
})
```
<aside role="alert">
Using the same point ID across multiple shard keys is <strong>not supported<sup>*</sup></strong> and should be avoided.
</aside>
<sup>
<strong>*</strong> When using custom sharding, IDs are only enforced to be unique within a shard key. This means that you can have multiple points with the same ID, if they have different shard keys.
This is a limitation of the current implementation and an anti-pattern that should be avoided, because it can lead to points with the same ID having different contents. In the future, we plan to add a global ID uniqueness check.
</sup>
Now you can target the operations to specific shard(s) by specifying the `shard_key` on any operation you do. Operations that do not specify the shard key will be executed on __all__ shards.
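For example, a minimal Python sketch (assuming the `query_points` call accepts the same `shard_key_selector` used in the upsert examples above) that restricts a search to a single tenant's shards:
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Only the shards behind the "user_1" shard key are queried.
client.query_points(
    collection_name="{collection_name}",
    query=[0.1, 0.2, 0.3],
    limit=10,
    shard_key_selector="user_1",
)
```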
Another use-case would be to have shards that track the data chronologically, so that you can implement more complex workflows, such as uploading live data into one shard and archiving it once a certain age has passed.
<img src="/docs/sharding-per-day.png" alt="Sharding per day" width="500" height="600">
### Shard transfer method
*Available as of v1.7.0*
There are different methods for transferring a shard, such as moving or
replicating, to another node. Depending on what performance and guarantees you'd
like to have and how you'd like to manage your cluster, you likely want to
choose a specific method. Each method has its own pros and cons. Which is
fastest depends on the size and state of a shard.
Available shard transfer methods are:
- `stream_records`: _(default)_ transfers the shard by streaming just its records to the target node in batches.
- `snapshot`: transfers the shard, including its index and quantized data, by automatically creating, transferring, and restoring a [snapshot](../../concepts/snapshots/).
- `wal_delta`: _(auto recovery default)_ transfers only the [WAL] difference, i.e. the operations the target shard has missed.
Each has pros, cons and specific requirements, some of which are:
| Method: | Stream records | Snapshot | WAL delta |
|:---|:---|:---|:---|
| **Version** | v0.8.0+ | v1.7.0+ | v1.8.0+ |
| **Target** | New/existing shard | New/existing shard | Existing shard |
| **Connectivity** | Internal gRPC API <small>(<abbr title="port">6335</abbr>)</small> | REST API <small>(<abbr title="port">6333</abbr>)</small><br>Internal gRPC API <small>(<abbr title="port">6335</abbr>)</small> | Internal gRPC API <small>(<abbr title="port">6335</abbr>)</small> |
| **HNSW index** | Doesn't transfer, will reindex on target. | Does transfer, immediately ready on target. | Doesn't transfer, may index on target. |
| **Quantization** | Doesn't transfer, will requantize on target. | Does transfer, immediately ready on target. | Doesn't transfer, may quantize on target. |
| **Ordering** | Unordered updates on target[^unordered] | Ordered updates on target[^ordered] | Ordered updates on target[^ordered] |
| **Disk space** | No extra required | Extra required for snapshot on both nodes | No extra required |
[^unordered]: Weak ordering for updates: All records are streamed to the target node in order.
New updates are received on the target node in parallel, while the transfer
of records is still happening. We therefore have `weak` ordering, regardless
of what [ordering](#write-ordering) is used for updates.
[^ordered]: Strong ordering for updates: A snapshot of the shard
  is created, transferred, and recovered on the target node. That ensures
the state of the shard is kept consistent. New updates are queued on the
source node, and transferred in order to the target node. Updates therefore
have the same [ordering](#write-ordering) as the user selects, making
`strong` ordering possible.
To select a shard transfer method, specify the `method` in the request, for example:
```http
POST /collections/{collection_name}/cluster
{
"move_shard": {
"shard_id": 0,
"from_peer_id": 381894127,
"to_peer_id": 467122995,
"method": "snapshot"
}
}
```
The `stream_records` transfer method is the simplest available. It simply
transfers all shard records in batches to the target node until it has
transferred all of them, keeping both shards in sync. It will also make sure the
transferred shard indexing process is keeping up before performing a final
switch. The method has two common disadvantages: 1. It does not transfer index
or quantization data, meaning that the shard has to be optimized again on the
new node, which can be very expensive. 2. The ordering guarantees are
`weak`[^unordered], which is not suitable for some applications. Because it is
so simple, it's also very robust, making it a reliable choice if the above cons
are acceptable in your use case. If your cluster is unstable and out of
resources, it's probably best to use the `stream_records` transfer method,
because it is unlikely to fail.
The `snapshot` transfer method utilizes [snapshots](../../concepts/snapshots/)
to transfer a shard. A snapshot is created automatically. It is then transferred
and restored on the target node. After this is done, the snapshot is removed
from both nodes. While the snapshot/transfer/restore operation is happening, the
source node queues up all new operations. All queued updates are then sent in
order to the target shard to bring it into the same state as the source. There
are two important benefits: 1. It transfers index and quantization data, so that
the shard does not have to be optimized again on the target node, making them
immediately available. This way, Qdrant ensures that there will be no
degradation in performance at the end of the transfer. Especially on large
shards, this can give a huge performance improvement. 2. The ordering guarantees
can be `strong`[^ordered], required for some applications.
The `wal_delta` transfer method only transfers the difference between two
shards. More specifically, it transfers all operations that were missed to the
target shard. The [WAL] of both shards is used to resolve this. There are two
benefits: 1. It will be very fast because it only transfers the difference
rather than all data. 2. The ordering guarantees can be `strong`[^ordered],
required for some applications. Two disadvantages are: 1. It can only be used to
transfer to a shard that already exists on the other node. 2. Applicability is
limited because the WALs normally don't hold more than 64MB of recent
operations. That should, however, be enough for a node that restarts quickly, for
example during an upgrade. If a delta cannot be resolved, this method automatically
falls back to `stream_records`, which is equivalent to transferring the full shard.
The `stream_records` method is currently used as default. This may change in the
future. As of Qdrant 1.9.0 `wal_delta` is used for automatic shard replications
to recover dead shards.
[WAL]: ../../concepts/storage/#versioning
## Replication
*Available as of v0.11.0*
Qdrant allows you to replicate shards between nodes in the cluster.
Shard replication increases the reliability of the cluster by keeping several copies of a shard spread across the cluster.
This ensures the availability of the data in case of node failures, except if all replicas are lost.
### Replication factor
When you create a collection, you can control how many shard replicas you'd like to store by changing the `replication_factor`. By default, `replication_factor` is set to "1", meaning no additional copy is maintained automatically. You can change that by setting the `replication_factor` when you create a collection.
Currently, the replication factor of a collection can only be configured at creation time.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 300,
"distance": "Cosine"
},
"shard_number": 6,
"replication_factor": 2,
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
shard_number=6,
replication_factor=2,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 300,
distance: "Cosine",
},
shard_number: 6,
replication_factor: 2,
});
```
```rust
use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
.shard_number(6)
.replication_factor(2),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(300)
.setDistance(Distance.Cosine)
.build())
.build())
.setShardNumber(6)
.setReplicationFactor(2)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
shardNumber: 6,
replicationFactor: 2
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 300,
Distance: qdrant.Distance_Cosine,
}),
ShardNumber: qdrant.PtrOf(uint32(6)),
ReplicationFactor: qdrant.PtrOf(uint32(2)),
})
```
This code sample creates a collection with a total of 6 logical shards backed by a total of 12 physical shards.
Since a replication factor of "2" would require twice as much storage space, it is advised to make sure the hardware can host the additional shard replicas beforehand.
### Creating new shard replicas
It is possible to create or delete replicas manually on an existing collection using the [Update collection cluster setup API](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster).
A replica can be added on a specific peer by specifying the peer from which to replicate.
```http
POST /collections/{collection_name}/cluster
{
"replicate_shard": {
"shard_id": 0,
"from_peer_id": 381894127,
"to_peer_id": 467122995
}
}
```
<aside role="status">You likely want to select a specific <a href="#shard-transfer-method">shard transfer method</a> to get desired performance and guarantees.</aside>
And a replica can be removed on a specific peer.
```http
POST /collections/{collection_name}/cluster
{
"drop_replica": {
"shard_id": 0,
"peer_id": 381894127
}
}
```
Keep in mind that a collection must contain at least one active replica of a shard.
### Error handling
Replicas can be in different states:
- Active: healthy and ready to serve traffic
- Dead: unhealthy and not ready to serve traffic
- Partial: currently under resynchronization before activation
A replica is marked as dead if it does not respond to internal healthchecks or if it fails to serve traffic.
A dead replica will not receive traffic from other peers and might require a manual intervention if it does not recover automatically.
This mechanism ensures data consistency and availability if a subset of the replicas fail during an update operation.
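As a rough illustration, the replica states can be inspected programmatically. The following Python sketch is an assumption about the Collection Cluster Info response shape (with `local_shards`/`remote_shards` lists and a per-replica `state` field) and lists replicas reported as `Dead`:
```python
import requests

QDRANT_URL = "http://localhost:6333"  # hypothetical endpoint
COLLECTION = "{collection_name}"

info = requests.get(f"{QDRANT_URL}/collections/{COLLECTION}/cluster").json()["result"]

# Collect replicas whose reported state is "Dead", together with the peer they live on.
dead_replicas = [
    (shard["shard_id"], shard.get("peer_id", info["peer_id"]))
    for shard in info.get("local_shards", []) + info.get("remote_shards", [])
    if shard.get("state") == "Dead"
]
if dead_replicas:
    print("Replicas that may need manual intervention:", dead_replicas)
```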
### Node Failure Recovery
Sometimes hardware malfunctions might render some nodes of the Qdrant cluster unrecoverable.
No system is immune to this.
But several recovery scenarios allow Qdrant to stay available for requests and even avoid performance degradation.
Let's walk through them from best to worst.
**Recover with replicated collection**
If the number of failed nodes is less than the replication factor of the collection, then your cluster should still be able to perform read, search and update queries.
Now, if the failed node restarts, consensus will trigger the replication process to update the recovering node with the newest updates it has missed.
If the failed node never restarts, you can recover the lost shards if you have a 3+ node cluster. You cannot recover lost shards in smaller clusters because recovery operations go through [raft](#raft) which requires >50% of the nodes to be healthy.
**Recreate node with replicated collections**
If a node fails and it is impossible to recover it, you should exclude the dead node from the consensus and create an empty node.
To exclude failed nodes from the consensus, use [remove peer](https://api.qdrant.tech/master/api-reference/distributed/remove-peer) API.
Apply the `force` flag if necessary.
When you create a new node, make sure to attach it to the existing cluster by specifying `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.
Once the new node is ready and synchronized with the cluster, you might want to ensure that the collection shards are replicated enough. Remember that Qdrant will not automatically balance shards since this is an expensive operation.
Use the [Replicate Shard Operation](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster) to create another copy of the shard on the newly connected node.
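A hedged Python sketch of that step, assuming the Collection Cluster Info response shape and a hypothetical `NEW_PEER_ID` for the freshly attached node:
```python
import requests

QDRANT_URL = "http://localhost:6333"
COLLECTION = "{collection_name}"
NEW_PEER_ID = 467122995  # hypothetical: id of the newly attached node

info = requests.get(f"{QDRANT_URL}/collections/{COLLECTION}/cluster").json()["result"]

# Replicate every shard hosted on the queried node onto the new peer.
# In a real script you would first check which shards are under-replicated.
for shard in info.get("local_shards", []):
    requests.post(
        f"{QDRANT_URL}/collections/{COLLECTION}/cluster",
        json={
            "replicate_shard": {
                "shard_id": shard["shard_id"],
                "from_peer_id": info["peer_id"],
                "to_peer_id": NEW_PEER_ID,
            }
        },
    )
```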
It's worth mentioning that Qdrant only provides the necessary building blocks to create an automated failure recovery.
Building a completely automatic process of collection scaling would require control over the cluster machines themselves.
Check out our [cloud solution](https://qdrant.to/cloud), which does exactly that.
**Recover from snapshot**
If there are no copies of data in the cluster, it is still possible to recover from a snapshot.
Follow the same steps to detach the failed node and create a new one in the cluster:
* To exclude failed nodes from the consensus, use [remove peer](https://api.qdrant.tech/master/api-reference/distributed/remove-peer) API. Apply the `force` flag if necessary.
* Create a new node, making sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.
Snapshot recovery in a cluster deployment works differently from the single-node case.
Consensus manages all metadata about all collections and does not require snapshots to recover it.
But you can use snapshots to recover missing shards of the collections.
Use the [Collection Snapshot Recovery API](../../concepts/snapshots/#recover-in-cluster-deployment) to do it.
The service will download the specified snapshot of the collection and recover shards with data from it.
Once all shards of the collection are recovered, the collection will become operational again.
### Temporary node failure
If properly configured, running Qdrant in distributed mode can make your cluster resistant to outages when one node fails temporarily.
Here is how differently-configured Qdrant clusters respond:
* 1-node clusters: All operations time out or fail for up to a few minutes. It depends on how long it takes to restart and load data from disk.
* 2-node clusters where shards ARE NOT replicated: All operations will time out or fail for up to a few minutes. It depends on how long it takes to restart and load data from disk.
* 2-node clusters where all shards ARE replicated to both nodes: All requests except for operations on collections continue to work during the outage.
* 3+-node clusters where all shards are replicated to at least 2 nodes: All requests continue to work during the outage.
## Consistency guarantees
By default, Qdrant focuses on availability and maximum throughput of search operations.
For the majority of use cases, this is a preferable trade-off.
During the normal state of operation, it is possible to search and modify data from any peer in the cluster.
Before responding to the client, the peer handling the request dispatches all operations according to the current topology in order to keep the data synchronized across the cluster.
- Reads use a partial fan-out strategy to optimize latency and availability.
- Writes are executed in parallel on all active sharded replicas.
![Embeddings](/docs/concurrent-operations-replicas.png)
However, in some cases, it is necessary to ensure additional guarantees during possible hardware instabilities, mass concurrent updates of the same documents, etc.
Qdrant provides a few options to control consistency guarantees:
- `write_consistency_factor` - defines the number of replicas that must acknowledge a write operation before responding to the client. Increasing this value will make write operations tolerant to network partitions in the cluster, but will require a higher number of replicas to be active to perform write operations.
- The read `consistency` parameter can be used with search and retrieve operations to ensure that the results obtained from all replicas are the same. If this option is used, Qdrant will perform the read operation on multiple replicas and resolve the result according to the selected strategy. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. It is preferred if update operations are frequent and the number of replicas is low.
- The write `ordering` parameter can be used with update and delete operations to ensure that the operations are executed in the same order on all replicas. If this option is used, Qdrant will route the operation to the leader replica of the shard and wait for the response before responding to the client. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. It is preferred if read operations are more frequent than updates and search performance is critical.
### Write consistency factor
The `write_consistency_factor` represents the number of replicas that must acknowledge a write operation before responding to the client. It is set to one by default.
It can be configured at the collection's creation time.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 300,
"distance": "Cosine"
},
"shard_number": 6,
"replication_factor": 2,
"write_consistency_factor": 2,
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
shard_number=6,
replication_factor=2,
write_consistency_factor=2,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 300,
distance: "Cosine",
},
shard_number: 6,
replication_factor: 2,
write_consistency_factor: 2,
});
```
```rust
use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
.shard_number(6)
.replication_factor(2)
.write_consistency_factor(2),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(300)
.setDistance(Distance.Cosine)
.build())
.build())
.setShardNumber(6)
.setReplicationFactor(2)
.setWriteConsistencyFactor(2)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
shardNumber: 6,
replicationFactor: 2,
writeConsistencyFactor: 2
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 300,
Distance: qdrant.Distance_Cosine,
}),
ShardNumber: qdrant.PtrOf(uint32(6)),
ReplicationFactor: qdrant.PtrOf(uint32(2)),
WriteConsistencyFactor: qdrant.PtrOf(uint32(2)),
})
```
Write operations will fail if the number of active replicas is less than the `write_consistency_factor`.
### Read consistency
Read `consistency` can be specified for most read requests and will ensure that the returned result
is consistent across cluster nodes.
- `all` will query all nodes and return only the points present on all of them
- `majority` will query all nodes and return the points present on the majority of them
- `quorum` will query a randomly selected majority of nodes and return the points present on all of them
- `1`/`2`/`3`/etc. will query the specified number of randomly selected nodes and return the points present on all of them
- the default `consistency` is `1`
```http
POST /collections/{collection_name}/points/query?consistency=majority
{
"query": [0.2, 0.1, 0.9, 0.7],
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
},
"params": {
"hnsw_ef": 128,
"exact": false
},
"limit": 3
}
```
```python
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
query_filter=models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(
value="London",
),
)
]
),
search_params=models.SearchParams(hnsw_ef=128, exact=False),
limit=3,
consistency="majority",
)
```
```typescript
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
filter: {
must: [{ key: "city", match: { value: "London" } }],
},
params: {
hnsw_ef: 128,
exact: false,
},
limit: 3,
consistency: "majority",
});
```
```rust
use qdrant_client::qdrant::{
read_consistency::Value, Condition, Filter, QueryPointsBuilder, ReadConsistencyType,
SearchParamsBuilder,
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.filter(Filter::must([Condition::matches(
"city",
"London".to_string(),
)]))
.params(SearchParamsBuilder::default().hnsw_ef(128).exact(false))
.read_consistency(Value::Type(ReadConsistencyType::Majority.into())),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.ReadConsistency;
import io.qdrant.client.grpc.Points.ReadConsistencyType;
import io.qdrant.client.grpc.Points.SearchParams;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.ConditionFactory.matchKeyword;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")).build())
.setQuery(nearest(.2f, 0.1f, 0.9f, 0.7f))
.setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
.setLimit(3)
.setReadConsistency(
ReadConsistency.newBuilder().setType(ReadConsistencyType.Majority).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
filter: MatchKeyword("city", "London"),
searchParams: new SearchParams { HnswEf = 128, Exact = false },
limit: 3,
readConsistency: new ReadConsistency { Type = ReadConsistencyType.Majority }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
},
Params: &qdrant.SearchParams{
HnswEf: qdrant.PtrOf(uint64(128)),
},
Limit: qdrant.PtrOf(uint64(3)),
ReadConsistency: qdrant.NewReadConsistencyType(qdrant.ReadConsistencyType_Majority),
})
```
### Write ordering
Write `ordering` can be specified for any write request to serialize it through a single "leader" node,
which ensures that all write operations (issued with the same `ordering`) are performed and observed
sequentially.
- `weak` _(default)_ ordering does not provide any additional guarantees, so write operations can be freely reordered.
- `medium` ordering serializes all write operations through a dynamically elected leader, which might cause minor inconsistencies in case of leader change.
- `strong` ordering serializes all write operations through the permanent leader, which provides strong consistency, but write operations may be unavailable if the leader is down.
<aside role="status">Some <a href="#shard-transfer-method">shard transfer methods</a> may affect ordering guarantees.</aside>
```http
PUT /collections/{collection_name}/points?ordering=strong
{
"batch": {
"ids": [1, 2, 3],
"payloads": [
{"color": "red"},
{"color": "green"},
{"color": "blue"}
],
"vectors": [
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
[0.1, 0.1, 0.9]
]
}
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=models.Batch(
ids=[1, 2, 3],
payloads=[
{"color": "red"},
{"color": "green"},
{"color": "blue"},
],
vectors=[
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
[0.1, 0.1, 0.9],
],
),
ordering=models.WriteOrdering.STRONG,
)
```
```typescript
client.upsert("{collection_name}", {
batch: {
ids: [1, 2, 3],
payloads: [{ color: "red" }, { color: "green" }, { color: "blue" }],
vectors: [
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
[0.1, 0.1, 0.9],
],
},
ordering: "strong",
});
```
```rust
use qdrant_client::qdrant::{
PointStruct, UpsertPointsBuilder, WriteOrdering, WriteOrderingType
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![
PointStruct::new(1, vec![0.9, 0.1, 0.1], [("color", "red".into())]),
PointStruct::new(2, vec![0.1, 0.9, 0.1], [("color", "green".into())]),
PointStruct::new(3, vec![0.1, 0.1, 0.9], [("color", "blue".into())]),
],
)
.ordering(WriteOrdering {
r#type: WriteOrderingType::Strong.into(),
}),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.UpsertPoints;
import io.qdrant.client.grpc.Points.WriteOrdering;
import io.qdrant.client.grpc.Points.WriteOrderingType;
client
.upsertAsync(
UpsertPoints.newBuilder()
.setCollectionName("{collection_name}")
.addAllPoints(
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(0.9f, 0.1f, 0.1f))
.putAllPayload(Map.of("color", value("red")))
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(vectors(0.1f, 0.9f, 0.1f))
.putAllPayload(Map.of("color", value("green")))
.build(),
PointStruct.newBuilder()
.setId(id(3))
                        .setVectors(vectors(0.1f, 0.1f, 0.9f))
.putAllPayload(Map.of("color", value("blue")))
.build()))
.setOrdering(WriteOrdering.newBuilder().setType(WriteOrderingType.Strong).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new[] { 0.9f, 0.1f, 0.1f },
Payload = { ["color"] = "red" }
},
new()
{
Id = 2,
Vectors = new[] { 0.1f, 0.9f, 0.1f },
Payload = { ["color"] = "green" }
},
new()
{
Id = 3,
Vectors = new[] { 0.1f, 0.1f, 0.9f },
Payload = { ["color"] = "blue" }
}
},
ordering: WriteOrderingType.Strong
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectors(0.9, 0.1, 0.1),
Payload: qdrant.NewValueMap(map[string]any{"color": "red"}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectors(0.1, 0.9, 0.1),
Payload: qdrant.NewValueMap(map[string]any{"color": "green"}),
},
{
Id: qdrant.NewIDNum(3),
Vectors: qdrant.NewVectors(0.1, 0.1, 0.9),
Payload: qdrant.NewValueMap(map[string]any{"color": "blue"}),
},
},
Ordering: &qdrant.WriteOrdering{
Type: qdrant.WriteOrderingType_Strong,
},
})
```
## Listener mode
<aside role="alert">This is an experimental feature, its behavior may change in the future.</aside>
In some cases it might be useful to have a Qdrant node that only accumulates data and does not participate in search operations.
There are several scenarios where this can be useful:
- The listener option can be used to store data on a separate node, for example for backup purposes or long-term retention.
- A listener node can be used to synchronize data into another region, while still performing search operations in the local region.
To enable listener mode, set `node_type` to `Listener` in the config file:
```yaml
storage:
node_type: "Listener"
```
A listener node will not participate in search operations, but it will still accept write operations and store the data in its local storage.
All shards stored on the listener node will be converted to the `Listener` state.
Additionally, all write requests sent to the listener node will be processed with the `wait=false` option, which means that write operations are considered successful as soon as they are written to the WAL.
This mechanism helps minimize upsert latency, for example when snapshots are being created in parallel.
## Consensus Checkpointing
Consensus checkpointing is a technique used in Raft to improve performance and simplify log management by periodically creating a consistent snapshot of the system state.
This snapshot represents a point in time where all nodes in the cluster have reached agreement on the state, and it can be used to truncate the log, reducing the amount of data that needs to be stored and transferred between nodes.
For example, if you attach a new node to the cluster, it should replay all the log entries to catch up with the current state.
In long-running clusters, this can take a long time, and the log can grow very large.
To prevent this, one can use a special checkpointing mechanism, that will truncate the log and create a snapshot of the current state.
To use this feature, simply call the `/cluster/recover` API on the required node:
```http
POST /cluster/recover
```
This API can be triggered on any non-leader node; it will send a request to the current consensus leader to create a snapshot. The leader will then send the snapshot back to the requesting node for application.
In some cases, this API can be used to recover from an inconsistent cluster state by forcing a snapshot creation.
---
title: Installation
weight: 5
aliases:
- ../install
- ../installation
---
## Installation requirements
The following sections describe the requirements for deploying Qdrant.
### CPU and memory
The CPU and RAM that you need depends on:
- Number of vectors
- Vector dimensions
- [Payloads](/documentation/concepts/payload/) and their indexes
- Storage
- Replication
- How you configure quantization
Our [Cloud Pricing Calculator](https://cloud.qdrant.io/calculator) can help you estimate required resources without payload or index data.
### Storage
For persistent storage, Qdrant requires block-level access to storage devices with a [POSIX-compatible file system](https://www.quobyte.com/storage-explained/posix-filesystem/). Network systems such as [iSCSI](https://en.wikipedia.org/wiki/ISCSI) that provide block-level access are also acceptable.
Qdrant won't work with [Network file systems](https://en.wikipedia.org/wiki/File_system#Network_file_systems) such as NFS, or [Object storage](https://en.wikipedia.org/wiki/Object_storage) systems such as S3.
If you offload vectors to a local disk, we recommend you use a solid-state (SSD or NVMe) drive.
### Networking
Each Qdrant instance requires three open ports:
* `6333` - For the HTTP API, for the [Monitoring](/documentation/guides/monitoring/) health and metrics endpoints
* `6334` - For the [gRPC](/documentation/interfaces/#grpc-interface) API
* `6335` - For [Distributed deployment](/documentation/guides/distributed_deployment/)
All Qdrant instances in a cluster must be able to:
- Communicate with each other over these ports
- Allow incoming connections to ports `6333` and `6334` from clients that use Qdrant.
### Security
The default configuration of Qdrant might not be secure enough for every situation. Please see [our security documentation](/documentation/guides/security/) for more information.
## Installation options
Qdrant can be installed in different ways depending on your needs:
For production, you can use our Qdrant Cloud to run Qdrant either fully managed in our infrastructure or with Hybrid Cloud in yours.
For testing or development setups, you can run Qdrant as a container or as a binary executable.
If you want to run Qdrant in your own infrastructure, without any cloud connection, we recommend installing Qdrant in a Kubernetes cluster with our Helm chart, or using our Qdrant Enterprise Operator.
## Production
For production, we recommend that you configure Qdrant in the cloud, with Kubernetes, or with a Qdrant Enterprise Operator.
### Qdrant Cloud
You can set up production with the [Qdrant Cloud](https://qdrant.to/cloud), which provides fully managed Qdrant databases.
It provides horizontal and vertical scaling, one click installation and upgrades, monitoring, logging, as well as backup and disaster recovery. For more information, see the [Qdrant Cloud documentation](/documentation/cloud/).
### Kubernetes
You can use a ready-made [Helm Chart](https://helm.sh/docs/) to run Qdrant in your Kubernetes cluster:
```bash
helm repo add qdrant https://qdrant.to/helm
helm install qdrant qdrant/qdrant
```
For more information, see the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main/charts/qdrant) README.
### Qdrant Kubernetes Operator
We provide a Qdrant Enterprise Operator for Kubernetes installations. For more information, [use this form](https://qdrant.to/contact-us) to contact us.
### Docker and Docker Compose
Usually, we recommend running Qdrant in Kubernetes, or using Qdrant Cloud for production setups. This makes setting up highly available and scalable Qdrant clusters with backups and disaster recovery a lot easier.
However, you can also use Docker and Docker Compose to run Qdrant in production, by following the setup instructions in the [Docker](#docker) and [Docker Compose](#docker-compose) Development sections.
In addition, you have to make sure:
* To use a performant [persistent storage](#storage) for your data
* To configure the [security settings](/documentation/guides/security/) for your deployment
* To set up and configure Qdrant on multiple nodes for a highly available [distributed deployment](/documentation/guides/distributed_deployment/)
* To set up a load balancer for your Qdrant cluster
* To create a [backup and disaster recovery strategy](/documentation/concepts/snapshots/) for your data
* To integrate Qdrant with your [monitoring](/documentation/guides/monitoring/) and logging solutions
## Development
For development and testing, we recommend that you set up Qdrant in Docker. We also have different client libraries.
### Docker
The easiest way to start using Qdrant for testing or development is to run the Qdrant container image.
The latest versions are always available on [DockerHub](https://hub.docker.com/r/qdrant/qdrant/tags?page=1&ordering=last_updated).
Make sure that [Docker](https://docs.docker.com/engine/install/), [Podman](https://podman.io/docs/installation) or the container runtime of your choice is installed and running. The following instructions use Docker.
Pull the image:
```bash
docker pull qdrant/qdrant
```
In the following command, revise `$(pwd)/path/to/data` for your Docker configuration. Then use the updated command to run the container:
```bash
docker run -p 6333:6333 \
-v $(pwd)/path/to/data:/qdrant/storage \
qdrant/qdrant
```
With this command, you start a Qdrant instance with the default configuration.
It stores all data in the `./path/to/data` directory.
By default, Qdrant uses port 6333, so at [localhost:6333](http://localhost:6333) you should see the welcome message.
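Optionally, you can verify the instance programmatically. A minimal sketch using the Python client (assuming `qdrant-client` is installed; any other client or plain `curl` works just as well):
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# A fresh instance returns an empty list of collections.
print(client.get_collections())
```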
To change the Qdrant configuration, you can overwrite the production configuration:
```bash
docker run -p 6333:6333 \
-v $(pwd)/path/to/data:/qdrant/storage \
-v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \
qdrant/qdrant
```
Alternatively, you can use your own `custom_config.yaml` configuration file:
```bash
docker run -p 6333:6333 \
-v $(pwd)/path/to/data:/qdrant/storage \
-v $(pwd)/path/to/custom_config.yaml:/qdrant/config/custom_config.yaml \
qdrant/qdrant \
./qdrant --config-path config/custom_config.yaml
```
For more information, see the [Configuration](/documentation/guides/configuration/) documentation.
### Docker Compose
You can also use [Docker Compose](https://docs.docker.com/compose/) to run Qdrant.
Here is an example customized compose file for a single node Qdrant cluster:
```yaml
services:
qdrant:
image: qdrant/qdrant:latest
restart: always
container_name: qdrant
ports:
- 6333:6333
- 6334:6334
expose:
- 6333
- 6334
- 6335
configs:
- source: qdrant_config
target: /qdrant/config/production.yaml
volumes:
- ./qdrant_data:/qdrant/storage
configs:
qdrant_config:
content: |
log_level: INFO
```
<aside role="status">Proving the inline <code>content</code> in the <a href="https://docs.docker.com/compose/compose-file/08-configs/">configs top-level element</a> requires <a href="https://docs.docker.com/compose/release-notes/#2231">Docker Compose v2.23.1</a> or above. This functionality is supported starting <a href="https://docs.docker.com/engine/release-notes/25.0/#2500">Docker Engine v25.0.0</a> and <a href="https://docs.docker.com/desktop/release-notes/#4260">Docker Desktop v4.26.0</a> onwards.</aside>
### From source
Qdrant is written in Rust and can be compiled into a binary executable.
This installation method can be helpful if you want to compile Qdrant for a specific processor architecture or if you do not want to use Docker.
Before compiling, make sure that the necessary libraries and the [rust toolchain](https://www.rust-lang.org/tools/install) are installed.
The current list of required libraries can be found in the [Dockerfile](https://github.com/qdrant/qdrant/blob/master/Dockerfile).
Build Qdrant with Cargo:
```bash
cargo build --release --bin qdrant
```
After a successful build, you can find the binary at `./target/release/qdrant`.
## Client libraries
In addition to the service, Qdrant provides a variety of client libraries for different programming languages. For a full list, see our [Client libraries](../../interfaces/#client-libraries) documentation.
---
title: Quantization
weight: 120
aliases:
- ../quantization
- /articles/dedicated-service/documentation/guides/quantization/
- /guides/quantization/
---
# Quantization
Quantization is an optional feature in Qdrant that enables efficient storage and search of high-dimensional vectors.
By transforming the original vectors into a new representation, quantization compresses data while preserving close to original relative distances between vectors.
Different quantization methods have different mechanics and tradeoffs. We will cover them in this section.
Quantization is primarily used to reduce the memory footprint and accelerate the search process in high-dimensional vector spaces.
In the context of Qdrant, quantization allows you to optimize the search engine for specific use cases, striking a balance between accuracy, storage efficiency, and search speed.
There are tradeoffs associated with quantization.
On the one hand, quantization allows for significant reductions in storage requirements and faster search times.
This can be particularly beneficial in large-scale applications where minimizing the use of resources is a top priority.
On the other hand, quantization introduces an approximation error, which can lead to a slight decrease in search quality.
The level of this tradeoff depends on the quantization method and its parameters, as well as the characteristics of the data.
## Scalar Quantization
*Available as of v1.1.0*
Scalar quantization, in the context of vector search engines, is a compression technique that compresses vectors by reducing the number of bits used to represent each vector component.
For instance, Qdrant uses 32-bit floating point numbers to represent the original vector components. Scalar quantization allows you to reduce the number of bits used to 8.
In other words, Qdrant performs `float32 -> uint8` conversion for each vector component.
Effectively, this means that the amount of memory required to store a vector is reduced by a factor of 4.
In addition to reducing the memory footprint, scalar quantization also speeds up the search process.
Qdrant uses a special SIMD CPU instruction to perform fast vector comparison.
This instruction works with 8-bit integers, so the conversion to `uint8` allows Qdrant to perform the comparison faster.
The main drawback of scalar quantization is the loss of accuracy. The `float32 -> uint8` conversion introduces an error that can lead to a slight decrease in search quality.
However, this error is usually negligible, and tends to be less significant for high-dimensional vectors.
In our experiments, we found that the error introduced by scalar quantization is usually less than 1%.
However, this value depends on the data and the quantization parameters.
Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case.
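To illustrate the idea (a toy numpy sketch, not Qdrant's internal implementation), here is one way a vector could be clipped to a quantile bound and mapped from `float32` to `uint8`, under the assumption that clipping is applied symmetrically to both tails:
```python
import numpy as np

rng = np.random.default_rng(0)
vector = rng.normal(size=768).astype(np.float32)

# Clip to the chosen quantile bounds, then map float32 -> uint8 (4x smaller).
quantile = 0.99
lo, hi = np.quantile(vector, [1.0 - quantile, quantile])
clipped = np.clip(vector, lo, hi)
quantized = np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# Approximate reconstruction, to get a feeling for the introduced error.
restored = quantized.astype(np.float32) / 255 * (hi - lo) + lo
print("max absolute error:", float(np.abs(clipped - restored).max()))
```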
## Binary Quantization
*Available as of v1.5.0*
Binary quantization is an extreme case of scalar quantization.
This feature lets you represent each vector component as a single bit, effectively reducing the memory footprint by a **factor of 32**.
This is the fastest quantization method, since it lets you perform a vector comparison with a few CPU instructions.
Binary quantization can achieve up to a **40x** speedup compared to the original vectors.
However, binary quantization is only efficient for high-dimensional vectors and requires a centered distribution of vector components.
At the moment, binary quantization shows good accuracy results with the following models:
- OpenAI `text-embedding-ada-002` - 1536d tested with [dbpedia dataset](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) achieving 0.98 recall@100 with 4x oversampling
- Cohere AI `embed-english-v2.0` - 4096d tested on [wikipedia embeddings](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) - 0.98 recall@50 with 2x oversampling
Models with a lower dimensionality or a different distribution of vector components may require additional experiments to find the optimal quantization parameters.
We recommend using binary quantization only with rescoring enabled, as it can significantly improve the search quality
with just a minor performance impact.
Additionally, oversampling can be used to tune the tradeoff between search speed and search quality at query time.
### Binary Quantization as Hamming Distance
The additional benefit of this method is that you can efficiently emulate Hamming distance with dot product.
Specifically, if the original vectors contain `{-1, 1}` as possible values, their dot product can be derived from the Hamming distance of the corresponding binary vectors, obtained by replacing each `-1` with `0` and keeping each `1` as `1`.
<!-- hidden section -->
<details>
<summary><b>Sample truth table</b></summary>
| Vector 1 | Vector 2 | Dot product |
|----------|----------|-------------|
| 1 | 1 | 1 |
| 1 | -1 | -1 |
| -1 | 1 | -1 |
| -1 | -1 | 1 |
| Vector 1 | Vector 2 | Hamming distance |
|----------|----------|------------------|
| 1 | 1 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 0 | 0 | 0 |
</details>
As you can see, the two measures are related by a simple linear transformation (for `d`-dimensional vectors, `dot = d - 2 * hamming`), which makes similarity search with either of them equivalent.
Binary quantization makes it efficient to compare vectors using this representation.
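The relationship can be verified with a small numpy sketch (illustrative only, not part of Qdrant):
```python
import numpy as np

rng = np.random.default_rng(0)
d = 1536
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

# Binary representation: -1 -> 0, 1 -> 1
a_bits = (a > 0).astype(np.uint8)
b_bits = (b > 0).astype(np.uint8)

dot = int(a @ b)
hamming = int(np.count_nonzero(a_bits != b_bits))

# Components that agree contribute +1 to the dot product, disagreeing ones -1,
# so dot = (d - hamming) - hamming = d - 2 * hamming.
assert dot == d - 2 * hamming
```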
## Product Quantization
*Available as of v1.2.0*
Product quantization is a method of compressing vectors to minimize their memory usage by dividing them into
chunks and quantizing each chunk individually.
Each chunk is approximated by a centroid index that represents the original vector components.
The positions of the centroids are determined through the utilization of a clustering algorithm such as k-means.
For now, Qdrant uses only 256 centroids, so each centroid index can be represented by a single byte.
Product quantization can achieve a higher compression factor than scalar quantization.
But there are some tradeoffs. Product quantization distance calculations are not SIMD-friendly, so it is slower than scalar quantization.
Also, product quantization has a loss of accuracy, so it is recommended to use it only for high-dimensional vectors.
Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case.
## How to choose the right quantization method
Here is a brief table of the pros and cons of each quantization method:
| Quantization method | Accuracy | Speed | Compression |
|---------------------|----------|-----------|-------------|
| Scalar | 0.99 | up to x2 | x4 |
| Product | 0.7 | x0.5 | up to x64 |
| Binary | 0.95* | up to x40 | x32 |
`*` - for compatible models
- **Binary Quantization** is the fastest method and the most memory-efficient, but it requires a centered distribution of vector components. It is recommended to use with tested models only.
- **Scalar Quantization** is the most universal method, as it provides a good balance between accuracy, speed, and compression. It is recommended as default quantization if binary quantization is not applicable.
- **Product Quantization** may provide a better compression ratio, but it has a significant loss of accuracy and is slower than scalar quantization. It is recommended if the memory footprint is the top priority and the search speed is not critical.
## Setting up Quantization in Qdrant
You can configure quantization for a collection by specifying the quantization parameters in the `quantization_config` section of the collection configuration.
Quantization will be automatically applied to all vectors during the indexation process.
Quantized vectors are stored alongside the original vectors in the collection, so you will still have access to the original vectors if you need them.
*Available as of v1.1.1*
The `quantization_config` can also be set on a per vector basis by specifying it in a named vector.
### Setting up Scalar Quantization
To enable scalar quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"quantization_config": {
"scalar": {
"type": "int8",
"quantile": 0.99,
"always_ram": true
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
quantile=0.99,
always_ram=True,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
quantization_config: {
scalar: {
type: "int8",
quantile: 0.99,
always_ram: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.quantization_config(
ScalarQuantizationBuilder::default()
.r#type(QuantizationType::Int8.into())
.quantile(0.99)
.always_ram(true),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.QuantizationType;
import io.qdrant.client.grpc.Collections.ScalarQuantization;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setScalar(
ScalarQuantization.newBuilder()
.setType(QuantizationType.Int8)
.setQuantile(0.99f)
.setAlwaysRam(true)
.build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
quantizationConfig: new QuantizationConfig
{
Scalar = new ScalarQuantization
{
Type = QuantizationType.Int8,
Quantile = 0.99f,
AlwaysRam = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
QuantizationConfig: qdrant.NewQuantizationScalar(
&qdrant.ScalarQuantization{
Type: qdrant.QuantizationType_Int8,
Quantile: qdrant.PtrOf(float32(0.99)),
AlwaysRam: qdrant.PtrOf(true),
},
),
})
```
There are three parameters that you can specify in the `quantization_config` section:
`type` - the type of the quantized vector components. Currently, Qdrant supports only `int8`.
`quantile` - the quantile of the quantized vector components.
The quantile is used to calculate the quantization bounds.
For instance, if you specify `0.99` as the quantile, 1% of extreme values will be excluded from the quantization bounds.
Using quantiles lower than `1.0` might be useful if there are outliers in your vector components.
This parameter only affects the resulting precision and not the memory footprint.
It might be worth tuning this parameter if you experience a significant decrease in search quality.
`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
However, in some setups you might want to keep quantized vectors in RAM to speed up the search process.
In this case, you can set `always_ram` to `true` to store quantized vectors in RAM.
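For intuition about how `quantile` shapes the quantization bounds, here is a rough NumPy sketch (an illustration, not Qdrant's exact algorithm) that derives the bounds from a quantile of the component distribution and maps the float components to `int8`:

```python
import numpy as np

def scalar_quantize(values: np.ndarray, quantile: float = 0.99):
    # Derive bounds from the component distribution, ignoring extreme values
    # outside the quantile range (how the excluded share is split between the
    # two tails is an assumption of this sketch).
    lo = np.quantile(values, 1.0 - quantile)
    hi = np.quantile(values, quantile)
    scale = (hi - lo) / 255.0
    quantized = np.clip(np.round((values - lo) / scale) - 128, -128, 127).astype(np.int8)
    return quantized, lo, scale

rng = np.random.default_rng(1)
vector = rng.normal(size=768).astype(np.float32)
q, lo, scale = scalar_quantize(vector, quantile=0.99)

restored = (q.astype(np.float32) + 128) * scale + lo   # approximate original values
print(np.max(np.abs(vector - restored)))               # small, except for clipped outliers
```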
### Setting up Binary Quantization
To enable binary quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 1536,
"distance": "Cosine"
},
"quantization_config": {
"binary": {
"always_ram": true
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
quantization_config=models.BinaryQuantization(
binary=models.BinaryQuantizationConfig(
always_ram=True,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 1536,
distance: "Cosine",
},
quantization_config: {
binary: {
always_ram: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
BinaryQuantizationBuilder, CreateCollectionBuilder, Distance, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(1536, Distance::Cosine))
.quantization_config(BinaryQuantizationBuilder::new(true)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.BinaryQuantization;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(1536)
.setDistance(Distance.Cosine)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setBinary(BinaryQuantization.newBuilder().setAlwaysRam(true).build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 1536, Distance = Distance.Cosine },
quantizationConfig: new QuantizationConfig
{
Binary = new BinaryQuantization { AlwaysRam = true }
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 1536,
Distance: qdrant.Distance_Cosine,
}),
QuantizationConfig: qdrant.NewQuantizationBinary(
&qdrant.BinaryQuantization{
AlwaysRam: qdrant.PtrOf(true),
},
),
})
```
`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
However, in some setups you might want to keep quantized vectors in RAM to speed up the search process.
In this case, you can set `always_ram` to `true` to store quantized vectors in RAM.
### Setting up Product Quantization
To enable product quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"quantization_config": {
"product": {
"compression": "x16",
"always_ram": true
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
quantization_config=models.ProductQuantization(
product=models.ProductQuantizationConfig(
compression=models.CompressionRatio.X16,
always_ram=True,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
quantization_config: {
product: {
compression: "x16",
always_ram: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
CompressionRatio, CreateCollectionBuilder, Distance, ProductQuantizationBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.quantization_config(
ProductQuantizationBuilder::new(CompressionRatio::X16.into()).always_ram(true),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CompressionRatio;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.ProductQuantization;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setProduct(
ProductQuantization.newBuilder()
.setCompression(CompressionRatio.x16)
.setAlwaysRam(true)
.build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
quantizationConfig: new QuantizationConfig
{
Product = new ProductQuantization { Compression = CompressionRatio.X16, AlwaysRam = true }
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
QuantizationConfig: qdrant.NewQuantizationProduct(
&qdrant.ProductQuantization{
Compression: qdrant.CompressionRatio_x16,
AlwaysRam: qdrant.PtrOf(true),
},
),
})
```
There are two parameters that you can specify in the `quantization_config` section:
`compression` - compression ratio.
The compression ratio is defined as the size of the original vector in bytes divided by the size of the quantized vector in bytes.
With `x16`, the quantized vector will be 16 times smaller than the original vector.
`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. Then set `always_ram` to `true`.
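As a back-of-the-envelope illustration of the `compression` parameter (assuming 32-bit float components):

```python
dim, float_bytes, compression = 768, 4, 16       # x16 compression, float32 components

original_size = dim * float_bytes                # 3072 bytes
quantized_size = original_size // compression    # 192 bytes -> 192 one-byte centroid indices
chunk_size = dim // quantized_size               # 4 components share one centroid index

print(original_size, quantized_size, chunk_size)
```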
### Searching with Quantization
Once you have configured quantization for a collection, you don't need to do anything extra to search with quantization.
Qdrant will automatically use quantized vectors if they are available.
However, there are a few options that you can use to control the search process:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"params": {
"quantization": {
"ignore": false,
"rescore": true,
"oversampling": 2.0
}
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
search_params=models.SearchParams(
quantization=models.QuantizationSearchParams(
ignore=False,
rescore=True,
oversampling=2.0,
)
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
params: {
quantization: {
ignore: false,
rescore: true,
oversampling: 2.0,
},
},
limit: 10,
});
```
```rust
use qdrant_client::qdrant::{
QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(10)
.params(
SearchParamsBuilder::default().quantization(
QuantizationSearchParamsBuilder::default()
.ignore(false)
.rescore(true)
.oversampling(2.0),
),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QuantizationSearchParams;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.SearchParams;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setParams(
SearchParams.newBuilder()
.setQuantization(
QuantizationSearchParams.newBuilder()
.setIgnore(false)
.setRescore(true)
.setOversampling(2.0)
.build())
.build())
.setLimit(10)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
searchParams: new SearchParams
{
Quantization = new QuantizationSearchParams
{
Ignore = false,
Rescore = true,
Oversampling = 2.0
}
},
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Params: &qdrant.SearchParams{
Quantization: &qdrant.QuantizationSearchParams{
Ignore: qdrant.PtrOf(false),
Rescore: qdrant.PtrOf(true),
Oversampling: qdrant.PtrOf(2.0),
},
},
})
```
`ignore` - Toggle whether to ignore quantized vectors during the search process. By default, Qdrant will use quantized vectors if they are available.
`rescore` - Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors.
This can improve the search quality, but may slightly decrease the search speed, compared to the search without rescore.
It is recommended to disable rescore only if the original vectors are stored on a slow storage (e.g. HDD or network storage).
By default, rescore is enabled.
**Available as of v1.3.0**
`oversampling` - Defines how many extra vectors should be pre-selected using quantized index, and then re-scored using original vectors.
For example, if oversampling is 2.4 and limit is 100, then 240 vectors will be pre-selected using quantized index, and then top-100 will be returned after re-scoring.
Oversampling is useful if you want to tune the tradeoff between search speed and search quality at query time.
## Quantization tips
#### Accuracy tuning
In this section, we will discuss how to tune the search precision.
The fastest way to understand the impact of quantization on the search quality is to compare the search results with and without quantization.
In order to disable quantization, you can set `ignore` to `true` in the search request:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"params": {
"quantization": {
"ignore": true
}
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
search_params=models.SearchParams(
quantization=models.QuantizationSearchParams(
ignore=True,
)
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
params: {
quantization: {
ignore: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.params(
SearchParamsBuilder::default()
.quantization(QuantizationSearchParamsBuilder::default().ignore(true)),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QuantizationSearchParams;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.SearchParams;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setParams(
SearchParams.newBuilder()
.setQuantization(
QuantizationSearchParams.newBuilder().setIgnore(true).build())
.build())
.setLimit(10)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
searchParams: new SearchParams
{
Quantization = new QuantizationSearchParams { Ignore = true }
},
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Params: &qdrant.SearchParams{
Quantization: &qdrant.QuantizationSearchParams{
Ignore: qdrant.PtrOf(true),
},
},
})
```
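To quantify the difference on your own data, you can compare the IDs returned with and without quantization. Here is a minimal sketch with the Python client (reusing the placeholder collection name and query vector from the examples above; the `top_ids` helper is just for illustration):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

def top_ids(ignore_quantization: bool, limit: int = 100) -> set:
    # Return the IDs of the top results, with or without quantization.
    result = client.query_points(
        collection_name="{collection_name}",
        query=[0.2, 0.1, 0.9, 0.7],
        limit=limit,
        search_params=models.SearchParams(
            quantization=models.QuantizationSearchParams(ignore=ignore_quantization)
        ),
    )
    return {point.id for point in result.points}

original = top_ids(ignore_quantization=True)
quantized = top_ids(ignore_quantization=False)
print("overlap@100:", len(original & quantized) / len(original))
```

If the overlap is lower than acceptable for your use case, the following adjustments can help: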
- **Adjust the quantile parameter**: The quantile parameter in scalar quantization determines the quantization bounds.
By setting it to a value lower than 1.0, you can exclude extreme values (outliers) from the quantization bounds.
For example, if you set the quantile to 0.99, 1% of the extreme values will be excluded.
By adjusting the quantile, you can find an optimal value that provides the best search quality for your collection.
- **Enable rescore**: Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. On large collections, this can improve the search quality, with just minor performance impact.
#### Memory and speed tuning
In this section, we will discuss how to tune the memory and speed of the search process with quantization.
There are three possible modes for storing vectors within a Qdrant collection:
- **All in RAM** - all vectors, original and quantized, are loaded and kept in RAM. This is the fastest mode, but it requires a lot of RAM. Enabled by default.
- **Original on Disk, quantized in RAM** - a hybrid mode that provides a good balance between speed and memory usage. This is the recommended scenario if you are aiming to shrink the memory footprint while keeping the search speed.
This mode is enabled by setting `always_ram` to `true` in the quantization config while using memmap storage:
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine",
"on_disk": true
},
"quantization_config": {
"scalar": {
"type": "int8",
"always_ram": true
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
always_ram=True,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
on_disk: true,
},
quantization_config: {
scalar: {
type: "int8",
always_ram: true,
},
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true))
.quantization_config(
ScalarQuantizationBuilder::default()
.r#type(QuantizationType::Int8.into())
.always_ram(true),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.QuantizationType;
import io.qdrant.client.grpc.Collections.ScalarQuantization;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.setOnDisk(true)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setScalar(
ScalarQuantization.newBuilder()
.setType(QuantizationType.Int8)
.setAlwaysRam(true)
.build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true},
quantizationConfig: new QuantizationConfig
{
Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
OnDisk: qdrant.PtrOf(true),
}),
QuantizationConfig: qdrant.NewQuantizationScalar(
&qdrant.ScalarQuantization{
Type: qdrant.QuantizationType_Int8,
AlwaysRam: qdrant.PtrOf(true),
},
),
})
```
In this scenario, the number of disk reads may play a significant role in the search speed.
In a system with high disk latency, the re-scoring step may become a bottleneck.
Consider disabling `rescore` to improve the search speed:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"params": {
"quantization": {
"rescore": false
}
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
search_params=models.SearchParams(
quantization=models.QuantizationSearchParams(rescore=False)
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
params: {
quantization: {
rescore: false,
},
},
});
```
```rust
use qdrant_client::qdrant::{
QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.params(
SearchParamsBuilder::default()
.quantization(QuantizationSearchParamsBuilder::default().rescore(false)),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QuantizationSearchParams;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.SearchParams;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setParams(
SearchParams.newBuilder()
.setQuantization(
QuantizationSearchParams.newBuilder().setRescore(false).build())
.build())
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
searchParams: new SearchParams
{
Quantization = new QuantizationSearchParams { Rescore = false }
},
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Params: &qdrant.SearchParams{
Quantization: &qdrant.QuantizationSearchParams{
Rescore: qdrant.PtrOf(false),
},
},
})
```
- **All on Disk** - all vectors, original and quantized, are stored on disk. This mode achieves the smallest memory footprint, but at the cost of search speed.
It is recommended to use this mode if you have a large collection and fast storage (e.g. SSD or NVMe).
This mode is enabled by setting `always_ram` to `false` in the quantization config while using mmap storage:
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine",
"on_disk": true
},
"quantization_config": {
"scalar": {
"type": "int8",
"always_ram": false
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
always_ram=False,
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
on_disk: true,
},
quantization_config: {
scalar: {
type: "int8",
always_ram: false,
},
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true))
.quantization_config(
ScalarQuantizationBuilder::default()
.r#type(QuantizationType::Int8.into())
.always_ram(false),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.QuantizationConfig;
import io.qdrant.client.grpc.Collections.QuantizationType;
import io.qdrant.client.grpc.Collections.ScalarQuantization;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.setOnDisk(true)
.build())
.build())
.setQuantizationConfig(
QuantizationConfig.newBuilder()
.setScalar(
ScalarQuantization.newBuilder()
.setType(QuantizationType.Int8)
.setAlwaysRam(false)
.build())
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true},
quantizationConfig: new QuantizationConfig
{
Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = false }
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
OnDisk: qdrant.PtrOf(true),
}),
QuantizationConfig: qdrant.NewQuantizationScalar(
&qdrant.ScalarQuantization{
Type: qdrant.QuantizationType_Int8,
AlwaysRam: qdrant.PtrOf(false),
},
),
})
```
---
title: Monitoring
weight: 155
aliases:
- ../monitoring
---
# Monitoring
Qdrant exposes its metrics in [Prometheus](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format)/[OpenMetrics](https://github.com/OpenObservability/OpenMetrics) format, so you can integrate them easily
with compatible tools and monitor Qdrant with your own monitoring system. You can
use the `/metrics` endpoint and configure it as a scrape target.
Metrics endpoint: <http://localhost:6333/metrics>
The integration with Qdrant is easy to
[configure](https://prometheus.io/docs/prometheus/latest/getting_started/#configure-prometheus-to-monitor-the-sample-targets)
with Prometheus and Grafana.
## Monitoring multi-node clusters
When scraping metrics from multi-node Qdrant clusters, it is important to scrape from
each node individually instead of using a load-balanced URL. Otherwise, your metrics will appear inconsistent after each scrape.
## Monitoring in Qdrant Cloud
To scrape metrics from a Qdrant cluster running in Qdrant Cloud, note that an [API key](/documentation/cloud/authentication/) is required to access `/metrics`. Qdrant Cloud also supports supplying the API key as a [Bearer token](https://www.rfc-editor.org/rfc/rfc6750.html), which may be required by some providers.
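For a quick check from a script, the endpoint can be read over plain HTTP. Here is a small sketch using only the Python standard library (the URL and the API key are placeholders):

```python
import urllib.request

url = "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333/metrics"
request = urllib.request.Request(url, headers={"api-key": "<paste-your-api-key-here>"})
# If your scraper only supports bearer tokens, the equivalent header would be:
# {"Authorization": "Bearer <paste-your-api-key-here>"}

with urllib.request.urlopen(request) as response:
    body = response.read().decode()

# Print a couple of gauges from the Prometheus text format as a sanity check.
for line in body.splitlines():
    if line.startswith(("app_info", "collections_total")):
        print(line)
```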
## Exposed metrics
Each Qdrant server will expose the following metrics.
| Name | Type | Meaning |
|-------------------------------------|---------|---------------------------------------------------|
| app_info | gauge | Information about Qdrant server |
| app_status_recovery_mode | gauge | If Qdrant is currently started in recovery mode |
| collections_total | gauge | Number of collections |
| collections_vector_total | gauge | Total number of vectors in all collections |
| collections_full_total | gauge | Number of full collections |
| collections_aggregated_total | gauge | Number of aggregated collections |
| rest_responses_total | counter | Total number of responses through REST API |
| rest_responses_fail_total | counter | Total number of failed responses through REST API |
| rest_responses_avg_duration_seconds | gauge | Average response duration in REST API |
| rest_responses_min_duration_seconds | gauge | Minimum response duration in REST API |
| rest_responses_max_duration_seconds | gauge | Maximum response duration in REST API |
| grpc_responses_total | counter | Total number of responses through gRPC API |
| grpc_responses_fail_total | counter | Total number of failed responses through gRPC API |
| grpc_responses_avg_duration_seconds | gauge | Average response duration in gRPC API |
| grpc_responses_min_duration_seconds | gauge | Minimum response duration in gRPC API |
| grpc_responses_max_duration_seconds | gauge | Maximum response duration in gRPC API |
| cluster_enabled | gauge | Whether the cluster support is enabled. 1 - YES |
### Cluster-related metrics
There are also some metrics which are exposed in distributed mode only.
| Name | Type | Meaning |
| -------------------------------- | ------- | ---------------------------------------------------------------------- |
| cluster_peers_total | gauge | Total number of cluster peers |
| cluster_term | counter | Current cluster term |
| cluster_commit | counter | Index of last committed (finalized) operation cluster peer is aware of |
| cluster_pending_operations_total | gauge | Total number of pending operations for cluster peer |
| cluster_voter | gauge | Whether the cluster peer is a voter or learner. 1 - VOTER |
## Kubernetes health endpoints
*Available as of v1.5.0*
Qdrant exposes three endpoints, namely
[`/healthz`](http://localhost:6333/healthz),
[`/livez`](http://localhost:6333/livez) and
[`/readyz`](http://localhost:6333/readyz), to indicate the current status of the
Qdrant server.
These currently provide the most basic status response, returning HTTP 200 if
Qdrant is started and ready to be used.
Regardless of whether an [API key](../security/#authentication) is configured,
the endpoints are always accessible.
You can read more about Kubernetes health endpoints
[here](https://kubernetes.io/docs/reference/using-api/health-checks/).
---
title: Guides
weight: 12
# If the index.md file is empty, the link to the section will be hidden from the sidebar
is_empty: true
---
---
title: Security
weight: 165
aliases:
- ../security
---
# Security
Please read this page carefully. Although there are various ways to secure your Qdrant instances, **they are unsecured by default**.
You need to enable security measures before production use. Otherwise, they are completely open to anyone.
## Authentication
*Available as of v1.2.0*
Qdrant supports a simple form of client authentication using a static API key.
This can be used to secure your instance.
To enable API key based authentication in your own Qdrant instance you must
specify a key in the configuration:
```yaml
service:
# Set an api-key.
# If set, all requests must include a header with the api-key.
# example header: `api-key: <API-KEY>`
#
# If you enable this you should also enable TLS.
# (Either above or via an external service like nginx.)
# Sending an api-key over an unencrypted channel is insecure.
api_key: your_secret_api_key_here
```
Or alternatively, you can use the environment variable:
```bash
export QDRANT__SERVICE__API_KEY=your_secret_api_key_here
```
<aside role="alert"><a href="#tls">TLS</a> must be used to prevent leaking the API key over an unencrypted connection.</aside>
For using API key based authentication in Qdrant Cloud see the cloud
[Authentication](/documentation/cloud/authentication/)
section.
The API key then needs to be present in all REST or gRPC requests to your instance.
All official Qdrant clients for Python, Go, Rust, .NET and Java support the API key parameter.
<!---
Examples with clients
-->
```bash
curl \
-X GET https://localhost:6333 \
--header 'api-key: your_secret_api_key_here'
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(
url="https://localhost:6333",
api_key="your_secret_api_key_here",
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({
url: "http://localhost",
port: 6333,
apiKey: "your_secret_api_key_here",
});
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("https://xyz-example.eu-central.aws.cloud.qdrant.io:6334")
.api_key("<paste-your-api-key-here>")
.build()?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(
QdrantGrpcClient.newBuilder(
"xyz-example.eu-central.aws.cloud.qdrant.io",
6334,
true)
.withApiKey("<paste-your-api-key-here>")
.build());
```
```csharp
using Qdrant.Client;
var client = new QdrantClient(
host: "xyz-example.eu-central.aws.cloud.qdrant.io",
https: true,
apiKey: "<paste-your-api-key-here>"
);
```
```go
import "github.com/qdrant/go-client/qdrant"
client, err := qdrant.NewClient(&qdrant.Config{
Host: "xyz-example.eu-central.aws.cloud.qdrant.io",
Port: 6334,
APIKey: "<paste-your-api-key-here>",
UseTLS: true,
})
```
<aside role="alert">Internal communication channels are <strong>never</strong> protected by an API key nor bearer tokens. Internal gRPC uses port 6335 by default if running in distributed mode. You must ensure that this port is not publicly reachable and can only be used for node communication. By default, this setting is disabled for Qdrant Cloud and the Qdrant Helm chart.</aside>
### Read-only API key
*Available as of v1.7.0*
In addition to the regular API key, Qdrant also supports a read-only API key.
This key can be used to access read-only operations on the instance.
```yaml
service:
read_only_api_key: your_secret_read_only_api_key_here
```
Or with the environment variable:
```bash
export QDRANT__SERVICE__READ_ONLY_API_KEY=your_secret_read_only_api_key_here
```
Both API keys can be used simultaneously.
### Granular access control with JWT
*Available as of v1.9.0*
For more complex cases, Qdrant supports granular access control with [JSON Web Tokens (JWT)](https://jwt.io/).
This allows you to create tokens that grant restricted access to specific parts of the stored data and to build [Role-based access control (RBAC)](https://en.wikipedia.org/wiki/Role-based_access_control) on top of that.
In this way, you can define permissions for users and restrict access to sensitive endpoints.
To enable JWT-based authentication in your own Qdrant instance, you need to specify the `api_key` and enable the `jwt_rbac` feature in the configuration:
```yaml
service:
api_key: your_secret_api_key_here
jwt_rbac: true
```
Or with the environment variables:
```bash
export QDRANT__SERVICE__API_KEY=your_secret_api_key_here
export QDRANT__SERVICE__JWT_RBAC=true
```
The `api_key` you set in the configuration will be used to encode and decode the JWTs, so, needless to say, keep it secure. If your `api_key` changes, all existing tokens become invalid.
To use JWT-based authentication, you need to provide the token as a bearer token in the `Authorization` header, or as a key in the `Api-Key` header of your requests.
```http
Authorization: Bearer <JWT>
// or
Api-Key: <JWT>
```
```python
from qdrant_client import QdrantClient
qdrant_client = QdrantClient(
"xyz-example.eu-central.aws.cloud.qdrant.io",
api_key="<JWT>",
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({
host: "xyz-example.eu-central.aws.cloud.qdrant.io",
apiKey: "<JWT>",
});
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("https://xyz-example.eu-central.aws.cloud.qdrant.io:6334")
.api_key("<JWT>")
.build()?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(
QdrantGrpcClient.newBuilder(
"xyz-example.eu-central.aws.cloud.qdrant.io",
6334,
true)
.withApiKey("<JWT>")
.build());
```
```csharp
using Qdrant.Client;
var client = new QdrantClient(
host: "xyz-example.eu-central.aws.cloud.qdrant.io",
https: true,
apiKey: "<JWT>"
);
```
```go
import "github.com/qdrant/go-client/qdrant"
client, err := qdrant.NewClient(&qdrant.Config{
Host: "xyz-example.eu-central.aws.cloud.qdrant.io",
Port: 6334,
APIKey: "<JWT>",
UseTLS: true,
})
```
#### Generating JSON Web Tokens
Due to the nature of JWT, anyone who knows the `api_key` can generate tokens using any of the existing libraries and tools; access to the Qdrant instance is not required to generate them.
For convenience, we have added a JWT generation tool to the Qdrant Web UI under the 🔑 tab. If you're using the default URL, it will be at `http://localhost:6333/dashboard#/jwt`.
- **JWT Header** - Qdrant uses the `HS256` algorithm to decode the tokens.
```json
{
"alg": "HS256",
"typ": "JWT"
}
```
- **JWT Payload** - You can include any combination of the [parameters available](#jwt-configuration) in the payload. Keep reading for more info on each one.
```json
{
"exp": 1640995200, // Expiration time
"value_exists": ..., // Validate this token by looking for a point with a payload value
"access": "r", // Define the access level.
}
```
**Signing the token** - To confirm that the generated token is valid, it needs to be signed with the `api_key` you have set in the configuration.
This means that someone who knows the `api_key` authorizes the new token to be used in the Qdrant instance.
Qdrant can validate the signature, because it knows the `api_key` and can decode the token.
The process of token generation can be done on the client side offline, and doesn't require any communication with the Qdrant instance.
Here are some libraries that can be used to generate JWT tokens:
- Python: [PyJWT](https://pyjwt.readthedocs.io/en/stable/)
- JavaScript: [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken)
- Rust: [jsonwebtoken](https://crates.io/crates/jsonwebtoken)
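For example, here is a minimal sketch using PyJWT. The collection name and the claims are placeholders (the available claims are described in the next section), and the signing key must match the `api_key` from your configuration:

```python
import datetime

import jwt  # pip install pyjwt

api_key = "your_secret_api_key_here"  # must match service.api_key in the Qdrant config

expiration = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=24)
payload = {
    "exp": int(expiration.timestamp()),  # optional expiration claim (Unix timestamp)
    # Read-only access to a single, hypothetical collection.
    "access": [{"collection": "my_collection", "access": "r"}],
}

token = jwt.encode(payload, api_key, algorithm="HS256")
print(token)  # pass as the `Api-Key` header value or as a Bearer token
```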
#### JWT Configuration
These are the available options, or **claims** in the JWT lingo. You can use them in the JWT payload to define its functionality.
- **`exp`** - The expiration time of the token. This is a Unix timestamp in seconds. The token will be invalid after this time. The check for this claim includes a 30-second leeway to account for clock skew.
```json
{
"exp": 1640995200, // Expiration time
}
```
- **`value_exists`** - This is a claim that can be used to validate the token against the data stored in a collection. Structure of this claim is as follows:
```json
{
"value_exists": {
"collection": "my_validation_collection",
"matches": [
{ "key": "my_key", "value": "value_that_must_exist" }
],
},
}
```
If this claim is present, Qdrant will check if there is a point in the collection with the specified key-values. If it does, the token is valid.
This claim is especially useful if you want to have an ability to revoke tokens without changing the `api_key`.
Consider a case where you have a collection of users, and you want to revoke access to a specific user.
```json
{
"value_exists": {
"collection": "users",
"matches": [
{ "key": "user_id", "value": "andrey" },
{ "key": "role", "value": "manager" }
],
},
}
```
You can create a token with this claim, and when you want to revoke access, you can change the `role` of the user to something else, and the token will be invalid.
- **`access`** - This claim defines the [access level](#table-of-access) of the token. If this claim is present, Qdrant will check if the token has the required access level to perform the operation. If this claim is **not** present, **manage** access is assumed.
It can provide global access with `r` for read-only, or `m` for manage. For example:
```json
{
"access": "r"
}
```
It can also be specific to one or more collections. The `access` level for each collection is `r` for read-only, or `rw` for read-write, like this:
```json
{
"access": [
{
"collection": "my_collection",
"access": "rw"
}
]
}
```
You can also limit access to a subset of the collection by specifying a `payload` restriction that the points must have.
```json
{
"access": [
{
"collection": "my_collection",
"access": "r",
"payload": {
"user_id": "user_123456"
}
}
]
}
```
This `payload` claim will be used to implicitly filter the points in the collection. It will be equivalent to appending this filter to each request:
```json
{ "filter": { "must": [{ "key": "user_id", "match": { "value": "user_123456" } }] } }
```
### Table of access
Check out this table to see which actions are allowed or denied based on the access level.
This also applies when using API keys instead of tokens: `api_key` maps to **manage**, while `read_only_api_key` maps to **read-only**.
<div style="text-align: right"> <strong>Symbols:</strong> ✅ Allowed | ❌ Denied | 🟡 Allowed, but filtered </div>
| Action | manage | read-only | collection read-write | collection read-only | collection with payload claim (r / rw) |
|--------|--------|-----------|----------------------|-----------------------|------------------------------------|
| list collections | ✅ | ✅ | 🟡 | 🟡 | 🟡 |
| get collection info | ✅ | ✅ | ✅ | ✅ | ❌ |
| create collection | ✅ | ❌ | ❌ | ❌ | ❌ |
| delete collection | ✅ | ❌ | ❌ | ❌ | ❌ |
| update collection params | ✅ | ❌ | ❌ | ❌ | ❌ |
| get collection cluster info | ✅ | ✅ | ✅ | ✅ | ❌ |
| collection exists | ✅ | ✅ | ✅ | ✅ | ✅ |
| update collection cluster setup | ✅ | ❌ | ❌ | ❌ | ❌ |
| update aliases | ✅ | ❌ | ❌ | ❌ | ❌ |
| list collection aliases | ✅ | ✅ | 🟡 | 🟡 | 🟡 |
| list aliases | ✅ | ✅ | 🟡 | 🟡 | 🟡 |
| create shard key | ✅ | ❌ | ❌ | ❌ | ❌ |
| delete shard key | ✅ | ❌ | ❌ | ❌ | ❌ |
| create payload index | ✅ | ❌ | ✅ | ❌ | ❌ |
| delete payload index | ✅ | ❌ | ✅ | ❌ | ❌ |
| list collection snapshots | ✅ | ✅ | ✅ | ✅ | ❌ |
| create collection snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
| delete collection snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
| download collection snapshot | ✅ | ✅ | ✅ | ✅ | ❌ |
| upload collection snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
| recover collection snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
| list shard snapshots | ✅ | ✅ | ✅ | ✅ | ❌ |
| create shard snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
| delete shard snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
| download shard snapshot | ✅ | ✅ | ✅ | ✅ | ❌ |
| upload shard snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
| recover shard snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
| list full snapshots | ✅ | ✅ | ❌ | ❌ | ❌ |
| create full snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
| delete full snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
| download full snapshot | ✅ | ✅ | ❌ | ❌ | ❌ |
| get cluster info | ✅ | ✅ | ❌ | ❌ | ❌ |
| recover raft state | ✅ | ❌ | ❌ | ❌ | ❌ |
| delete peer | ✅ | ❌ | ❌ | ❌ | ❌ |
| get point | ✅ | ✅ | ✅ | ✅ | ❌ |
| get points | ✅ | ✅ | ✅ | ✅ | ❌ |
| upsert points | ✅ | ❌ | ✅ | ❌ | ❌ |
| update points batch | ✅ | ❌ | ✅ | ❌ | ❌ |
| delete points | ✅ | ❌ | ✅ | ❌ | ❌ / 🟡 |
| update vectors | ✅ | ❌ | ✅ | ❌ | ❌ |
| delete vectors | ✅ | ❌ | ✅ | ❌ | ❌ / 🟡 |
| set payload | ✅ | ❌ | ✅ | ❌ | ❌ |
| overwrite payload | ✅ | ❌ | ✅ | ❌ | ❌ |
| delete payload | ✅ | ❌ | ✅ | ❌ | ❌ |
| clear payload | ✅ | ❌ | ✅ | ❌ | ❌ |
| scroll points | ✅ | ✅ | ✅ | ✅ | 🟡 |
| query points | ✅ | ✅ | ✅ | ✅ | 🟡 |
| search points | ✅ | ✅ | ✅ | ✅ | 🟡 |
| search groups | ✅ | ✅ | ✅ | ✅ | 🟡 |
| recommend points | ✅ | ✅ | ✅ | ✅ | ❌ |
| recommend groups | ✅ | ✅ | ✅ | ✅ | ❌ |
| discover points | ✅ | ✅ | ✅ | ✅ | ❌ |
| count points | ✅ | ✅ | ✅ | ✅ | 🟡 |
| version | ✅ | ✅ | ✅ | ✅ | ✅ |
| readyz, healthz, livez | ✅ | ✅ | ✅ | ✅ | ✅ |
| telemetry | ✅ | ✅ | ❌ | ❌ | ❌ |
| metrics | ✅ | ✅ | ❌ | ❌ | ❌ |
| update locks | ✅ | ❌ | ❌ | ❌ | ❌ |
| get locks | ✅ | ✅ | ❌ | ❌ | ❌ |
## TLS
*Available as of v1.2.0*
TLS for encrypted connections can be enabled on your Qdrant instance to secure
connections.
<aside role="alert">Connections are unencrypted by default. This allows sniffing and <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">MitM</a> attacks.</aside>
First make sure you have a certificate and private key for TLS, usually in
`.pem` format. On your local machine you may use
[mkcert](https://github.com/FiloSottile/mkcert#readme) to generate a self signed
certificate.
To enable TLS, set the following properties in the Qdrant configuration with the
correct paths and restart:
```yaml
service:
# Enable HTTPS for the REST and gRPC API
enable_tls: true
# TLS configuration.
# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
tls:
# Server certificate chain file
cert: ./tls/cert.pem
# Server private key file
key: ./tls/key.pem
```
For internal communication when running cluster mode, TLS can be enabled with:
```yaml
cluster:
# Configuration of the inter-cluster communication
p2p:
# Use TLS for communication between peers
enable_tls: true
```
With TLS enabled, you must start using HTTPS connections. For example:
```bash
curl -X GET https://localhost:6333
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(
url="https://localhost:6333",
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ url: "https://localhost", port: 6333 });
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("https://localhost:6334").build()?;
```
Certificate rotation is enabled with a default refresh time of one hour. This
reloads certificate files every hour while Qdrant is running. This way changed
certificates are picked up when they get updated externally. The refresh time
can be tuned by changing the `tls.cert_ttl` setting. You can leave this on, even
if you don't plan to update your certificates. Currently this is only supported
for the REST API.
Optionally, you can enable client certificate validation on the server against a
local certificate authority. Set the following properties and restart:
```yaml
service:
# Check user HTTPS client certificate against CA file specified in tls config
verify_https_client_certificate: false
# TLS configuration.
# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
tls:
# Certificate authority certificate file.
# This certificate will be used to validate the certificates
# presented by other nodes during inter-cluster communication.
#
# If verify_https_client_certificate is true, it will verify
# HTTPS client certificate
#
# Required if cluster.p2p.enable_tls is true.
ca_cert: ./tls/cacert.pem
```
## Hardening
We recommend reducing the amount of permissions granted to Qdrant containers so that you can reduce the risk of exploitation. Here are some ways to reduce the permissions of a Qdrant container:
* Run Qdrant as a non-root user. This can help mitigate the risk of future container breakout vulnerabilities. Qdrant does not need the privileges of the root user for any purpose.
- You can use the image `qdrant/qdrant:<version>-unprivileged` instead of the default Qdrant image.
- You can use the flag `--user=1000:2000` when running [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/).
- You can set [`user: 1000`](https://docs.docker.com/compose/compose-file/05-services/#user) when using Docker Compose.
- You can set [`runAsUser: 1000`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context) when running in Kubernetes (our [Helm chart](https://github.com/qdrant/qdrant-helm) does this by default).
* Run Qdrant with a read-only root filesystem. This can help mitigate vulnerabilities that require the ability to modify system files, which is a permission Qdrant does not need. As long as the container uses mounted volumes for storage (`/qdrant/storage` and `/qdrant/snapshots` by default), Qdrant can continue to operate while being prevented from writing data outside of those volumes.
- You can use the flag `--read-only` when running [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/).
- You can set [`read_only: true`](https://docs.docker.com/compose/compose-file/05-services/#read_only) when using Docker Compose.
- You can set [`readOnlyRootFilesystem: true`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context) when running in Kubernetes (our [Helm chart](https://github.com/qdrant/qdrant-helm) does this by default).
* Block Qdrant's external network access. This can help mitigate [server side request forgery attacks](https://owasp.org/www-community/attacks/Server_Side_Request_Forgery), like via the [snapshot recovery API](https://api.qdrant.tech/api-reference/snapshots/recover-from-snapshot). Single-node Qdrant clusters do not require any outbound network access. Multi-node Qdrant clusters only need the ability to connect to other Qdrant nodes via TCP ports 6333, 6334, and 6335.
- You can use [`docker network create --internal <name>`](https://docs.docker.com/reference/cli/docker/network/create/#internal) and use that network when running [`docker run --network <name>`](https://docs.docker.com/reference/cli/docker/container/run/#network).
- You can create an [internal network](https://docs.docker.com/compose/compose-file/06-networks/#internal) when using Docker Compose.
- You can create a [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) when using Kubernetes. Note that multi-node Qdrant clusters [will also need access to cluster DNS in Kubernetes](https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/11-deny-egress-traffic-from-an-application.md#allowing-dns-traffic).
There are other techniques for reducing the permissions such as dropping [Linux capabilities](https://www.man7.org/linux/man-pages/man7/capabilities.7.html) depending on your deployment method, but the methods mentioned above are the most important.
---
title: Private RAG Information Extraction Engine
weight: 32
social_preview_image: /blog/hybrid-cloud-vultr/hybrid-cloud-vultr-tutorial.png
aliases:
- /documentation/tutorials/rag-chatbot-vultr-dspy-ollama/
---
# Private RAG Information Extraction Engine
| Time: 90 min | Level: Advanced | | |
|--------------|-----------------|--|----|
Handling private documents is a common task in many industries. Various businesses possess a large amount of
unstructured data stored as huge files that must be processed and analyzed. Industry reports, financial analysis, legal
documents, and many other documents are stored in PDF, Word, and other formats. Conversational chatbots built on top of
RAG pipelines are one of the viable solutions for finding the relevant answers in such documents. However, if we want to
extract structured information from these documents, and pass them to downstream systems, we need to use a different
approach.
Information extraction is a process of structuring unstructured data into a format that can be easily processed by
machines. In this tutorial, we will show you how to use [DSPy](https://dspy-docs.vercel.app/) to perform that process on
a set of documents. Assuming we cannot send our data to an external service, we will use [Ollama](https://ollama.com/)
to run our own LLM model on our premises, using [Vultr](https://www.vultr.com/) as a cloud provider. Qdrant, acting in
this setup as a knowledge base providing the relevant pieces of documents for a given query, will also be hosted in the
Hybrid Cloud mode on Vultr. The last missing piece, the DSPy application, will also be running in the same environment.
If you work in a regulated industry, or just need to keep your data private, this tutorial is for you.
![Architecture diagram](/documentation/examples/information-extraction-ollama-vultr/architecture-diagram.png)
## Deploying Qdrant Hybrid Cloud on Vultr
All the services we are going to use in this tutorial will be running on [Vultr Kubernetes
Engine](https://www.vultr.com/kubernetes/). That gives us a lot of flexibility in terms of scaling and managing the resources. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS.
1. To start using managed Kubernetes on Vultr, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#vultr).
2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
### Installing the necessary packages
We are going to need a couple of Python packages to run our application. They can be installed together with the
`dspy-ai` package by using the `qdrant` extra:
```shell
pip install dspy-ai[qdrant]
```
### Qdrant Hybrid Cloud
Our [documentation](/documentation/hybrid-cloud/) contains a comprehensive guide on how to set up Qdrant in the Hybrid Cloud mode on Vultr. Please follow it carefully to get your Qdrant instance up and running. Once it's done, we need to store the Qdrant URL and the API key in the environment variables. You can do it by running the following commands:
```shell
export QDRANT_URL="https://qdrant.example.com"
export QDRANT_API_KEY="your-api-key"
```
```python
import os
os.environ["QDRANT_URL"] = "https://qdrant.example.com"
os.environ["QDRANT_API_KEY"] = "your-api-key"
```
DSPy is the framework we are going to use. It is already integrated with Qdrant and assumes you use [FastEmbed](https://qdrant.github.io/fastembed/) to create the embeddings. DSPy does not provide a way to index the data, but leaves this task to the user. We are going to create a collection on our own and fill it with the embeddings of our document chunks.
#### Data indexing
FastEmbed uses `BAAI/bge-small-en` as the default embedding model, and we are going to use it as well. Our collection will be created automatically when we call the `.add` method on an existing `QdrantClient` instance. In this tutorial we are not going to focus much on document parsing, as there are plenty of tools that can help with that. The [`unstructured`](https://github.com/Unstructured-IO/unstructured) library is one of the options you can launch on your own infrastructure. In our simplified example, we are going to use a list of strings as our documents. These are descriptions of made-up technical events. Each of them contains the name of the event along with the location and the start and end dates.
```python
documents = [
"Taking place in San Francisco, USA, from the 10th to the 12th of June, 2024, the Global Developers Conference is the annual gathering spot for developers worldwide, offering insights into software engineering, web development, and mobile applications.",
"The AI Innovations Summit, scheduled for 15-17 September 2024 in London, UK, aims at professionals and researchers advancing artificial intelligence and machine learning.",
"Berlin, Germany will host the CyberSecurity World Conference between November 5th and 7th, 2024, serving as a key forum for cybersecurity professionals to exchange strategies and research on threat detection and mitigation.",
"Data Science Connect in New York City, USA, occurring from August 22nd to 24th, 2024, connects data scientists, analysts, and engineers to discuss data science's innovative methodologies, tools, and applications.",
"Set for July 14-16, 2024, in Tokyo, Japan, the Frontend Developers Fest invites developers to delve into the future of UI/UX design, web performance, and modern JavaScript frameworks.",
"The Blockchain Expo Global, happening May 20-22, 2024, in Dubai, UAE, focuses on blockchain technology's applications, opportunities, and challenges for entrepreneurs, developers, and investors.",
"Singapore's Cloud Computing Summit, scheduled for October 3-5, 2024, is where IT professionals and cloud experts will convene to discuss strategies, architectures, and cloud solutions.",
"The IoT World Forum, taking place in Barcelona, Spain from December 1st to 3rd, 2024, is the premier conference for those focused on the Internet of Things, from smart cities to IoT security.",
"Los Angeles, USA, will become the hub for game developers, designers, and enthusiasts at the Game Developers Arcade, running from April 18th to 20th, 2024, to showcase new games and discuss development tools.",
"The TechWomen Summit in Sydney, Australia, from March 8-10, 2024, aims to empower women in tech with workshops, keynotes, and networking opportunities.",
"Seoul, South Korea's Mobile Tech Conference, happening from September 29th to October 1st, 2024, will explore the future of mobile technology, including 5G networks and app development trends.",
"The Open Source Summit, to be held in Helsinki, Finland from August 11th to 13th, 2024, celebrates open source technologies and communities, offering insights into the latest software and collaboration techniques.",
"Vancouver, Canada will play host to the VR/AR Innovation Conference from June 20th to 22nd, 2024, focusing on the latest in virtual and augmented reality technologies.",
"Scheduled for May 5-7, 2024, in London, UK, the Fintech Leaders Forum brings together experts to discuss the future of finance, including innovations in blockchain, digital currencies, and payment technologies.",
"The Digital Marketing Summit, set for April 25-27, 2024, in New York City, USA, is designed for marketing professionals and strategists to discuss digital marketing and social media trends.",
"EcoTech Symposium in Paris, France, unfolds over 2024-10-09 to 2024-10-11, spotlighting sustainable technologies and green innovations for environmental scientists, tech entrepreneurs, and policy makers.",
"Set in Tokyo, Japan, from 16th to 18th May '24, the Robotic Innovations Conference showcases automation, robotics, and AI-driven solutions, appealing to enthusiasts and engineers.",
"The Software Architecture World Forum in Dublin, Ireland, occurring 22-24 Sept 2024, gathers software architects and IT managers to discuss modern architecture patterns.",
"Quantum Computing Summit, convening in Silicon Valley, USA from 2024/11/12 to 2024/11/14, is a rendezvous for exploring quantum computing advancements with physicists and technologists.",
"From March 3 to 5, 2024, the Global EdTech Conference in London, UK, discusses the intersection of education and technology, featuring e-learning and digital classrooms.",
"Bangalore, India's NextGen DevOps Days, from 28 to 30 August 2024, is a hotspot for IT professionals keen on the latest DevOps tools and innovations.",
"The UX/UI Design Conference, slated for April 21-23, 2024, in New York City, USA, invites discussions on the latest in user experience and interface design among designers and developers.",
"Big Data Analytics Summit, taking place 2024 July 10-12 in Amsterdam, Netherlands, brings together data professionals to delve into big data analysis and insights.",
"Toronto, Canada, will see the HealthTech Innovation Forum from June 8 to 10, '24, focusing on technology's impact on healthcare with professionals and innovators.",
"Blockchain for Business Summit, happening in Singapore from 2024-05-02 to 2024-05-04, focuses on blockchain's business applications, from finance to supply chain.",
"Las Vegas, USA hosts the Global Gaming Expo from October 18th to 20th, 2024, a premiere event for game developers, publishers, and enthusiasts.",
"The Renewable Energy Tech Conference in Copenhagen, Denmark, from 2024/09/05 to 2024/09/07, discusses renewable energy innovations and policies.",
"Set for 2024 Apr 9-11 in Boston, USA, the Artificial Intelligence in Healthcare Summit gathers healthcare professionals to discuss AI's healthcare applications.",
"Nordic Software Engineers Conference, happening in Stockholm, Sweden from June 15 to 17, 2024, focuses on software development in the Nordic region.",
"The International Space Exploration Symposium, scheduled in Houston, USA from 2024-08-05 to 2024-08-07, invites discussions on space exploration technologies and missions."
]
```
We'll be able to ask general questions, for example, about topics we are interested in or events happening in a specific
location, but expect the results to be returned in a structured format.
![An example of extracted information](/documentation/examples/information-extraction-ollama-vultr/extracted-information.png)
Indexing in Qdrant is a single call once the documents are defined and a `QdrantClient` instance is connected to our cluster:
```python
from qdrant_client import QdrantClient

# Connect to the Qdrant Hybrid Cloud cluster configured above
client = QdrantClient(
    os.environ.get("QDRANT_URL"),
    api_key=os.environ.get("QDRANT_API_KEY"),
)

client.add(
    collection_name="document-parts",
    documents=documents,
    metadata=[{"document": document} for document in documents],
)
```
Our collection is ready to be queried. We can now move to the next step, which is setting up the Ollama model.
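Before doing that, you can optionally run a quick semantic query to make sure the ingestion worked. This is just a sanity-check sketch; the query text is arbitrary, and the `.query` method uses the same FastEmbed model under the hood:
```python
# Optional sanity check of the ingested collection
hits = client.query(
    collection_name="document-parts",
    query_text="blockchain events in Asia",
    limit=3,
)
for hit in hits:
    print(hit.score, hit.document)
```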
### Ollama on Vultr
Ollama is a great tool for running LLMs on your own infrastructure. It's designed to be lightweight and easy to use, and [an official Docker image](https://hub.docker.com/r/ollama/ollama) is available. We can use it to run Ollama on our Vultr Kubernetes cluster. In the case of LLMs we may have some special requirements, like a GPU, and Vultr provides the [Vultr Kubernetes Engine for Cloud GPU](https://www.vultr.com/products/cloud-gpu/) so the model can run on a specialized machine. Please refer to the official documentation to get Ollama up and running within your environment.
Once it's done, we need to store the Ollama URL in the environment variable:
```shell
export OLLAMA_URL="https://ollama.example.com"
```
```python
os.environ["OLLAMA_URL"] = "https://ollama.example.com"
```
We will refer to this URL later on when configuring the Ollama model in our application.
#### Setting up the Large Language Model
We are going to use one of the lightweight LLMs available in Ollama, the `gemma:2b` model. It was developed by the Google DeepMind team and has roughly 2.5B parameters. The [Ollama version](https://ollama.com/library/gemma:2b) uses 4-bit quantization.
Installing the model is as simple as running the following command on the machine where Ollama is running:
```shell
ollama run gemma:2b
```
Ollama models are also integrated with DSPy, so we can use them directly in our application.
## Implementing the information extraction pipeline
DSPy is a bit different from other LLM frameworks. It's designed to optimize the prompts and weights of LMs in a pipeline. It's a bit like a compiler for LMs: you write a pipeline in a high-level language, and DSPy generates the prompts and weights for you. This means you can build complex systems without having to worry about the details of how to prompt your LMs, as DSPy will do that for you. It is somewhat similar to PyTorch, but for LLMs.
First of all, we will define the Language Model we are going to use:
```python
import dspy
gemma_model = dspy.OllamaLocal(
model="gemma:2b",
base_url=os.environ.get("OLLAMA_URL"),
max_tokens=500,
)
```
Similarly, we have to wire the Qdrant connection into DSPy. We can reuse the client created earlier, or instantiate a new one, and wrap it in the `QdrantRM` retriever module:
```python
from dspy.retrieve.qdrant_rm import QdrantRM
from qdrant_client import QdrantClient, models
client = QdrantClient(
os.environ.get("QDRANT_URL"),
api_key=os.environ.get("QDRANT_API_KEY"),
)
qdrant_retriever = QdrantRM(
qdrant_collection_name="document-parts",
qdrant_client=client,
)
```
Finally, both components have to be registered in DSPy with a single call to the `dspy.configure` function:
```python
dspy.configure(lm=gemma_model, rm=qdrant_retriever)
```
### Application logic
DSPy has a concept of signatures, which define the input and output formats of a pipeline. We are going to define a simple signature for an event:
```python
class Event(dspy.Signature):
description = dspy.InputField(
desc="Textual description of the event, including name, location and dates"
)
event_name = dspy.OutputField(desc="Name of the event")
location = dspy.OutputField(desc="Location of the event")
start_date = dspy.OutputField(desc="Start date of the event, YYYY-MM-DD")
end_date = dspy.OutputField(desc="End date of the event, YYYY-MM-DD")
```
It is designed to derive structured information from the textual description of an event. Now, we can build our module that will use it, along with Qdrant and the Ollama model. Let's call it `EventExtractor`:
```python
class EventExtractor(dspy.Module):
def __init__(self):
super().__init__()
# Retrieve module to get relevant documents
self.retriever = dspy.Retrieve(k=3)
# Predict module for the created signature
self.predict = dspy.Predict(Event)
def forward(self, query: str):
# Retrieve the most relevant documents
results = self.retriever.forward(query)
# Try to extract events from the retrieved documents
events = []
for document in results.passages:
event = self.predict(description=document)
events.append(event)
return events
```
The logic is simple: we retrieve the most relevant documents from Qdrant, and then try to extract the structured
information from them using the `Event` signature. We can simply call it and see the results:
```python
extractor = EventExtractor()
extractor.forward("Blockchain events close to Europe")
```
Output:
```python
[
Prediction(
event_name='Event Name: Blockchain Expo Global',
location='Dubai, UAE',
start_date='2024-05-20',
end_date='2024-05-22'
),
Prediction(
event_name='Event Name: Blockchain for Business Summit',
location='Singapore',
start_date='2024-05-02',
end_date='2024-05-04'
),
Prediction(
event_name='Event Name: Open Source Summit',
location='Helsinki, Finland',
start_date='2024-08-11',
end_date='2024-08-13'
)
]
```
The task was solved successfully, even without any optimization. However, each of the events has an "Event Name: " prefix that we might want to remove. DSPy allows optimizing the module, so we can improve the results. Optimization can be done in different ways, and it is [well covered in the DSPy documentation](https://dspy-docs.vercel.app/docs/building-blocks/optimizers).
We are not going to go through the optimization process in this tutorial. However, we encourage you to experiment with
it, as it might significantly improve the performance of your pipeline.
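As a rough illustration only (not something we run in this tutorial), an optimization pass with DSPy's `BootstrapFewShot` could look like the sketch below. The training queries and the metric are hypothetical; the metric simply rewards runs whose extracted names carry no unwanted prefix:
```python
from dspy.teleprompt import BootstrapFewShot

# Hypothetical training queries; no gold labels are needed for this metric
trainset = [
    dspy.Example(query="Blockchain events close to Europe").with_inputs("query"),
    dspy.Example(query="AI conferences in the UK").with_inputs("query"),
]

def validate_event_names(example, predictions, trace=None):
    # EventExtractor.forward returns a list of predictions, one per retrieved document
    return all(not p.event_name.startswith("Event Name:") for p in predictions)

optimizer = BootstrapFewShot(metric=validate_event_names)
optimized_extractor = optimizer.compile(EventExtractor(), trainset=trainset)
```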
The created module can easily be stored at a specific path and loaded later on:
```python
extractor.save("event_extractor")
```
To load, just create an instance of the module and call the `load` method:
```python
second_extractor = EventExtractor()
second_extractor.load("event_extractor")
```
This is especially useful when you optimize the module, as the optimized version can be stored and loaded later without redoing the optimization process each time you run the application.
### Deploying the extraction pipeline
Vultr gives us a lot of flexibility in terms of deploying applications. Ideally, we would use the Kubernetes cluster we set up earlier to run the extraction pipeline. The deployment is as simple as running any other Python application. This time we don't need a GPU, as Ollama is already running on a separate machine and DSPy just interacts with it.
## Wrapping up
In this tutorial, we showed you how to set up a private environment for information extraction using DSPy, Ollama, and Qdrant. All the components can be securely hosted on the Vultr cloud, giving you full control over your data. | documentation/examples/rag-chatbot-vultr-dspy-ollama.md |
---
title: "Inference with Mighty"
short_description: "Mighty offers a speedy scalable embedding, a perfect fit for the speedy scalable Qdrant search. Let's combine them!"
description: "We combine Mighty and Qdrant to create a semantic search service in Rust with just a few lines of code."
weight: 17
author: Andre Bogus
author_link: https://llogiq.github.io
date: 2023-06-01T11:24:20+01:00
draft: true
keywords:
- vector search
- embeddings
- mighty
- rust
- semantic search
---
# Semantic Search with Mighty and Qdrant
Much like Qdrant, the [Mighty](https://max.io/) inference server is written in Rust and promises to offer low latency and high scalability. This brief demo combines Mighty and Qdrant into a simple semantic search service that is efficient, affordable and easy to set up. We will use [Rust](https://rust-lang.org) and our [qdrant\_client crate](https://docs.rs/qdrant_client) for this integration.
## Initial setup
For Mighty, start up a [Docker container](https://hub.docker.com/layers/maxdotio/mighty-sentence-transformers/0.9.9/images/sha256-0d92a89fbdc2c211d927f193c2d0d34470ecd963e8179798d8d391a4053f6caf?context=explore) with port 5050 open. Just loading that port in a browser window shows the following:
```json
{
"name": "sentence-transformers/all-MiniLM-L6-v2",
"architectures": [
"BertModel"
],
"model_type": "bert",
"max_position_embeddings": 512,
"labels": null,
"named_entities": null,
"image_size": null,
"source": "https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2"
}
```
Note that this uses the `MiniLM-L6-v2` model from Hugging Face. As per their website, the model "maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search". The distance measure to use is cosine similarity.
Verify that mighty works by calling `curl https://<address>:5050/sentence-transformer?q=hello+mighty`. This will give you a result like (formatted via `jq`):
```json
{
"outputs": [
[
-0.05019686743617058,
0.051746174693107605,
0.048117730766534805,
... (381 values skipped)
]
],
"shape": [
1,
384
],
"texts": [
"Hello mighty"
],
"took": 77
}
```
For Qdrant, follow our [cloud documentation](../../cloud/cloud-quick-start/) to spin up a [free tier](https://cloud.qdrant.io/). Make sure to retrieve an API key.
## Implement model API
For mighty, you will need a way to emit HTTP(S) requests. This version uses the [reqwest](https://docs.rs/reqwest) crate, so add the following to your `Cargo.toml`'s dependencies section:
```toml
[dependencies]
reqwest = { version = "0.11.18", default-features = false, features = ["json", "rustls-tls"] }
```
Mighty offers a variety of model APIs which will download and cache the model on first use. For semantic search, use the `sentence-transformer` API (as in the above `curl` command). The Rust code to make the call is:
```rust
use anyhow::anyhow;
use reqwest::Client;
use serde::Deserialize;
use serde_json::Value as JsonValue;
#[derive(Deserialize)]
struct EmbeddingsResponse {
pub outputs: Vec<Vec<f32>>,
}
pub async fn get_mighty_embedding(
client: &Client,
url: &str,
text: &str
) -> anyhow::Result<Vec<f32>> {
let response = client.get(url).query(&[("text", text)]).send().await?;
if !response.status().is_success() {
return Err(anyhow!(
"Mighty API returned status code {}",
response.status()
));
}
    let embeddings: EmbeddingsResponse = response.json().await?;
    // Ignore multiple embeddings for now and return the first one
    embeddings
        .outputs
        .into_iter()
        .next()
        .ok_or_else(|| anyhow!("mighty returned empty embedding"))
}
```
Note that mighty can return multiple embeddings (if the input is too long to fit the model, it is automatically split).
## Create embeddings and run a query
Use this code to create embeddings both for insertion and search. On the Qdrant side, take the embedding and run a query:
```rust
use anyhow::anyhow;
use qdrant_client::prelude::*;
pub const SEARCH_LIMIT: u64 = 5;
const COLLECTION_NAME: &str = "mighty";
pub async fn qdrant_search_embeddings(
qdrant_client: &QdrantClient,
vector: Vec<f32>,
) -> anyhow::Result<Vec<ScoredPoint>> {
qdrant_client
.search_points(&SearchPoints {
collection_name: COLLECTION_NAME.to_string(),
vector,
limit: SEARCH_LIMIT,
with_payload: Some(true.into()),
..Default::default()
})
.await
.map_err(|err| anyhow!("Failed to search Qdrant: {}", err))
}
```
You can convert the [`ScoredPoint`](https://docs.rs/qdrant-client/latest/qdrant_client/qdrant/struct.ScoredPoint.html)s to fit your desired output format. | documentation/examples/mighty.md |
---
title: Question-Answering System for AI Customer Support
weight: 26
social_preview_image: /blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte-tutorial.png
aliases:
- /documentation/tutorials/rag-customer-support-cohere-airbyte-aws/
---
# Question-Answering System for AI Customer Support
| Time: 120 min | Level: Advanced | |
| --- | --- | --- |
Maintaining top-notch customer service is vital to business success. As your operation expands, so does the influx of customer queries. Many of these queries are repetitive, making automation a time-saving solution.
Your support team's expertise is typically kept private, but you can still use AI to automate responses securely.
In this tutorial we will set up a private AI service that answers customer support queries with high accuracy and effectiveness. By leveraging Cohere's powerful models (deployed to [AWS](https://cohere.com/deployment-options/aws)) with Qdrant Hybrid Cloud, you can create a fully private customer support system. Data synchronization, facilitated by [Airbyte](https://airbyte.com/), will complete the setup.
![Architecture diagram](/documentation/examples/customer-support-cohere-airbyte/architecture-diagram.png)
## System design
The history of past interactions with your customers is not a static dataset. It is constantly evolving, as new questions keep coming in. You probably have a ticketing system that stores all the interactions, or use a different way to communicate with your customers. No matter what the communication channel is, you need to bring the correct answers to the selected Large Language Model and have an established way to do so continuously. Thus, we will build an ingestion pipeline and then a Retrieval Augmented Generation application that will use the data.
- **Dataset:** a [set of Frequently Asked Questions from Qdrant
users](/documentation/faq/qdrant-fundamentals/) as an incrementally updated Excel sheet
- **Embedding model:** Cohere `embed-multilingual-v3.0`, to support different languages with the same pipeline
- **Knowledge base:** Qdrant, running in Hybrid Cloud mode
- **Ingestion pipeline:** [Airbyte](https://airbyte.com/), loading the data into Qdrant
- **Large Language Model:** Cohere [Command-R](https://docs.cohere.com/docs/command-r)
- **RAG:** Cohere [RAG](https://docs.cohere.com/docs/retrieval-augmented-generation-rag) using our knowledge base
through a custom connector
All the selected components are compatible with the [AWS](https://aws.amazon.com/) infrastructure. Thanks to Cohere models' availability, you can build a fully private customer support system that completely isolates data within your infrastructure. Also, if you have AWS credits, you can use them without spending additional money on the models or the semantic search layer.
### Data ingestion
Building a RAG starts with a well-curated dataset. In your specific case you may prefer loading the data directly from
a ticketing system, such as [Zendesk Support](https://airbyte.com/connectors/zendesk-support),
[Freshdesk](https://airbyte.com/connectors/freshdesk), or maybe integrate it with a shared inbox. However, in the case of customer questions, quality over quantity is key. There should be a conscious decision on what data to include in the
knowledge base, so we do not confuse the model with possibly irrelevant information. We'll assume there is an [Excel
sheet](https://docs.airbyte.com/integrations/sources/file) available over HTTP/FTP that Airbyte can access and load into
Qdrant in an incremental manner.
### Cohere <> Qdrant Connector for RAG
Cohere RAG relies on [connectors](https://docs.cohere.com/docs/connectors), which bring additional context to the model. A connector is a web service that implements a specific interface and exposes its data through an HTTP API. With that setup, the Large Language Model becomes responsible for communicating with the connectors, so building a prompt with the context is no longer needed.
### Answering bot
Finally, we want to automate the responses and send them automatically when we are sure that the model is confident
enough. Again, the way such an application should be created strongly depends on the system you are using within the
customer support team. If it exposes a way to set up a webhook whenever a new question is coming in, you can create a
web service and use it to automate the responses. In general, our bot should be created specifically for the platform
you use, so we'll just cover the general idea here and build a simple CLI tool.
## Prerequisites
### Cohere models on AWS
One of the possible ways to deploy Cohere models on AWS is to use AWS SageMaker. Cohere's website has [a detailed
guide on how to deploy the models in that way](https://docs.cohere.com/docs/amazon-sagemaker-setup-guide), so you can
follow the steps described there to set up your own instance.
### Qdrant Hybrid Cloud on AWS
Our documentation covers the deployment of Qdrant on AWS as a Hybrid Cloud Environment, so you can follow the steps described
there to set up your own instance. The deployment process is quite straightforward, and you can have your Qdrant cluster
up and running in a few minutes.
[//]: # (TODO: refer to the documentation on how to deploy Qdrant on AWS)
Once you perform all the steps, your Qdrant cluster should be running on a specific URL. You will need this URL and the
API key to interact with Qdrant, so let's store them both in the environment variables:
```shell
export QDRANT_URL="https://qdrant.example.com"
export QDRANT_API_KEY="your-api-key"
```
```python
import os
os.environ["QDRANT_URL"] = "https://qdrant.example.com"
os.environ["QDRANT_API_KEY"] = "your-api-key"
```
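Optionally, you can verify that the credentials work before moving on. This is a minimal sketch using the Python client:
```python
from qdrant_client import QdrantClient

# Quick connectivity check against the Hybrid Cloud cluster
client = QdrantClient(
    url=os.environ["QDRANT_URL"],
    api_key=os.environ["QDRANT_API_KEY"],
)
print(client.get_collections())
```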
### Airbyte Open Source
Airbyte is an open-source data integration platform that helps you replicate your data in your warehouses, lakes, and
databases. You can install it on your infrastructure and use it to load the data into Qdrant. The installation process
for AWS EC2 is described in the [official documentation](https://docs.airbyte.com/deploying-airbyte/on-aws-ec2).
Please follow the instructions to set up your own instance.
#### Setting up the connection
Once you have an Airbyte up and running, you can configure the connection to load the data from the respective source
into Qdrant. The configuration will require setting up the source and destination connectors. In this tutorial we will
use the following connectors:
- **Source:** [File](https://docs.airbyte.com/integrations/sources/file) to load the data from an Excel sheet
- **Destination:** [Qdrant](https://docs.airbyte.com/integrations/destinations/qdrant) to load the data into Qdrant
The Airbyte UI will guide you through the process of setting up the source and destination and connecting them. Here is how the configuration of the source might look:
![Airbyte source configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-excel-source.png)
Qdrant is our target destination, so we need to set up the connection to it. We need to specify which fields should be
included to generate the embeddings. In our case it makes complete sense to embed just the questions, as we are going
to look for similar questions asked in the past and provide the answers.
![Airbyte destination configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-qdrant-destination.png)
Once we have the destination set up, we can finally configure a connection. The connection will define the schedule
of the data synchronization.
![Airbyte connection configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-connection.png)
Airbyte should now be ready to accept any data updates from the source and load them into Qdrant. You can monitor the
progress of the synchronization in the UI.
## RAG connector
One of our previous tutorials guides you step-by-step through [implementing a custom connector for Cohere RAG](../cohere-rag-connector/) with Cohere Embed v3 and Qdrant. You can simply point it to your Hybrid Cloud Qdrant instance running on AWS. The created connector can be deployed to Amazon Web Services in various ways, even in a [Serverless](https://aws.amazon.com/serverless/) manner using [AWS Lambda](https://aws.amazon.com/lambda/?c=ser&sec=srv).
In general, a RAG connector has to expose a single endpoint that accepts POST requests with a `query` parameter and returns the matching documents as a JSON document with a specific structure. Our FastAPI implementation created [in the related tutorial](../cohere-rag-connector/) is a perfect fit for this task. The only difference is that you should point it to the Cohere models and Qdrant running on AWS infrastructure.
> Our connector is a lightweight web service that exposes a single endpoint and glues the Cohere embedding model with
> our Qdrant Hybrid Cloud instance. Thus, it perfectly fits the serverless architecture, requiring no additional
> infrastructure to run.
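For reference, here is a condensed, illustrative sketch of what such a connector service can look like. The collection name, payload field, and the `COHERE_API_KEY` environment variable are assumptions; follow the related tutorial for the full implementation and adapt the Cohere client to your AWS deployment:
```python
import os

import cohere
from fastapi import FastAPI
from pydantic import BaseModel
from qdrant_client import QdrantClient

app = FastAPI()

cohere_client = cohere.Client(os.environ["COHERE_API_KEY"])  # adapt to your AWS-hosted models
qdrant_client = QdrantClient(
    os.environ["QDRANT_URL"],
    api_key=os.environ["QDRANT_API_KEY"],
)

class SearchQuery(BaseModel):
    query: str

@app.post("/search")
def search(search_query: SearchQuery) -> dict:
    # Embed the incoming query with the same model used during ingestion
    embeddings = cohere_client.embed(
        texts=[search_query.query],
        model="embed-multilingual-v3.0",
        input_type="search_query",
    ).embeddings
    points = qdrant_client.query_points(
        collection_name="customer-support",  # assumed collection name
        query=embeddings[0],
        limit=5,
        with_payload=True,
    ).points
    # Cohere connectors expect a flat {"results": [...]} structure
    return {
        "results": [
            {"text": str(point.payload.get("text", ""))} for point in points
        ]
    }
```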
You can also run the connector as another service within your [Kubernetes cluster running on AWS
(EKS)](https://aws.amazon.com/eks/), or by launching an [EC2](https://aws.amazon.com/ec2/) compute instance. This step
is dependent on the way you deploy your other services, so we'll leave it to you to decide how to run the connector.
Eventually, the web service should be available under a specific URL, and it's a good practice to store it in the
environment variable, so the other services can easily access it.
```shell
export RAG_CONNECTOR_URL="https://rag-connector.example.com/search"
```
```python
os.environ["RAG_CONNECTOR_URL"] = "https://rag-connector.example.com/search"
```
## Customer interface
At this point we have all the data loaded into Qdrant, and the RAG connector is ready to serve the relevant context. The last missing piece is the customer interface that will call the Command model to create the answer. Such a system should be built specifically for the platform you use and integrated into its workflow, but we will build a strong foundation for it and show how to use it in a simple CLI tool.
> Our application does not have to connect to Qdrant anymore, as the model will connect to the RAG connector directly.
First of all, we have to create a connection to Cohere services through the Cohere SDK.
```python
import cohere
# Create a Cohere client pointing to the AWS instance
cohere_client = cohere.Client(...)
```
Next, our connector should be registered. **Please make sure to do it once, and store the id of the connector in the
environment variable or in any other way that will be accessible to the application.**
```python
import os
connector_response = cohere_client.connectors.create(
name="customer-support",
url=os.environ["RAG_CONNECTOR_URL"],
)
# The id returned by the API should be stored for future use
connector_id = connector_response.connector.id
```
Finally, we can create a prompt and get the answer from the model. Additionally, we define which of the connectors
should be used to provide the context, as we may have multiple connectors and want to use specific ones, depending on
some conditions. Let's start with asking a question.
```python
query = "Why Qdrant does not return my vectors?"
```
Now we can send the query to the model, get the response, and possibly send it back to the customer.
```python
response = cohere_client.chat(
message=query,
connectors=[
cohere.ChatConnector(id=connector_id),
],
model="command-r",
)
print(response.text)
```
The output should be the answer to the question, generated by the model, for example:
> Qdrant is set up by default to minimize network traffic and therefore doesn't return vectors in search results. However, you can make Qdrant return your vectors by setting the 'with_vector' parameter of the Search/Scroll function to true.
Customer support should not be fully automated, as some completely new issues might require human intervention. We
should play with prompt engineering and expect the model to provide the answer with a certain confidence level. If the
confidence is too low, we should not send the answer automatically but present it to the support team for review.
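Cohere's chat responses can include citations when the answer is grounded in documents retrieved through the connector, so one simple, illustrative gate is to escalate any answer that has no citations. The helper functions below are hypothetical placeholders for your support platform:
```python
def send_reply(query: str, answer: str) -> None:
    # Placeholder: integrate with your ticketing system here
    print(f"AUTO-REPLY to '{query}':\n{answer}")

def escalate_to_agent(query: str, draft: str) -> None:
    # Placeholder: notify the support team for manual review
    print(f"NEEDS REVIEW for '{query}', draft answer:\n{draft}")

def handle_query(query: str) -> None:
    response = cohere_client.chat(
        message=query,
        connectors=[cohere.ChatConnector(id=connector_id)],
        model="command-r",
    )
    if response.citations:
        send_reply(query, response.text)
    else:
        escalate_to_agent(query, draft=response.text)
```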
## Wrapping up
This tutorial shows how to build a fully private customer support system using Cohere models, Qdrant Hybrid Cloud, and Airbyte, all running on AWS infrastructure. You can ensure your data does not leave your premises and focus on providing
the best customer support experience without bothering your team with repetitive tasks.
| documentation/examples/rag-customer-support-cohere-airbyte-aws.md |
---
title: Movie Recommendation System
weight: 34
social_preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png
aliases:
- /documentation/tutorials/recommendation-system-ovhcloud/
---
# Movie Recommendation System
| Time: 120 min | Level: Advanced | Output: [GitHub](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-OVH.ipynb) |
| --- | --- | --- |
In this tutorial, you will build a mechanism that recommends movies based on defined preferences. Vector databases like Qdrant are good for storing high-dimensional data, such as user and item embeddings. They can enable personalized recommendations by quickly retrieving similar entries based on advanced indexing techniques. In this specific case, we will use [sparse vectors](/articles/sparse-vectors/) to create an efficient and accurate recommendation system.
**Privacy and Sovereignty:** Since preference data is proprietary, it should be stored in a secure and controlled environment. Our vector database can easily be hosted on [OVHcloud](https://ovhcloud.com/), our trusted [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) partner. This means that Qdrant can be run from your OVHcloud region, but the database itself can still be managed from within Qdrant Cloud's interface. Both products have been tested for compatibility and scalability, and we recommend their [managed Kubernetes](https://www.ovhcloud.com/en/public-cloud/kubernetes/) service.
> To see the entire output, use our [notebook with complete instructions](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-OVH.ipynb).
## Components
- **Dataset:** The [MovieLens dataset](https://grouplens.org/datasets/movielens/) contains a list of movies and ratings given by users.
- **Cloud:** [OVHcloud](https://ovhcloud.com/), with managed Kubernetes.
- **Vector DB:** [Qdrant Hybrid Cloud](https://hybrid-cloud.qdrant.tech) running on [OVHcloud](https://ovhcloud.com/).
**Methodology:** We're adopting a collaborative filtering approach to construct a recommendation system from the dataset provided. Collaborative filtering works on the premise that if two users share similar tastes, they're likely to enjoy similar movies. Leveraging this concept, we'll identify users whose ratings align closely with ours, and explore the movies they liked but we haven't seen yet. To do this, we'll represent each user's ratings as a vector in a high-dimensional, sparse space. Using Qdrant, we'll index these vectors and search for users whose ratings vectors closely match ours. Ultimately, we will see which movies were enjoyed by users similar to us.
![](/documentation/examples/recommendation-system-ovhcloud/architecture-diagram.png)
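To make the idea concrete, a single user's ratings can be represented as a sparse vector whose indices are movie IDs and whose values are the (normalized) ratings. A tiny illustrative sketch, with made-up IDs and values:
```python
# Illustrative only: movie IDs and ratings are made up
user_ratings = {2571: 1.0, 1721: -1.0, 296: 0.5}

sparse_vector = {
    "indices": list(user_ratings.keys()),   # movie IDs
    "values": list(user_ratings.values()),  # ratings
}
```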
## Deploying Qdrant Hybrid Cloud on OVHcloud
[Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/) is powered by OVH Public Cloud Instances from OVHcloud, a leading European cloud provider, with OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility.
1. To start using managed Kubernetes on OVHcloud, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#ovhcloud).
2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
## Prerequisites
Download and unzip the MovieLens dataset:
```shell
mkdir -p data
wget https://files.grouplens.org/datasets/movielens/ml-1m.zip
unzip ml-1m.zip -d data
```
The necessary Python libraries are installed using `pip`, including `pandas` for data manipulation, `qdrant-client` for interfacing with Qdrant, and `python-dotenv` for managing environment variables.
```python
!pip install -U \
pandas \
qdrant-client \
  python-dotenv
```
The `.env` file is used to store sensitive information like the Qdrant host URL and API key securely.
```shell
QDRANT_HOST
QDRANT_API_KEY
```
Load all environment variables into the setup:
```python
import os
from dotenv import load_dotenv
load_dotenv('./.env')
```
## Implementation
Load the data from the MovieLens dataset into pandas DataFrames to facilitate data manipulation and analysis.
```python
from qdrant_client import QdrantClient, models
import pandas as pd
```
Load user data:
```python
users = pd.read_csv(
'data/ml-1m/users.dat',
sep='::',
names=['user_id', 'gender', 'age', 'occupation', 'zip'],
    engine='python'
)
users.head()
```
Add movies:
```python
movies = pd.read_csv(
'data/ml-1m/movies.dat',
sep='::',
names=['movie_id', 'title', 'genres'],
    engine='python',
encoding='latin-1'
)
movies.head()
```
Finally, add the ratings:
```python
ratings = pd.read_csv(
'data/ml-1m/ratings.dat',
sep='::',
names=['user_id', 'movie_id', 'rating', 'timestamp'],
    engine='python'
)
ratings.head()
```
### Normalize the ratings
Sparse vectors can take advantage of negative values, so we can normalize ratings to have a mean of 0 and a standard deviation of 1. This normalization ensures that ratings are consistent and centered around zero, enabling accurate similarity calculations. In this scenario, we can also take into account movies that we don't like.
```python
ratings.rating = (ratings.rating - ratings.rating.mean()) / ratings.rating.std()
```
To get the results:
```python
ratings.head()
```
### Data preparation
Now you will transform user ratings into sparse vectors, where each vector represents ratings for different movies. This step prepares the data for indexing in Qdrant.
Afterwards, you will create a collection configured for sparse vectors. For sparse vectors, you don't need to specify the dimension, because it's extracted from the data automatically.
```python
from collections import defaultdict
user_sparse_vectors = defaultdict(lambda: {"values": [], "indices": []})
for row in ratings.itertuples():
user_sparse_vectors[row.user_id]["values"].append(row.rating)
user_sparse_vectors[row.user_id]["indices"].append(row.movie_id)
```
Connect to Qdrant and create a collection called **movielens**:
```python
client = QdrantClient(
url = os.getenv("QDRANT_HOST"),
api_key = os.getenv("QDRANT_API_KEY")
)
client.create_collection(
"movielens",
vectors_config={},
sparse_vectors_config={
"ratings": models.SparseVectorParams()
}
)
```
Upload user ratings to the **movielens** collection in Qdrant as sparse vectors, along with user metadata. This step populates the database with the necessary data for recommendation generation.
```python
def data_generator():
for user in users.itertuples():
yield models.PointStruct(
id=user.user_id,
vector={
"ratings": user_sparse_vectors[user.user_id]
},
payload=user._asdict()
)
client.upload_points(
"movielens",
data_generator()
)
```
## Recommendations
Personal movie ratings are specified, where positive ratings indicate likes and negative ratings indicate dislikes. These ratings serve as the basis for finding similar users with comparable tastes.
Personal ratings are converted into a sparse vector representation suitable for querying Qdrant. This vector represents the user's preferences across different movies.
Let's try to recommend something for ourselves:
```
1 = Like
-1 = Dislike
```
```python
# Search with movies[movies.title.str.contains("Matrix", case=False)].
my_ratings = {
2571: 1, # Matrix
329: 1, # Star Trek
260: 1, # Star Wars
2288: -1, # The Thing
1: 1, # Toy Story
1721: -1, # Titanic
296: -1, # Pulp Fiction
356: 1, # Forrest Gump
2116: 1, # Lord of the Rings
1291: -1, # Indiana Jones
1036: -1 # Die Hard
}
inverse_ratings = {k: -v for k, v in my_ratings.items()}
def to_vector(ratings):
vector = models.SparseVector(
values=[],
indices=[]
)
for movie_id, rating in ratings.items():
vector.values.append(rating)
vector.indices.append(movie_id)
return vector
```
Query Qdrant to find users with similar tastes based on the provided personal ratings. The search returns a list of similar users along with their ratings, facilitating collaborative filtering.
```python
results = client.query_points(
"movielens",
query=to_vector(my_ratings),
using="ratings",
with_vectors=True, # We will use those to find new movies
limit=20
).points
```
Movie scores are computed based on how frequently each movie appears in the ratings of similar users, weighted by their ratings. This step identifies popular movies among users with similar tastes. Calculate how frequently each movie is found in similar users' ratings:
```python
def results_to_scores(results):
movie_scores = defaultdict(lambda: 0)
for user in results:
user_scores = user.vector['ratings']
for idx, rating in zip(user_scores.indices, user_scores.values):
if idx in my_ratings:
continue
movie_scores[idx] += rating
return movie_scores
```
The top-rated movies are sorted based on their scores and printed as recommendations for the user. These recommendations are tailored to the user's preferences and aligned with their tastes. Sort movies by score and print top five:
```python
movie_scores = results_to_scores(results)
top_movies = sorted(movie_scores.items(), key=lambda x: x[1], reverse=True)
for movie_id, score in top_movies[:5]:
print(movies[movies.movie_id == movie_id].title.values[0], score)
```
Result:
```text
Star Wars: Episode V - The Empire Strikes Back (1980) 20.02387858
Star Wars: Episode VI - Return of the Jedi (1983) 16.443184379999998
Princess Bride, The (1987) 15.840068229999996
Raiders of the Lost Ark (1981) 14.94489462
Sixth Sense, The (1999) 14.570322149999999
``` | documentation/examples/recommendation-system-ovhcloud.md |
---
title: Chat With Product PDF Manuals Using Hybrid Search
weight: 27
social_preview_image: /blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png
aliases:
- /documentation/tutorials/hybrid-search-llamaindex-jinaai/
---
# Chat With Product PDF Manuals Using Hybrid Search
| Time: 120 min | Level: Advanced | Output: [GitHub](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-DO-LlamaIndex-Jina-v2.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/HC-demo/HC-DO-LlamaIndex-Jina-v2.ipynb) |
| --- | ----------- | ----------- |----------- |
With the proliferation of digital manuals and the increasing demand for quick and accurate customer support, having a chatbot capable of efficiently parsing through complex PDF documents and delivering precise information can be a game-changer for any business.
In this tutorial, we'll walk you through the process of building a RAG-based chatbot, designed specifically to assist users with understanding the operation of various household appliances.
We'll cover the essential steps required to build your system, including data ingestion, natural language understanding, and response generation for customer support use cases.
## Components
- **Embeddings:** Jina Embeddings, served via the [Jina Embeddings API](https://jina.ai/embeddings/#apiform)
- **Database:** [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/), deployed in a managed Kubernetes cluster on [DigitalOcean
(DOKS)](https://www.digitalocean.com/products/kubernetes)
- **LLM:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) language model on HuggingFace
- **Framework:** [LlamaIndex](https://www.llamaindex.ai/) for extended RAG functionality and [Hybrid Search support](https://docs.llamaindex.ai/en/stable/examples/vector_stores/qdrant_hybrid/).
- **Parser:** [LlamaParse](https://github.com/run-llama/llama_parse) as a way to parse complex documents with embedded objects such as tables and figures.
![Architecture diagram](/documentation/examples/hybrid-search-llamaindex-jinaai/architecture-diagram.png)
### Procedure
Retrieval Augmented Generation (RAG) combines search with language generation. An external information retrieval system is used to identify documents likely to provide information relevant to the user's query. These documents, along with the user's request, are then passed on to a text-generating language model, producing a natural response.
This method enables a language model to respond to questions and access information from a much larger set of documents than it could see otherwise. The language model only looks at a few relevant sections of the documents when generating responses, which also helps to reduce inexplicable errors.
## Prerequisites
### Deploying Qdrant Hybrid Cloud on DigitalOcean
[DigitalOcean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.
1. To start using managed Kubernetes on DigitalOcean, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#digital-ocean).
2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
3. Once it's deployed, you should have a running Qdrant cluster with an API key.
### Development environment
Then, install all dependencies:
```python
!pip install -U \
llama-index \
llama-parse \
python-dotenv \
llama-index-embeddings-jinaai \
llama-index-llms-huggingface \
llama-index-vector-stores-qdrant \
"huggingface_hub[inference]" \
datasets
```
Set up secret key values on `.env` file:
```bash
JINAAI_API_KEY
HF_INFERENCE_API_KEY
LLAMA_CLOUD_API_KEY
QDRANT_HOST
QDRANT_API_KEY
```
Load all environment variables:
```python
import os
from dotenv import load_dotenv
load_dotenv('./.env')
```
## Implementation
### Connect Jina Embeddings and Mixtral LLM
LlamaIndex provides built-in support for the [Jina Embeddings API](https://jina.ai/embeddings/#apiform). To use it, you need to initialize the `JinaEmbedding` object with your API Key and model name.
For the LLM, you need to wrap it in a subclass of `llama_index.llms.CustomLLM` to make it compatible with LlamaIndex.
```python
# connect embeddings
from llama_index.embeddings.jinaai import JinaEmbedding
jina_embedding_model = JinaEmbedding(
model="jina-embeddings-v2-base-en",
api_key=os.getenv("JINAAI_API_KEY"),
)
# connect LLM
from llama_index.llms.huggingface import HuggingFaceInferenceAPI
mixtral_llm = HuggingFaceInferenceAPI(
model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1",
token=os.getenv("HF_INFERENCE_API_KEY"),
)
```
### Prepare data for RAG
This example will use household appliance manuals, which are generally available as PDF documents. We will use [LlamaParse](https://github.com/run-llama/llama_parse) to extract the textual content from the PDFs.
In the `data` folder, we have three documents, and we will use them as the knowledge base in a simple RAG.
The free LlamaIndex Cloud plan is sufficient for our example:
```python
import nest_asyncio
nest_asyncio.apply()
from llama_parse import LlamaParse
llamaparse_api_key = os.getenv("LLAMA_CLOUD_API_KEY")
llama_parse_documents = LlamaParse(api_key=llamaparse_api_key, result_type="markdown").load_data([
"data/DJ68-00682F_0.0.pdf",
"data/F500E_WF80F5E_03445F_EN.pdf",
"data/O_ME4000R_ME19R7041FS_AA_EN.pdf"
])
```
### Store data into Qdrant
The code below does the following:
- creates a vector store with the Qdrant client;
- gets an embedding for each chunk using the Jina Embeddings API;
- combines `sparse` and `dense` vectors for hybrid search;
- stores all data in Qdrant.
Hybrid search with Qdrant must be enabled from the beginning - we can simply set `enable_hybrid=True`.
```python
# By default llamaindex uses OpenAI models
# setting embed_model to Jina and llm model to Mixtral
from llama_index.core import Settings
Settings.embed_model = jina_embedding_model
Settings.llm = mixtral_llm
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.qdrant import QdrantVectorStore
import qdrant_client
client = qdrant_client.QdrantClient(
url=os.getenv("QDRANT_HOST"),
api_key=os.getenv("QDRANT_API_KEY")
)
vector_store = QdrantVectorStore(
client=client, collection_name="demo", enable_hybrid=True, batch_size=20
)
Settings.chunk_size = 512
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents=llama_parse_documents,
storage_context=storage_context
)
```
### Prepare a prompt
Here we will create a custom prompt template. This prompt asks the LLM to use only the context information retrieved from Qdrant. When querying in hybrid mode, we can set `similarity_top_k` and `sparse_top_k` separately:
- `sparse_top_k` represents how many nodes will be retrieved from each dense and sparse query.
- `similarity_top_k` controls the final number of returned nodes. With the settings below, we end up with 2 nodes.
Then, we assemble the query engine using the prompt.
```python
from llama_index.core import PromptTemplate
qa_prompt_tmpl = (
"Context information is below.\n"
"-------------------------------"
"{context_str}\n"
"-------------------------------"
"Given the context information and not prior knowledge,"
"answer the query. Please be concise, and complete.\n"
"If the context does not contain an answer to the query,"
"respond with \"I don't know!\"."
"Query: {query_str}\n"
"Answer: "
)
qa_prompt = PromptTemplate(qa_prompt_tmpl)
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core import get_response_synthesizer
from llama_index.core import Settings
Settings.embed_model = jina_embedding_model
Settings.llm = mixtral_llm
# retriever
retriever = VectorIndexRetriever(
index=index,
similarity_top_k=2,
sparse_top_k=12,
vector_store_query_mode="hybrid"
)
# response synthesizer
response_synthesizer = get_response_synthesizer(
llm=mixtral_llm,
text_qa_template=qa_prompt,
response_mode="compact",
)
# query engine
query_engine = RetrieverQueryEngine(
retriever=retriever,
response_synthesizer=response_synthesizer,
)
```
## Run a test query
Now you can ask questions and receive answers based on the data:
**Question**
```python
result = query_engine.query("What temperature should I use for my laundry?")
print(result.response)
```
**Answer**
```text
The water temperature is set to 70 ˚C during the Eco Drum Clean cycle. You cannot change the water temperature. However, the temperature for other cycles is not specified in the context.
```
And that's it! Feel free to scale this up to as many documents and complex PDFs as you like. | documentation/examples/hybrid-search-llamaindex-jinaai.md |
---
title: Region-Specific Contract Management System
weight: 28
social_preview_image: /blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha-tutorial.png
aliases:
- /documentation/tutorials/rag-contract-management-stackit-aleph-alpha/
---
# Region-Specific Contract Management System
| Time: 90 min | Level: Advanced | |
| --- | --- | --- |
Contract management benefits greatly from Retrieval Augmented Generation (RAG), streamlining the handling of lengthy business contract texts. With AI assistance, complex questions can be asked and well-informed answers generated, facilitating efficient document management. This proves invaluable for businesses with extensive relationships, like shipping companies, construction firms, and consulting practices. Access to such contracts is often restricted to authorized team members due to security and regulatory requirements, such as GDPR in Europe, necessitating secure storage practices.
Companies want their data to be kept and processed within specific geographical boundaries. For that reason, this RAG-centric tutorial focuses on dealing with a region-specific cloud provider. You will set up a contract management system using [Aleph Alpha's](https://aleph-alpha.com/) embeddings and LLM. You will host everything on [STACKIT](https://www.stackit.de/), a German business cloud provider. On this platform, you will run Qdrant Hybrid Cloud as well as the rest of your RAG application. This setup will ensure that your data is stored and processed in Germany.
![Architecture diagram](/documentation/examples/contract-management-stackit-aleph-alpha/architecture-diagram.png)
## Components
A contract management platform is not a simple CLI tool, but an application that should be available to all team
members. It needs an interface to upload, search, and manage the documents. Ideally, the system should be
integrated with org's existing stack, and the permissions/access controls inherited from LDAP or Active
Directory.
> **Note:** In this tutorial, we are going to build a solid foundation for such a system. However, it is up to your organization's setup to implement the entire solution.
- **Dataset** - a collection of documents in different formats, such as PDF or DOCX, scraped from the internet
- **Asymmetric semantic embeddings** - [Aleph Alpha embedding](https://docs.aleph-alpha.com/api/semantic-embed/) to
convert the queries and the documents into vectors
- **Large Language Model** - the [Luminous-extended-control
model](https://docs.aleph-alpha.com/docs/introduction/model-card/), but you can play with a different one from the
Luminous family
- **Qdrant Hybrid Cloud** - a knowledge base to store the vectors and search over the documents
- **STACKIT** - a [German business cloud](https://www.stackit.de) to run the Qdrant Hybrid Cloud and the application
processes
We will implement the process of uploading the documents, converting them into vectors, and storing them in Qdrant.
Then, we will build a search interface to query the documents and get the answers. All of this assumes that the user interacts with the system with a certain set of permissions and can only access the documents they are allowed to see.
## Prerequisites
### Aleph Alpha account
Since you will be using Aleph Alpha's models, [sign up](https://app.aleph-alpha.com/signup) with their managed service and generate an API token in the [User Profile](https://app.aleph-alpha.com/profile). Once you have it ready, store it as an environment variable:
```shell
export ALEPH_ALPHA_API_KEY="<your-token>"
```
```python
import os
os.environ["ALEPH_ALPHA_API_KEY"] = "<your-token>"
```
### Qdrant Hybrid Cloud on STACKIT
Please refer to our documentation to see [how to deploy Qdrant Hybrid Cloud on
STACKIT](/documentation/hybrid-cloud/platform-deployment-options/#stackit). Once you finish the deployment, you will
have the API endpoint to interact with the Qdrant server. Let's store it in the environment variable as well:
```shell
export QDRANT_URL="https://qdrant.example.com"
export QDRANT_API_KEY="your-api-key"
```
```python
os.environ["QDRANT_URL"] = "https://qdrant.example.com"
os.environ["QDRANT_API_KEY"] = "your-api-key"
```
Qdrant will be running on a specific URL, with access restricted by the API key, which is why we stored both as environment variables above.
*Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/).
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="your-api-key"
export LANGCHAIN_PROJECT="your-project" # if not specified, defaults to "default"
```
## Implementation
To build the application, we can use the official SDKs of Aleph Alpha and Qdrant. However, to streamline the process
let's use [LangChain](https://python.langchain.com/docs/get_started/introduction). This framework is already integrated with both services, so we can focus our efforts on
developing business logic.
### Qdrant collection
Aleph Alpha embeddings are high-dimensional vectors by default, with a dimensionality of `5120`. However, a fairly unique feature of that model is that they can be compressed to a size of `128`, with a small drop in accuracy (4-6%, according to the docs). Qdrant can easily store even the original vectors, and it is a good idea to enable [Binary Quantization](/documentation/guides/quantization/#binary-quantization) to save space and make retrieval faster. Let's create a collection with such settings:
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(
location=os.environ["QDRANT_URL"],
api_key=os.environ["QDRANT_API_KEY"],
)
client.create_collection(
collection_name="contracts",
vectors_config=models.VectorParams(
size=5120,
distance=models.Distance.COSINE,
quantization_config=models.BinaryQuantization(
binary=models.BinaryQuantizationConfig(
always_ram=True,
)
)
),
)
```
We are going to use the `contracts` collection to store the vectors of the documents. The `always_ram` flag is set to
`True` to keep the quantized vectors in RAM, which will speed up the search process. We also want to restrict access to individual documents, so only users with the proper permissions can see them. In Qdrant, this can be solved by
adding a payload field that defines who can access the document. We'll call this field `roles` and set it to an array
of strings with the roles that can access the document.
```python
client.create_payload_index(
collection_name="contracts",
field_name="metadata.roles",
field_schema=models.PayloadSchemaType.KEYWORD,
)
```
Since we use LangChain, the `roles` field is nested inside `metadata`, so we have to define the index on
`metadata.roles`. The schema says that the field is a keyword, which means it is a string or an array of strings. We are
going to use the names of the customers as the roles, so access control will be based on the customer name.
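For reference, this is roughly what the payload of a single point looks like once LangChain writes it to Qdrant (the text below is just an illustrative fragment of one of the contracts); the nested structure is the reason the index targets `metadata.roles`:
```python
# Illustrative payload of a single chunk, as stored by LangChain's Qdrant integration
example_payload = {
    "page_content": "The Customer is entitled to perform one audit per calendar year ...",
    "metadata": {
        "source": "data/Data-Processing-Agreement_STACKIT_Cloud_version-1.2.pdf",
        "roles": ["stackit"],
    },
}
```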
### Ingestion pipeline
Semantic search systems rely on high-quality data as their foundation. With the [unstructured integration of Langchain](https://python.langchain.com/docs/integrations/providers/unstructured), ingestion of various document formats like PDFs, Microsoft Word files, and PowerPoint presentations becomes effortless. However, it's crucial to split the text intelligently to avoid converting entire documents into vectors; instead, they should be divided into meaningful chunks. Subsequently, the extracted documents are converted into vectors using Aleph Alpha embeddings and stored in the Qdrant collection.
Let's start by defining the components and connecting them together:
```python
from langchain_community.embeddings import AlephAlphaAsymmetricSemanticEmbedding
from langchain_community.vectorstores import Qdrant
embeddings = AlephAlphaAsymmetricSemanticEmbedding(
model="luminous-base",
aleph_alpha_api_key=os.environ["ALEPH_ALPHA_API_KEY"],
normalize=True,
)
qdrant = Qdrant(
client=client,
collection_name="contracts",
embeddings=embeddings,
)
```
Now it's high time to index our documents. Each of the documents is a separate file, and we also have to know the
customer name to set the access control properly. There might be several roles for a single document, so let's keep them
in a list.
```python
documents = {
"data/Data-Processing-Agreement_STACKIT_Cloud_version-1.2.pdf": ["stackit"],
"data/langchain-terms-of-service.pdf": ["langchain"],
}
```
This is what the documents might look like:
![Example of the indexed document](/documentation/examples/contract-management-stackit-aleph-alpha/indexed-document.png)
Each document has to be split into chunks first; there is no silver bullet for chunking. Our algorithm will be simple
and based on recursive splitting, with a maximum chunk size of 500 characters and an overlap of 100 characters.
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=500,
chunk_overlap=100,
)
```
Now we can iterate over the documents, split them into chunks, convert them into vectors with the Aleph Alpha embedding
model, and store them in Qdrant.
```python
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
for document_path, roles in documents.items():
document_loader = UnstructuredFileLoader(file_path=document_path)
# Unstructured loads each file into a single Document object
loaded_documents = document_loader.load()
for doc in loaded_documents:
doc.metadata["roles"] = roles
# Chunks will have the same metadata as the original document
document_chunks = text_splitter.split_documents(loaded_documents)
# Add the documents to the Qdrant collection
qdrant.add_documents(document_chunks, batch_size=20)
```
Our collection is filled with data, and we can start searching over it. In a real-world scenario, the ingestion process
should be automated and triggered by the new documents uploaded to the system. Since we already use Qdrant Hybrid Cloud
running on Kubernetes, we can easily deploy the ingestion pipeline as a job to the same environment. On STACKIT, you
probably use the [STACKIT Kubernetes Engine (SKE)](https://www.stackit.de/en/product/kubernetes/) and launch it in a
container. The [Compute Engine](https://www.stackit.de/en/product/stackit-compute-engine/) is also an option, but
everything depends on the specifics of your organization.
### Search application
Specialized Document Management Systems have a lot of features, but semantic search is not yet a standard. We are going
to build a simple search mechanism which could be possibly integrated with the existing system. The search process is
quite simple: we convert the query into a vector using the same Aleph Alpha model, and then search for the most similar
documents in the Qdrant collection. The access control is also applied, so the user can only see the documents they are
allowed to.
We start by creating an instance of the LLM of our choice and set the maximum number of tokens to 200, as the default
value of 64 might be too low for our purposes.
```python
from langchain.llms.aleph_alpha import AlephAlpha
llm = AlephAlpha(
model="luminous-extended-control",
aleph_alpha_api_key=os.environ["ALEPH_ALPHA_API_KEY"],
maximum_tokens=200,
)
```
Then, we can glue the components together and build the search process. `RetrievalQA` is a class that implements the
retrieval-based question answering process, with a specified retriever and Large Language Model. The instance of `Qdrant`
can be converted into a retriever, with an additional filter that will be passed to the `similarity_search` method. The filter
is created as [in a regular Qdrant query](../../../documentation/concepts/filtering/), with the `roles` field set to the
user's roles.
```python
user_roles = ["stackit", "aleph-alpha"]
qdrant_retriever = qdrant.as_retriever(
search_kwargs={
"filter": models.Filter(
must=[
models.FieldCondition(
key="metadata.roles",
match=models.MatchAny(any=user_roles)
)
]
)
}
)
```
We set the user roles to `stackit` and `aleph-alpha`, so the user can see the documents that are accessible to these
customers, but not to the others. The final step is to create the `RetrievalQA` instance and use it to search over the
documents, with the custom prompt.
```python
from langchain.prompts import PromptTemplate
from langchain.chains.retrieval_qa.base import RetrievalQA
prompt_template = """
Question: {question}
Answer the question using the Source. If there's no answer, say "NO ANSWER IN TEXT".
Source: {context}
### Response:
"""
prompt = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
retrieval_qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=qdrant_retriever,
return_source_documents=True,
chain_type_kwargs={"prompt": prompt},
)
response = retrieval_qa.invoke({"query": "What are the rules of performing the audit?"})
print(response["result"])
```
Output:
```text
The rules for performing the audit are as follows:
1. The Customer must inform the Contractor in good time (usually at least two weeks in advance) about any and all circumstances related to the performance of the audit.
2. The Customer is entitled to perform one audit per calendar year. Any additional audits may be performed if agreed with the Contractor and are subject to reimbursement of expenses.
3. If the Customer engages a third party to perform the audit, the Customer must obtain the Contractor's consent and ensure that the confidentiality agreements with the third party are observed.
4. The Contractor may object to any third party deemed unsuitable.
```
There are some other parameters that might be tuned to optimize the search process. The `k` parameter defines how many
documents should be returned, and LangChain also allows us to control the retrieval process by choosing the type of
search operation. The default is `similarity`, which is plain vector search, but we can also use `mmr`, which stands for
Maximal Marginal Relevance. It is a technique that diversifies the search results, so the user gets not only the most
relevant documents, but also the most diverse ones. The `mmr` search is slower, but might be more user-friendly.
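For example, a retriever that uses MMR and returns five documents could be configured as follows. This is a minimal sketch: the parameters follow LangChain's `as_retriever` interface, and the access-control filter stays exactly the same as before.
```python
diverse_retriever = qdrant.as_retriever(
    search_type="mmr",
    search_kwargs={
        "k": 5,  # number of documents to return
        "filter": models.Filter(
            must=[
                models.FieldCondition(
                    key="metadata.roles",
                    match=models.MatchAny(any=user_roles),
                )
            ]
        ),
    },
)
```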
Our search application is ready, and we can deploy it to the same environment as the ingestion pipeline on STACKIT. The
same rules apply here, so you can use the SKE or the Compute Engine, depending on the specifics of your organization.
## Next steps
We built a solid foundation for the contract management system, but there is still a lot to do. If you want to make the
system production-ready, you should consider implementing the mechanism into your existing stack. If you have any
questions, feel free to ask on our [Discord community](https://qdrant.to/discord). | documentation/examples/rag-contract-management-stackit-aleph-alpha.md |
---
title: Implement Cohere RAG connector
weight: 24
aliases:
- /documentation/tutorials/cohere-rag-connector/
---
# Implement custom connector for Cohere RAG
| Time: 45 min | Level: Intermediate | | |
|--------------|---------------------|-|----|
The usual approach to implementing Retrieval Augmented Generation requires users to build their prompts with the
relevant context the LLM may rely on, and manually send them to the model. Cohere is quite unique here, as their
models can now speak to external tools and extract meaningful data on their own. You can connect virtually any data
source and let the Cohere LLM know how to access it. Obviously, vector search goes well with LLMs, and enabling semantic
search over your data is a typical use case.
Cohere RAG has lots of interesting features, such as inline citations, which help you to refer to the specific parts of
the documents used to generate the response.
![Cohere RAG citations](/documentation/tutorials/cohere-rag-connector/cohere-rag-citations.png)
*Source: https://docs.cohere.com/docs/retrieval-augmented-generation-rag*
The connectors have to implement a specific interface and expose the data source as an HTTP REST API. The Cohere documentation
[describes a general process of creating a connector](https://docs.cohere.com/docs/creating-and-deploying-a-connector).
This tutorial guides you step by step on building such a service around Qdrant.
## Qdrant connector
You probably already have some collections you would like to bring to the LLM. Maybe your pipeline was set up using some
of the popular libraries such as Langchain, Llama Index, or Haystack. Cohere connectors may implement even more complex
logic, e.g. hybrid search. In our case, we are going to start with a fresh Qdrant collection, index data using Cohere
Embed v3, build the connector, and finally connect it with the [Command-R model](https://txt.cohere.com/command-r/).
### Building the collection
First things first, let's build a collection and configure it for the Cohere `embed-multilingual-v3.0` model. It
produces 1024-dimensional embeddings, and we can choose any of the distance metrics available in Qdrant. Our connector
will act as a personal assistant of a software engineer, and it will expose our notes to suggest the priorities or
actions to perform.
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(
"https://my-cluster.cloud.qdrant.io:6333",
api_key="my-api-key",
)
client.create_collection(
collection_name="personal-notes",
vectors_config=models.VectorParams(
size=1024,
distance=models.Distance.DOT,
),
)
```
Our notes will be represented as simple JSON objects with a `title` and `text` of the specific note. The embeddings will
be created from the `text` field only.
```python
notes = [
{
"title": "Project Alpha Review",
"text": "Review the current progress of Project Alpha, focusing on the integration of the new API. Check for any compatibility issues with the existing system and document the steps needed to resolve them. Schedule a meeting with the development team to discuss the timeline and any potential roadblocks."
},
{
"title": "Learning Path Update",
"text": "Update the learning path document with the latest courses on React and Node.js from Pluralsight. Schedule at least 2 hours weekly to dedicate to these courses. Aim to complete the React course by the end of the month and the Node.js course by mid-next month."
},
{
"title": "Weekly Team Meeting Agenda",
"text": "Prepare the agenda for the weekly team meeting. Include the following topics: project updates, review of the sprint backlog, discussion on the new feature requests, and a brainstorming session for improving remote work practices. Send out the agenda and the Zoom link by Thursday afternoon."
},
{
"title": "Code Review Process Improvement",
"text": "Analyze the current code review process to identify inefficiencies. Consider adopting a new tool that integrates with our version control system. Explore options such as GitHub Actions for automating parts of the process. Draft a proposal with recommendations and share it with the team for feedback."
},
{
"title": "Cloud Migration Strategy",
"text": "Draft a plan for migrating our current on-premise infrastructure to the cloud. The plan should cover the selection of a cloud provider, cost analysis, and a phased migration approach. Identify critical applications for the first phase and any potential risks or challenges. Schedule a meeting with the IT department to discuss the plan."
},
{
"title": "Quarterly Goals Review",
"text": "Review the progress towards the quarterly goals. Update the documentation to reflect any completed objectives and outline steps for any remaining goals. Schedule individual meetings with team members to discuss their contributions and any support they might need to achieve their targets."
},
{
"title": "Personal Development Plan",
"text": "Reflect on the past quarter's achievements and areas for improvement. Update the personal development plan to include new technical skills to learn, certifications to pursue, and networking events to attend. Set realistic timelines and check-in points to monitor progress."
},
{
"title": "End-of-Year Performance Reviews",
"text": "Start preparing for the end-of-year performance reviews. Collect feedback from peers and managers, review project contributions, and document achievements. Consider areas for improvement and set goals for the next year. Schedule preliminary discussions with each team member to gather their self-assessments."
},
{
"title": "Technology Stack Evaluation",
"text": "Conduct an evaluation of our current technology stack to identify any outdated technologies or tools that could be replaced for better performance and productivity. Research emerging technologies that might benefit our projects. Prepare a report with findings and recommendations to present to the management team."
},
{
"title": "Team Building Event Planning",
"text": "Plan a team-building event for the next quarter. Consider activities that can be done remotely, such as virtual escape rooms or online game nights. Survey the team for their preferences and availability. Draft a budget proposal for the event and submit it for approval."
}
]
```
Storing the embeddings along with the metadata is fairly simple.
```python
import cohere
import uuid
cohere_client = cohere.Client(api_key="my-cohere-api-key")
response = cohere_client.embed(
texts=[
note.get("text")
for note in notes
],
model="embed-multilingual-v3.0",
input_type="search_document",
)
client.upload_points(
collection_name="personal-notes",
points=[
models.PointStruct(
id=uuid.uuid4().hex,
vector=embedding,
payload=note,
)
for note, embedding in zip(notes, response.embeddings)
]
)
```
Our collection is now ready to be searched over. In the real world, the set of notes would be changing over time, so the
ingestion process won't be as straightforward. This data is not yet exposed to the LLM, but we will build the connector
in the next step.
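As a rough sketch of what an incremental update could look like, a single new note (the `new_note` below is just an example) can be embedded and upserted without re-indexing the whole collection:
```python
# An example note that was just created by the user
new_note = {
    "title": "Security Training Follow-up",
    "text": "Summarize the key takeaways from the security training and share them with the team by Friday.",
}
new_embedding = cohere_client.embed(
    texts=[new_note["text"]],
    model="embed-multilingual-v3.0",
    input_type="search_document",
)
client.upsert(
    collection_name="personal-notes",
    points=[
        models.PointStruct(
            id=uuid.uuid4().hex,
            vector=new_embedding.embeddings[0],
            payload=new_note,
        )
    ],
)
```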
### Connector web service
[FastAPI](https://fastapi.tiangolo.com/) is a modern web framework and a perfect choice for a simple HTTP API. We are
going to use it for the purposes of our connector. There will be just one endpoint, as required by the model. It will
accept POST requests at the `/search` path. There is a single required `query` parameter. Let's define a corresponding
model.
```python
from pydantic import BaseModel
class SearchQuery(BaseModel):
query: str
```
A RAG connector does not have to return the documents in any specific format. There are [some good practices to follow](https://docs.cohere.com/docs/creating-and-deploying-a-connector#configure-the-connection-between-the-connector-and-the-chat-api),
but Cohere models are quite flexible here. Results just have to be returned as JSON, with a list of objects in a
`results` property of the output. We will use the same document structure as we did for the Qdrant payloads, so no
conversion is required. That requires two additional models to be created.
```python
from typing import List
class Document(BaseModel):
title: str
text: str
class SearchResults(BaseModel):
results: List[Document]
```
Once our model classes are ready, we can implement the logic that will take the query and provide the notes that are
relevant to it. Please note that the LLM is not going to define the number of documents to be returned. It is completely
up to you how many of them you want to bring into the context.
There are two services we need to interact with - the Qdrant server and the Cohere API. FastAPI has a concept of [dependency
injection](https://fastapi.tiangolo.com/tutorial/dependencies/#dependencies), and we will use it to provide both
clients to the implementation.
For queries, we need to set the `input_type` to `search_query` in the calls to the Cohere API.
```python
from fastapi import FastAPI, Depends
from typing import Annotated
app = FastAPI()
# `config` is assumed to be a small settings module (e.g. reading environment variables)
# that exposes QDRANT_URL, QDRANT_API_KEY, and COHERE_API_KEY
def client() -> QdrantClient:
return QdrantClient(config.QDRANT_URL, api_key=config.QDRANT_API_KEY)
def cohere_client() -> cohere.Client:
return cohere.Client(api_key=config.COHERE_API_KEY)
@app.post("/search")
def search(
query: SearchQuery,
client: Annotated[QdrantClient, Depends(client)],
cohere_client: Annotated[cohere.Client, Depends(cohere_client)],
) -> SearchResults:
response = cohere_client.embed(
texts=[query.query],
model="embed-multilingual-v3.0",
input_type="search_query",
)
results = client.query_points(
collection_name="personal-notes",
query=response.embeddings[0],
limit=2,
).points
return SearchResults(
results=[
Document(**point.payload)
for point in results
]
)
```
Our app might be launched locally for development purposes, given we have the `uvicorn` server installed:
```shell
uvicorn main:app
```
FastAPI exposes interactive documentation at `http://localhost:8000/docs`, where we can test our endpoint. The
`/search` endpoint is available there.
![FastAPI documentation](/documentation/tutorials/cohere-rag-connector/fastapi-openapi.png)
We can interact with it and check the documents that will be returned for a specific query. For example, we may want to
recall what we are supposed to do regarding the infrastructure for our projects.
```shell
curl -X "POST" \
-H "Content-type: application/json" \
-d '{"query": "Is there anything I have to do regarding the project infrastructure?"}' \
"http://localhost:8000/search"
```
The output should look like the following:
```json
{
"results": [
{
"title": "Cloud Migration Strategy",
"text": "Draft a plan for migrating our current on-premise infrastructure to the cloud. The plan should cover the selection of a cloud provider, cost analysis, and a phased migration approach. Identify critical applications for the first phase and any potential risks or challenges. Schedule a meeting with the IT department to discuss the plan."
},
{
"title": "Project Alpha Review",
"text": "Review the current progress of Project Alpha, focusing on the integration of the new API. Check for any compatibility issues with the existing system and document the steps needed to resolve them. Schedule a meeting with the development team to discuss the timeline and any potential roadblocks."
}
]
}
```
### Connecting to Command-R
Our web service is implemented, yet running only on our local machine. It has to be exposed to the public before
Command-R can interact with it. For a quick experiment, it might be enough to set up tunneling using services such as
[ngrok](https://ngrok.com/). We won't cover all the details in the tutorial, but their
[Quickstart](https://ngrok.com/docs/guides/getting-started/) is a great resource describing the process step-by-step.
Alternatively, you can also deploy the service with a public URL.
Once it's done, we can create the connector first, and then tell the model to use it, while interacting through the chat
API. Creating a connector is a single call to Cohere client:
```python
connector_response = cohere_client.connectors.create(
name="personal-notes",
url="https:/this-is-my-domain.app/search",
)
```
The `connector_response.connector` will be a descriptor, with `id` being one of the attributes. We'll use this
identifier for our interactions like this:
```python
response = cohere_client.chat(
message=(
"Is there anything I have to do regarding the project infrastructure? "
"Please mention the tasks briefly."
),
connectors=[
cohere.ChatConnector(id=connector_response.connector.id)
],
model="command-r",
)
```
We changed the `model` to `command-r`, as this is currently the best publicly available Cohere model. The
`response.text` is the output of the model:
```text
Here are some of the tasks related to project infrastructure that you might have to perform:
- You need to draft a plan for migrating your on-premise infrastructure to the cloud and come up with a plan for the selection of a cloud provider, cost analysis, and a gradual migration approach.
- It's important to evaluate your current technology stack to identify any outdated technologies. You should also research emerging technologies and the benefits they could bring to your projects.
```
You only need to create a specific connector once! Please do not call `cohere_client.connectors.create` for every single
message you send to the `chat` method.
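If you are not sure whether the connector already exists, you can look it up by name first and create it only when it is missing. The sketch below assumes the `connectors.list()` method of the Cohere Python SDK, which returns the connectors registered for your account:
```python
existing = [
    connector
    for connector in cohere_client.connectors.list().connectors
    if connector.name == "personal-notes"
]
if existing:
    connector_id = existing[0].id
else:
    connector_id = cohere_client.connectors.create(
        name="personal-notes",
        url="https://this-is-my-domain.app/search",
    ).connector.id
```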
## Wrapping up
We have built a Cohere RAG connector that integrates with your existing knowledge base stored in Qdrant. We covered just
the basic flow, but in real-world scenarios, you should also consider, for example, [building the authentication
system](https://docs.cohere.com/docs/connector-authentication) to prevent unauthorized access. | documentation/examples/cohere-rag-connector.md |
---
title: Aleph Alpha Search
weight: 16
draft: true
---
# Multimodal Semantic Search with Aleph Alpha
| Time: 30 min | Level: Beginner | | |
| --- | ----------- | ----------- |----------- |
This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks.
In most cases, semantic search is limited to homogeneous data types for both documents and queries (text-text, image-image, audio-audio, etc.). With the recent growth of multimodal architectures, it is now possible to encode different data types into the same latent space. That opens up some great possibilities, as you can finally explore non-textual data, for example visual, with text queries.
In the past, this would require labelling every image with a description of what it presents. Right now, you can rely on vector embeddings, which can represent all
the inputs in the same space.
*Figure 1: Two examples of text-image pairs presenting a similar object, encoded by a multimodal network into the same
2D latent space. Both texts are examples of English [pangrams](https://en.wikipedia.org/wiki/Pangram).
https://deepai.org generated the images with pangrams used as input prompts.*
![](/docs/integrations/aleph-alpha/2d_text_image_embeddings.png)
## Sample dataset
You will be using [COCO](https://cocodataset.org/), a large-scale object detection, segmentation, and captioning dataset. It provides
various splits, 330,000 images in total. For demonstration purposes, this tutorial uses the
[2017 validation split](http://images.cocodataset.org/zips/val2017.zip) that contains 5000 images from different
categories, with a total size of about 1GB.
```terminal
wget http://images.cocodataset.org/zips/val2017.zip
```
## Prerequisites
There is no need to curate your datasets and train the models. [Aleph Alpha](https://www.aleph-alpha.com/) already has multimodality and multilinguality built-in. There is an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that simplifies the integration.
In order to enable the search capabilities, you need to build the search index to query on. For this example,
you are going to vectorize the images and store their embeddings along with the filenames. You can then return the most
similar files for a given query.
There are a few things you need to set up before you start:
1. You need to have a Qdrant instance running. If you want to launch it locally,
[Docker is the fastest way to do that](/documentation/quick_start/#installation).
2. You need to have a registered [Aleph Alpha account](https://app.aleph-alpha.com/).
3. Upon registration, create an API key (see: [API Tokens](https://app.aleph-alpha.com/profile)).
Now you can store the Aleph Alpha API key in a variable and choose the model you are going to use.
```python
aa_token = "<< your_token >>"
model = "luminous-base"
```
## Vectorize the dataset
In this example, images have been extracted and are stored in the `val2017` directory:
```python
from aleph_alpha_client import (
Prompt,
AsyncClient,
SemanticEmbeddingRequest,
SemanticRepresentation,
Image,
)
from glob import glob
ids, vectors, payloads = [], [], []
async with AsyncClient(token=aa_token) as aa_client:
for i, image_path in enumerate(glob("./val2017/*.jpg")):
# Convert the JPEG file into the embedding by calling
# Aleph Alpha API
prompt = Image.from_file(image_path)
prompt = Prompt.from_image(prompt)
query_params = {
"prompt": prompt,
"representation": SemanticRepresentation.Symmetric,
"compress_to_size": 128,
}
query_request = SemanticEmbeddingRequest(**query_params)
query_response = await aa_client.semantic_embed(request=query_request, model=model)
# Finally store the id, vector and the payload
ids.append(i)
vectors.append(query_response.embedding)
payloads.append({"filename": image_path})
```
## Load embeddings into Qdrant
Add all created embeddings, along with their ids and payloads into the `COCO` collection.
```python
import qdrant_client
from qdrant_client.models import Batch, VectorParams, Distance
client = qdrant_client.QdrantClient()
client.create_collection(
collection_name="COCO",
vectors_config=VectorParams(
size=len(vectors[0]),
distance=Distance.COSINE,
),
)
client.upsert(
collection_name="COCO",
points=Batch(
ids=ids,
vectors=vectors,
payloads=payloads,
),
)
```
## Query the database
The `luminous-base` model can provide you with vectors for both texts and images, which means you can run both
text queries and reverse image search. Assume you want to find images similar to the one below:
![An image used to query the database](/docs/integrations/aleph-alpha/visual_search_query.png)
With the following code snippet, create its vector embedding and then perform the lookup in Qdrant:
```python
async with AsyncClient(token=aa_token) as aa_client:
    prompt = Image.from_file("query.jpg")
prompt = Prompt.from_image(prompt)
query_params = {
"prompt": prompt,
"representation": SemanticRepresentation.Symmetric,
"compress_to_size": 128,
}
query_request = SemanticEmbeddingRequest(**query_params)
query_response = await aa_client.semantic_embed(request=query_request, model=model)
results = client.query_points(
collection_name="COCO",
query=query_response.embedding,
limit=3,
).points
print(results)
```
Here are the results:
![Visual search results](/docs/integrations/aleph-alpha/visual_search_results.png)
**Note:** Aleph Alpha models can provide embeddings for English, French, German, Italian
and Spanish. Your search is not only multimodal, but also multilingual, without any need for translations.
```python
text = "Surfing"
async with AsyncClient(token=aa_token) as aa_client:
query_params = {
"prompt": Prompt.from_text(text),
"representation": SemanticRepresentation.Symmetric,
"compres_to_size": 128,
}
query_request = SemanticEmbeddingRequest(**query_params)
query_response = await aa_client.semantic_embed(request=query_request, model=model)
results = client.query_points(
collection_name="COCO",
query=query_response.embedding,
limit=3,
).points
print(results)
```
Here are the top 3 results for “Surfing”:
![Text search results](/docs/integrations/aleph-alpha/text_search_results.png)
| documentation/examples/aleph-alpha-search.md |
---
title: Private Chatbot for Interactive Learning
weight: 23
social_preview_image: /blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift-tutorial.png
aliases:
- /documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/
---
# Private Chatbot for Interactive Learning
| Time: 120 min | Level: Advanced | | |
| --- | ----------- | ----------- |----------- |
With chatbots, companies can scale their training programs to accommodate a large workforce, delivering consistent and standardized learning experiences across departments, locations, and time zones. Furthermore, having already completed their online training, corporate employees might want to refer back to old course materials. Most of this information is proprietary to the company, and manually searching through an entire library of materials takes time. However, a chatbot built on this knowledge can respond in the blink of an eye.
With a simple RAG pipeline, you can build a private chatbot. In this tutorial, you will combine open source tools inside of a closed infrastructure and tie them together with a reliable framework. This custom solution lets you run a chatbot without public internet access. You will be able to keep sensitive data secure without compromising privacy.
![OpenShift](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/openshift-diagram.png)
**Figure 1:** The LLM and Qdrant Hybrid Cloud are containerized as separate services. Haystack combines them into a RAG pipeline and exposes the API via Hayhooks.
## Components
To maintain complete data isolation, we need to limit ourselves to open-source tools and use them in a private environment, such as [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift). The pipeline will run internally and will be inaccessible from the internet.
- **Dataset:** [Red Hat Interactive Learning Portal](https://developers.redhat.com/learn), an online library of Red Hat course materials.
- **LLM:** `mistralai/Mistral-7B-Instruct-v0.1`, deployed as a standalone service on OpenShift.
- **Embedding Model:** `BAAI/bge-base-en-v1.5`, lightweight embedding model deployed from within the Haystack pipeline
with [FastEmbed](https://github.com/qdrant/fastembed)
- **Vector DB:** [Qdrant Hybrid Cloud](https://hybrid-cloud.qdrant.tech) running on OpenShift.
- **Framework:** [Haystack 2.x](https://haystack.deepset.ai/) to connect all the components and [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks) to serve the app through HTTP endpoints.
### Procedure
The [Haystack](https://haystack.deepset.ai/) framework leverages two pipelines, which combine our components sequentially to process data.
1. The **Indexing Pipeline** will run offline in batches, when new data is added or updated.
2. The **Search Pipeline** will retrieve information from Qdrant and use an LLM to produce an answer.
> **Note:** We will define the pipelines in Python and then export them to YAML format, so that [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks) can run them as a web service.
## Prerequisites
### Deploy the LLM to OpenShift
Follow the steps in [Chapter 6. Serving large language models](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.5/html/working_on_data_science_projects/serving-large-language-models_serving-large-language-models#doc-wrapper). This will download the LLM from the [HuggingFace](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), and deploy it to OpenShift using a *single model serving platform*.
Your LLM service will have a URL, which you need to store as an environment variable.
```shell
export INFERENCE_ENDPOINT_URL="http://mistral-service.default.svc.cluster.local"
```
```python
import os
os.environ["INFERENCE_ENDPOINT_URL"] = "http://mistral-service.default.svc.cluster.local"
```
### Launch Qdrant Hybrid Cloud
Complete **How to Set Up Qdrant on Red Hat OpenShift**. When in Hybrid Cloud, your Qdrant instance is private and its nodes run on the same OpenShift infrastructure as your other components.
Retrieve your Qdrant URL and API key and store them as environment variables:
```shell
export QDRANT_URL="https://qdrant.example.com"
export QDRANT_API_KEY="your-api-key"
```
```python
os.environ["QDRANT_URL"] = "https://qdrant.example.com"
os.environ["QDRANT_API_KEY"] = "your-api-key"
```
## Implementation
We will first create an indexing pipeline to add documents to the system.
Then, the search pipeline will retrieve relevant data from our documents.
After the pipelines are tested, we will export them to YAML files.
### Indexing pipeline
[Haystack 2.x](https://haystack.deepset.ai/) comes packed with a lot of useful components, from data fetching, through
HTML parsing, up to the vector storage. Before we start, there are a few Python packages that we need to install:
```shell
pip install haystack-ai \
qdrant-client \
qdrant-haystack \
fastembed-haystack
```
<aside role="status">
FastEmbed uses ONNX runtime and does not require a GPU for the embedding models while still providing a fast inference speed.
</aside>
Our environment is now ready, so we can jump right into the code. Let's define an empty pipeline and gradually add
components to it:
```python
from haystack import Pipeline
indexing_pipeline = Pipeline()
```
#### Data fetching and conversion
In this step, we will use Haystack's `LinkContentFetcher` to download course content from a list of URLs and store it in Qdrant for retrieval.
As we don't want to store raw HTML, the `HTMLToDocument` converter will extract the text content from each webpage. Since the documents might be pretty long, they will later be divided into digestible chunks.
Let's start with data fetching and text conversion:
```python
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.converters import HTMLToDocument
fetcher = LinkContentFetcher()
converter = HTMLToDocument()
indexing_pipeline.add_component("fetcher", fetcher)
indexing_pipeline.add_component("converter", converter)
```
Our pipeline knows there are two components, but they are not connected yet. We need to define the flow between them:
```python
indexing_pipeline.connect("fetcher.streams", "converter.sources")
```
Each component has a set of inputs and outputs which might be combined in a directed graph. The definitions of the
inputs and outputs are usually provided in the documentation of the component. The `LinkContentFetcher` has the
following parameters:
![Parameters of the `LinkContentFetcher`](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/haystack-link-content-fetcher.png)
*Source: https://docs.haystack.deepset.ai/docs/linkcontentfetcher*
#### Chunking and creating the embeddings
We used `HTMLToDocument` to convert the HTML sources into `Document` instances of Haystack, which is a
base class containing some data to be queried. However, a single document might be too long to be processed by the
embedding model, and it also carries way too much information to make the search relevant.
Therefore, we need to split the document into smaller parts and convert them into embeddings. For this, we will use the
`DocumentSplitter` and `FastembedDocumentEmbedder` pointed to our `BAAI/bge-base-en-v1.5` model:
```python
from haystack.components.preprocessors import DocumentSplitter
from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder
splitter = DocumentSplitter(split_by="sentence", split_length=5, split_overlap=2)
embedder = FastembedDocumentEmbedder(model="BAAI/bge-base-en-v1.5")
embedder.warm_up()
indexing_pipeline.add_component("splitter", splitter)
indexing_pipeline.add_component("embedder", embedder)
indexing_pipeline.connect("converter.documents", "splitter.documents")
indexing_pipeline.connect("splitter.documents", "embedder.documents")
```
#### Writing data to Qdrant
The splitter will be producing chunks with a maximum length of 5 sentences, with an overlap of 2 sentences. Then, these
smaller portions will be converted into embeddings.
Finally, we need to store our embeddings in Qdrant.
```python
from haystack.utils import Secret
from haystack_integrations.document_stores.qdrant import QdrantDocumentStore
from haystack.components.writers import DocumentWriter
document_store = QdrantDocumentStore(
os.environ["QDRANT_URL"],
api_key=Secret.from_env_var("QDRANT_API_KEY"),
index="red-hat-learning",
return_embedding=True,
embedding_dim=768,
)
writer = DocumentWriter(document_store=document_store)
indexing_pipeline.add_component("writer", writer)
indexing_pipeline.connect("embedder.documents", "writer.documents")
```
Our pipeline is now complete. Haystack comes with a handy visualization of the pipeline, so you can see and verify the
connections between the components. It is displayed in the Jupyter notebook, but you can also export it to a file:
```python
indexing_pipeline.draw("indexing_pipeline.png")
```
![Structure of the indexing pipeline](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/indexing_pipeline.png)
#### Test the entire pipeline
We can finally run it on a list of URLs to index the content in Qdrant. We have a bunch of URLs to all the Red Hat
OpenShift Foundations course lessons, so let's use them:
```python
course_urls = [
"https://developers.redhat.com/learn/openshift/foundations-openshift",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:openshift-and-developer-sandbox",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:overview-web-console",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:use-terminal-window-within-red-hat-openshift-web-console",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-github-repository-using-openshift-web-console",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-repository-using-openshift-web-console",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-using-oc-cli-tool",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-using-oc-cli-tool",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:scale-applications-using-openshift-web-console",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:scale-applications-using-oc-cli-tool",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-using-oc-cli-tool",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-web-console",
"https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:view-performance-information-using-openshift-web-console",
]
indexing_pipeline.run(data={
"fetcher": {
"urls": course_urls,
}
})
```
The execution might take a while, as the model needs to process all the documents. After the process is finished, we
should have all the documents stored in Qdrant, ready for search. You should see a short summary of processed documents:
```shell
{'writer': {'documents_written': 381}}
```
### Search pipeline
Our documents are now indexed and ready for search. The next pipeline is a bit simpler, but we still need to define a
few components. Let's start again with an empty pipeline:
```python
search_pipeline = Pipeline()
```
Our second process takes user input, converts it into embeddings and then searches for the most relevant documents
using the query embedding. This might look familiar, but we aren't working with `Document` instances
anymore, since the query only accepts raw text. Thus, some of the components will be different, especially the embedder,
as it has to accept a single string as an input and produce a single embedding as an output:
```python
from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder
from haystack_integrations.components.retrievers.qdrant import QdrantEmbeddingRetriever
query_embedder = FastembedTextEmbedder(model="BAAI/bge-base-en-v1.5")
query_embedder.warm_up()
retriever = QdrantEmbeddingRetriever(
document_store=document_store, # The same document store as the one used for indexing
top_k=3, # Number of documents to return
)
search_pipeline.add_component("query_embedder", query_embedder)
search_pipeline.add_component("retriever", retriever)
search_pipeline.connect("query_embedder.embedding", "retriever.query_embedding")
```
#### Run a test query
If our goal was to just retrieve the relevant documents, we could stop here. Let's try the current pipeline on a simple
query:
```python
query = "How to install an application using the OpenShift web console?"
search_pipeline.run(data={
"query_embedder": {
"text": query
}
})
```
We set the `top_k` parameter to 3, so the retriever should return the three most relevant documents. Your output should look like this:
```text
{
'retriever': {
'documents': [
Document(id=867b4aa4c37a91e72dc7ff452c47972c1a46a279a7531cd6af14169bcef1441b, content: 'Install a Node.js application from GitHub using the web console The following describes the steps r...', meta: {'content_type': 'text/html', 'source_id': 'f56e8f827dda86abe67c0ba3b4b11331d896e2d4f7b2b43c74d3ce973d07be0c', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-web-console'}, score: 0.9209432),
Document(id=0c74381c178597dd91335ebfde790d13bf5989b682d73bf5573c7734e6765af7, content: 'How to remove an application from OpenShift using the web console. In addition to providing the cap...', meta: {'content_type': 'text/html', 'source_id': '2a0759f3ce4a37d9f5c2af9c0ffcc80879077c102fb8e41e576e04833c9d24ce', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-repository-using-openshift-web-console'}, score: 0.9132109500000001),
Document(id=3e5f8923a34ab05611ef20783211e5543e880c709fd6534d9c1f63576edc4061, content: 'Path resource: Install an application from source code in a GitHub repository using the OpenShift w...', meta: {'content_type': 'text/html', 'source_id': 'a4c4cd62d07c0d9d240e3289d2a1cc0a3d1127ae70704529967f715601559089', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-github-repository-using-openshift-web-console'}, score: 0.912748935)
]
}
}
```
#### Generating the answer
Retrieval should serve more than just documents. Therefore, we will need to use an LLM to generate exact answers to our question.
This is the final component of our second pipeline.
Haystack will create a prompt which adds your documents to the model's context.
```python
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import HuggingFaceTGIGenerator
prompt_builder = PromptBuilder("""
Given the following information, answer the question.
Context:
{% for document in documents %}
{{ document.content }}
{% endfor %}
Question: {{ query }}
""")
llm = HuggingFaceTGIGenerator(
model="mistralai/Mistral-7B-Instruct-v0.1",
url=os.environ["INFERENCE_ENDPOINT_URL"],
generation_kwargs={
"max_new_tokens": 1000, # Allow longer responses
},
)
search_pipeline.add_component("prompt_builder", prompt_builder)
search_pipeline.add_component("llm", llm)
search_pipeline.connect("retriever.documents", "prompt_builder.documents")
search_pipeline.connect("prompt_builder.prompt", "llm.prompt")
```
The `PromptBuilder` renders a Jinja2 template that will be filled with the documents and the query. The
`HuggingFaceTGIGenerator` connects to the LLM service and generates the answer. Let's run the pipeline again:
```python
query = "How to install an application using the OpenShift web console?"
response = search_pipeline.run(data={
"query_embedder": {
"text": query
},
"prompt_builder": {
"query": query
},
})
```
The LLM may provide multiple replies, if asked to do so, so let's iterate over them and print them out:
```python
for reply in response["llm"]["replies"]:
print(reply.strip())
```
In our case there is a single response, which should be the answer to the question:
```text
Answer: To install an application using the OpenShift web console, follow these steps:
1. Select +Add on the left side of the web console.
2. Identify the container image to install.
3. Using your web browser, navigate to the Developer Sandbox for Red Hat OpenShift and select Start your Sandbox for free.
4. Install an application from source code stored in a GitHub repository using the OpenShift web console.
```
Our final search pipeline might also be visualized, so we can see how the components are glued together:
```python
search_pipeline.draw("search_pipeline.png")
```
![Structure of the search pipeline](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/search_pipeline.png)
## Deployment
The pipelines are now ready, and we can export them to YAML. Hayhooks will use these files to run the
pipelines as HTTP endpoints. To do this, specify both file paths and your environment variables.
> Note: The indexing pipeline might be run inside your ETL tool, but search should definitely be exposed as an HTTP endpoint.
Let's run it on the local machine:
```shell
pip install hayhooks
```
First of all, we need to save the search pipeline to a YAML file:
```python
with open("search-pipeline.yaml", "w") as fp:
search_pipeline.dump(fp)
```
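The indexing pipeline can be serialized in exactly the same way, in case you also want to deploy it through Hayhooks or keep its definition under version control:
```python
with open("indexing-pipeline.yaml", "w") as fp:
    indexing_pipeline.dump(fp)
```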
And now we are able to run the Hayhooks service:
```shell
hayhooks run
```
The command should start the service on the default port, so you can access it at `http://localhost:1416`. The pipeline
is not deployed yet, but we can do it with just another command:
```shell
hayhooks deploy search-pipeline.yaml
```
Once it's finished, you should be able to see the OpenAPI documentation at
[http://localhost:1416/docs](http://localhost:1416/docs), and test the newly created endpoint.
![Search pipeline in the OpenAPI documentation](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/hayhooks-openapi.png)
Our search is now accessible through the HTTP endpoint, so we can integrate it with any other service. We can even
control the other parameters, like the number of documents to return:
```shell
curl -X 'POST' \
'http://localhost:1416/search-pipeline' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"llm": {
},
"prompt_builder": {
"query": "How can I remove an application?"
},
"query_embedder": {
"text": "How can I remove an application?"
},
"retriever": {
"top_k": 5
}
}'
```
The response should be similar to the one we got in Python before:
```json
{
"llm": {
"replies": [
"\n\nAnswer: You can remove an application running in OpenShift by right-clicking on the circular graphic representing the application in Topology view and selecting the Delete Application text from the dialog that appears when you click the graphic’s outer ring. Alternatively, you can use the oc CLI tool to delete an installed application using the oc delete all command."
],
"meta": [
{
"model": "mistralai/Mistral-7B-Instruct-v0.1",
"index": 0,
"finish_reason": "eos_token",
"usage": {
"completion_tokens": 75,
"prompt_tokens": 642,
"total_tokens": 717
}
}
]
}
}
```
## Next steps
- In this example, [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is the infrastructure of choice for proprietary chatbots. [Read more](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8) about how to host AI projects in their [extensive documentation](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8).
- [Haystack's documentation](https://docs.haystack.deepset.ai/docs/kubernetes) describes [how to deploy the Hayhooks service in a Kubernetes
environment](https://docs.haystack.deepset.ai/docs/kubernetes), so you can easily move it to your own OpenShift infrastructure.
- If you are just getting started and need more guidance on Qdrant, read the [quickstart](/documentation/quick-start/) or try out our [beginner tutorial](/documentation/tutorials/neural-search/). | documentation/examples/rag-chatbot-red-hat-openshift-haystack.md |
---
title: Blog-Reading Chatbot with GPT-4o
weight: 35
social_preview_image: /blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway-tutorial.png
aliases:
- /documentation/tutorials/rag-chatbot-scaleway/
---
# Blog-Reading Chatbot with GPT-4o
| Time: 90 min | Level: Advanced |[GitHub](https://github.com/qdrant/examples/blob/master/langchain-lcel-rag/Langchain-LCEL-RAG-Demo.ipynb)| |
|--------------|-----------------|--|----|
In this tutorial, you will build a RAG system that combines blog content ingestion with the capabilities of semantic search. **OpenAI's GPT-4o LLM** is powerful, but scaling its use requires us to supply context systematically.
RAG enhances the LLM's generation of answers by retrieving relevant documents to aid the question-answering process. This setup showcases the integration of advanced search and AI language processing to improve information retrieval and generation tasks.
A notebook for this tutorial is available on [GitHub](https://github.com/qdrant/examples/blob/master/langchain-lcel-rag/Langchain-LCEL-RAG-Demo.ipynb).
**Data Privacy and Sovereignty:** RAG applications often rely on sensitive or proprietary internal data. Running the entire stack within your own environment becomes crucial for maintaining control over this data. Qdrant Hybrid Cloud deployed on [Scaleway](https://www.scaleway.com/) addresses this need perfectly, offering a secure, scalable platform that still leverages the full potential of RAG. Scaleway offers serverless [Functions](https://www.scaleway.com/en/serverless-functions/) and serverless [Jobs](https://www.scaleway.com/en/serverless-jobs/), both of which are ideal for embedding creation in large-scale RAG cases.
## Components
- **Cloud Host:** [Scaleway on managed Kubernetes](https://www.scaleway.com/en/kubernetes-kapsule/) for compatibility with Qdrant Hybrid Cloud.
- **Vector Database:** Qdrant Hybrid Cloud as the vector search engine for retrieval.
- **LLM:** GPT-4o, developed by OpenAI is utilized as the generator for producing answers.
- **Framework:** [LangChain](https://www.langchain.com/) for extensive RAG capabilities.
![Architecture diagram](/documentation/examples/rag-chatbot-scaleway/architecture-diagram.png)
> Langchain [supports a wide range of LLMs](https://python.langchain.com/docs/integrations/chat/), and GPT-4o is used as the main generator in this tutorial. You can easily swap it out for your preferred model that might be launched on your premises to complete the fully private setup. For the sake of simplicity, we used the OpenAI APIs, but LangChain makes the transition seamless.
## Deploying Qdrant Hybrid Cloud on Scaleway
[Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) and [Kosmos](https://www.scaleway.com/en/kubernetes-kosmos/) are managed Kubernetes services from [Scaleway](https://www.scaleway.com/en/). They abstract away the complexities of managing and operating a Kubernetes cluster. The primary difference being, Kapsule clusters are composed solely of Scaleway Instances. Whereas, a Kosmos cluster is a managed multi-cloud Kubernetes engine that allows you to connect instances from any cloud provider to a single managed Control-Plane.
1. To start using managed Kubernetes on Scaleway, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#scaleway).
2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
## Prerequisites
To prepare the environment for working with Qdrant and related libraries, it's necessary to install all required Python packages. This can be done using Poetry, a tool for dependency management and packaging in Python. The code snippet imports various libraries essential for the tasks ahead, including `bs4` for parsing HTML and XML documents, `langchain` and its community extensions for working with language models and document loaders, and `Qdrant` for vector storage and retrieval. These imports lay the groundwork for utilizing Qdrant alongside other tools for natural language processing and machine learning tasks.
Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well:
```shell
export QDRANT_URL="https://qdrant.example.com"
export QDRANT_API_KEY="your-api-key"
```
*Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/).
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="your-api-key"
export LANGCHAIN_PROJECT="your-project" # if not specified, defaults to "default"
```
Now you can get started:
```python
import getpass
import os
import bs4
from langchain import hub
from langchain_community.document_loaders import WebBaseLoader
from langchain_qdrant import Qdrant
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
```
Set up the OpenAI API key:
```python
os.environ["OPENAI_API_KEY"] = getpass.getpass()
```
Initialize the language model:
```python
llm = ChatOpenAI(model="gpt-4o")
```
It is here that we configure both the Embeddings and LLM. You can replace this with your own models using Ollama or other services. Scaleway has some great [L4 GPU Instances](https://www.scaleway.com/en/l4-gpu-instance/) you can use for compute here.
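As a minimal sketch of such a swap, assuming you have an [Ollama](https://ollama.com/) server running locally with a model already pulled, the LangChain integration can be dropped in as a replacement for `ChatOpenAI`:
```python
from langchain_community.chat_models import ChatOllama
# Assumes a local Ollama server, e.g. after running `ollama pull llama3`
llm = ChatOllama(model="llama3", base_url="http://localhost:11434")
```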
## Download and parse data
To begin working with blog post contents, the process involves loading and parsing the HTML content. This is achieved using `urllib` and `BeautifulSoup`, which are tools designed for such tasks. After the content is loaded and parsed, it is indexed using Qdrant, a powerful tool for managing and querying vector data. The code snippet demonstrates how to load, chunk, and index the contents of a blog post by specifying the URL of the blog and the specific HTML elements to parse. This step is crucial for preparing the data for further processing and analysis with Qdrant.
```python
# Load, chunk and index the contents of the blog.
loader = WebBaseLoader(
web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
bs_kwargs=dict(
parse_only=bs4.SoupStrainer(
class_=("post-content", "post-title", "post-header")
)
),
)
docs = loader.load()
```
### Chunking data
When dealing with large documents, such as a blog post exceeding 42,000 characters, it's crucial to manage the data efficiently for processing. Many models have a limited context window and struggle with long inputs, making it difficult to extract or find relevant information. To overcome this, the document is divided into smaller chunks. This approach enhances the model's ability to process and retrieve the most pertinent sections of the document effectively.
In this scenario, the document is split into chunks using the `RecursiveCharacterTextSplitter` with a specified chunk size and overlap. This method ensures that no critical information is lost between chunks. Following the splitting, these chunks are then indexed into Qdrant—a vector database for efficient similarity search and storage of embeddings. The `Qdrant.from_documents` function is utilized for indexing, with documents being the split chunks and embeddings generated through `OpenAIEmbeddings`. The entire process is facilitated within an in-memory database, signifying that the operations are performed without the need for persistent storage, and the collection is named "lilianweng" for reference.
This chunking and indexing strategy significantly improves the management and retrieval of information from large documents, making it a practical solution for handling extensive texts in data processing workflows.
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
vectorstore = Qdrant.from_documents(
documents=splits,
embedding=OpenAIEmbeddings(),
collection_name="lilianweng",
url=os.environ["QDRANT_URL"],
api_key=os.environ["QDRANT_API_KEY"],
)
```
## Retrieve and generate content
The `vectorstore` is used as a retriever to fetch relevant documents based on vector similarity. The `hub.pull("rlm/rag-prompt")` function is used to pull a specific prompt from a repository, which is designed to work with retrieved documents and a question to generate a response.
The `format_docs` function formats the retrieved documents into a single string, preparing them for further processing. This formatted string, along with a question, is passed through a chain of operations. Firstly, the context (formatted documents) and the question are processed by the retriever and the prompt. Then, the result is fed into a large language model (`llm`) for content generation. Finally, the output is parsed into a string format using `StrOutputParser()`.
This chain of operations demonstrates a sophisticated approach to information retrieval and content generation, leveraging both the semantic understanding capabilities of vector search and the generative prowess of large language models.
Now, retrieve and generate data using relevant snippets from the blog:
```python
retriever = vectorstore.as_retriever()
prompt = hub.pull("rlm/rag-prompt")
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
rag_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
```
### Invoking the RAG Chain
```python
rag_chain.invoke("What is Task Decomposition?")
```
## Next steps
We built a solid foundation for a simple chatbot, but there is still a lot to do. If you want to make the
system production-ready, you should consider integrating this mechanism into your existing stack.
Our vector database can easily be hosted on [Scaleway](https://www.scaleway.com/), our trusted [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) partner. This means that Qdrant can be run from your Scaleway region, while the database itself is still managed from within Qdrant Cloud's interface. Both products have been tested for compatibility and scalability, and we recommend Scaleway's [managed Kubernetes](https://www.scaleway.com/en/kubernetes-kapsule/) service.
Scaleway's deployment regions in France are excellent for network latency and data sovereignty. For hosted GPUs, try their [L4 GPU Instances](https://www.scaleway.com/en/l4-gpu-instance/).
If you have any questions, feel free to ask on our [Discord community](https://qdrant.to/discord).
| documentation/examples/rag-chatbot-scaleway.md |
---
title: Multitenancy with LlamaIndex
weight: 18
aliases:
- /documentation/tutorials/llama-index-multitenancy/
---
# Multitenancy with LlamaIndex
If you are building a service that serves vectors for many independent users, and you want to isolate their
data, the best practice is to use a single collection with payload-based partitioning. This approach is
called **multitenancy**. Our [Separate Partitions](/documentation/guides/multiple-partitions/) guide describes
how to set it up in general, but if you use [LlamaIndex](/documentation/integrations/llama-index/) as a
backend, you may prefer more specific instructions. So here they are!
## Prerequisites
This tutorial assumes that you have already installed Qdrant and LlamaIndex. If you haven't, please run the
following commands:
```bash
pip install llama-index llama-index-vector-stores-qdrant
```
We are going to use a local Docker-based instance of Qdrant. If you want to use a remote instance, please
adjust the code accordingly. Here is how we can start a local instance:
```bash
docker run -d --name qdrant -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest
```
## Setting up LlamaIndex pipeline
We are going to implement an end-to-end example of a multitenant application using LlamaIndex. We'll be
indexing the documentation of different Python libraries, and we definitely don't want any users to see
results coming from a library they are not interested in. In real-world scenarios, this is even more dangerous,
as the documents may contain sensitive information.
### Creating vector store
[QdrantVectorStore](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo.html) is a
wrapper around Qdrant that provides all the necessary methods to work with your vector database in LlamaIndex.
Let's create a vector store for our collection. It requires setting a collection name and passing an instance
of `QdrantClient`.
```python
from qdrant_client import QdrantClient
from llama_index.vector_stores.qdrant import QdrantVectorStore
client = QdrantClient("http://localhost:6333")
vector_store = QdrantVectorStore(
collection_name="my_collection",
client=client,
)
```
### Defining chunking strategy and embedding model
Any semantic search application requires a way to convert text queries into vectors - an embedding model.
`ServiceContext` is a bundle of commonly used resources needed during the indexing and querying stages of any
LlamaIndex application. We can also use it to set up an embedding model - in our case, a local
[BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5).
```python
from llama_index.core import ServiceContext
service_context = ServiceContext.from_defaults(
embed_model="local:BAAI/bge-small-en-v1.5",
)
```
*Note:* if you are using a Large Language Model other than OpenAI's ChatGPT, you should specify the
`llm` parameter of `ServiceContext`.
We can also control how our documents are split into chunks, or *nodes* in LlamaIndex's terminology.
The `SimpleNodeParser` splits documents into fixed-length chunks with an overlap. The defaults are
reasonable, but we can also adjust them if we want to. Both values are defined in tokens.
```python
from llama_index.core.node_parser import SimpleNodeParser
node_parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=32)
```
Now we also need to inform the `ServiceContext` about our choices:
```python
service_context = ServiceContext.from_defaults(
    embed_model="local:BAAI/bge-small-en-v1.5",
node_parser=node_parser,
)
```
Both embedding model and selected node parser will be implicitly used during the indexing and querying.
### Combining everything together
The last missing piece, before we can start indexing, is the `VectorStoreIndex`. It is a wrapper around
`VectorStore` that provides a convenient interface for indexing and querying. It also requires a
`ServiceContext` to be initialized.
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_vector_store(
vector_store=vector_store, service_context=service_context
)
```
## Indexing documents
No matter how our documents are generated, LlamaIndex will automatically split them into nodes, if
required, encode them using the selected embedding model, and then store them in the vector store. Let's define
some documents manually and insert them into the Qdrant collection. Our documents are going to have
a single metadata attribute - the name of the library they belong to.
```python
from llama_index.core.schema import Document
documents = [
Document(
text="LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.",
metadata={
"library": "llama-index",
},
),
Document(
text="Qdrant is a vector database & vector similarity search engine.",
metadata={
"library": "qdrant",
},
),
]
```
Now we can index them using our `VectorStoreIndex`:
```python
for document in documents:
index.insert(document)
```
### Performance considerations
Our documents have been split into nodes, encoded using the embedding model, and stored in the vector
store. However, we don't want to allow our users to search for all the documents in the collection,
but only for the documents that belong to a library they are interested in. For that reason, we need
to set up the Qdrant [payload index](/documentation/concepts/indexing/#payload-index), so the search
is more efficient.
```python
from qdrant_client import models
client.create_payload_index(
collection_name="my_collection",
field_name="metadata.library",
field_type=models.PayloadSchemaType.KEYWORD,
)
```
The payload index is not the only thing we want to change. Since none of the search
queries will be executed on the whole collection, we can also change its configuration, so that the HNSW
graph is not built globally. This is also done for [performance reasons](/documentation/guides/multiple-partitions/#calibrate-performance).
**You should not change these parameters if you know there will be global search operations
on the collection.**
```python
client.update_collection(
collection_name="my_collection",
hnsw_config=models.HnswConfigDiff(payload_m=16, m=0),
)
```
Once both operations are completed, we can start searching for our documents.
<aside role="status">These steps are done just once, when you index your first documents!</aside>
## Querying documents with constraints
Let's assume we are searching for some information about large language models, but are only allowed to
use Qdrant documentation. LlamaIndex has a concept of retrievers, responsible for finding the most
relevant nodes for a given query. Our `VectorStoreIndex` can be used as a retriever, with some additional
constraints - in our case value of the `library` metadata attribute.
```python
from llama_index.core.vector_stores.types import MetadataFilters, ExactMatchFilter
qdrant_retriever = index.as_retriever(
filters=MetadataFilters(
filters=[
ExactMatchFilter(
key="library",
value="qdrant",
)
]
)
)
nodes_with_scores = qdrant_retriever.retrieve("large language models")
for node in nodes_with_scores:
print(node.text, node.score)
# Output: Qdrant is a vector database & vector similarity search engine. 0.60551536
```
The description of Qdrant was the best match, even though it didn't mention large language models
at all. However, it was the only document that belonged to the `qdrant` library, so there was no
other choice. Let's try to search for something that is not present in the collection.
Let's define another retriever, this time for the `llama-index` library:
```python
llama_index_retriever = index.as_retriever(
filters=MetadataFilters(
filters=[
ExactMatchFilter(
key="library",
value="llama-index",
)
]
)
)
nodes_with_scores = llama_index_retriever.retrieve("large language models")
for node in nodes_with_scores:
print(node.text, node.score)
# Output: LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. 0.63576734
```
The results returned by both retrievers are different, due to the different constraints, so we implemented
a real multitenant search application!
| documentation/examples/llama-index-multitenancy.md |
---
title: Build Prototypes
weight: 19
---
# Examples
| End-to-End Code Samples | Description | Stack |
|---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------|
| [Multitenancy with LlamaIndex](../examples/llama-index-multitenancy/) | Handle data coming from multiple users in LlamaIndex. | Qdrant, Python, LlamaIndex |
| [Implement custom connector for Cohere RAG](../examples/cohere-rag-connector/) | Bring data stored in Qdrant to Cohere RAG | Qdrant, Cohere, FastAPI |
| [Chatbot for Interactive Learning](../examples/rag-chatbot-red-hat-openshift-haystack/) | Build a Private RAG Chatbot for Interactive Learning | Qdrant, Haystack, OpenShift |
| [Information Extraction Engine](../examples/rag-chatbot-vultr-dspy-ollama/) | Build a Private RAG Information Extraction Engine | Qdrant, Vultr, DSPy, Ollama |
| [System for Employee Onboarding](../examples/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) | Build a RAG System for Employee Onboarding | Qdrant, Cohere, LangChain |
| [System for Contract Management](../examples/rag-contract-management-stackit-aleph-alpha/) | Build a Region-Specific RAG System for Contract Management | Qdrant, Aleph Alpha, STACKIT |
| [Question-Answering System for Customer Support](../examples/rag-customer-support-cohere-airbyte-aws/) | Build a RAG System for AI Customer Support | Qdrant, Cohere, Airbyte, AWS |
| [Hybrid Search on PDF Documents](../examples/hybrid-search-llamaindex-jinaai/) | Develop a Hybrid Search System for Product PDF Manuals | Qdrant, LlamaIndex, Jina AI |
| [Blog-Reading RAG Chatbot](../examples/rag-chatbot-scaleway) | Develop a RAG-based Chatbot on Scaleway with LangChain | Qdrant, LangChain, GPT-4o |
| [Movie Recommendation System](../examples/recommendation-system-ovhcloud/) | Build a Movie Recommendation System with LlamaIndex and JinaAI | Qdrant |
## Notebooks
Our notebooks offer complete instructions supported by thorough explanations. Follow along by trying out the code and get the most out of each example.
| Example | Description | Stack |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------------------------|
| [Intro to Semantic Search and Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant |
| [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant |
| [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant |
| [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant |
| [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere |
| [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant |
| [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant |
| [Basic RAG](https://githubtocolab.com/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb) | Basic RAG pipeline with Qdrant and OpenAI SDKs. | OpenAI, Qdrant, FastEmbed |
| documentation/examples/_index.md |
---
title: RAG System for Employee Onboarding
weight: 30
social_preview_image: /blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure-tutorial.png
aliases:
- /documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/
---
# RAG System for Employee Onboarding
Public websites are a great way to share information with a wide audience. However, finding the right information can be
challenging if you are not familiar with the website's structure or the terminology used. That's what the search bar is
for, but it is not always easy to formulate a query that returns the desired results if you are not yet familiar
with the content. This is even more important in a corporate environment, especially for new employees who are just
starting to learn the ropes and don't yet know how to ask the right questions. You may have the best intranet
pages, but onboarding is more than just reading documentation; it is about understanding the processes. Semantic
search can make finding the right resources easier, but wouldn't it be even better to just chat with the website, like you
would with a colleague?
Technological advancements have made it possible to interact with websites using natural language. This tutorial will
guide you through the process of integrating [Cohere](https://cohere.com/)'s language models with Qdrant to enable
natural language search on your documentation. We are going to use [LangChain](https://langchain.com/) as an
orchestrator. Everything will be hosted on [Oracle Cloud Infrastructure (OCI)](https://www.oracle.com/cloud/), so you
can scale your application as needed, and do not send your data to third parties. That is especially important when you
are working with confidential or sensitive data.
## Building up the application
Our application will consist of two main processes: indexing and searching. Langchain will glue everything together,
as we will use a few components, including Cohere and Qdrant, as well as some OCI services. Here is a high-level
overview of the architecture:
![Architecture diagram of the target system](/documentation/examples/faq-oci-cohere-langchain/architecture-diagram.png)
### Prerequisites
Before we dive into the implementation, make sure to set up all the necessary accounts and tools.
#### Libraries
We are going to use a few Python libraries. Of course, Langchain will be our main framework, but the Cohere models on
OCI are accessible via the [OCI SDK](https://docs.oracle.com/en-us/iaas/tools/python/2.125.1/). Let's install all the
necessary libraries:
```shell
pip install langchain oci qdrant-client langchainhub
```
#### Oracle Cloud
Our application will be fully running on Oracle Cloud Infrastructure (OCI). It's up to you to choose how you want to
deploy your application. Qdrant Hybrid Cloud will be running in your [Kubernetes cluster running on Oracle Cloud
(OKE)](https://www.oracle.com/cloud/cloud-native/container-engine-kubernetes/), so all the processes might be also
deployed there. You can get started with signing up for an account on [Oracle Cloud](https://signup.cloud.oracle.com/).
Cohere models are available on OCI as a part of the [Generative AI
Service](https://www.oracle.com/artificial-intelligence/generative-ai/generative-ai-service/). We need both the
[Generation models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/use-playground-generate.htm) and the
[Embedding models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/use-playground-embed.htm). Please follow the
linked tutorials to grasp the basics of using Cohere models there.
Accessing the models programmatically requires knowing the compartment OCID. Please refer to the [documentation that
describes how to find it](https://docs.oracle.com/en-us/iaas/Content/GSG/Tasks/contactingsupport_topic-Locating_Oracle_Cloud_Infrastructure_IDs.htm#Finding_the_OCID_of_a_Compartment).
For further reference, we will assume that the compartment OCID is stored in an environment variable:
```shell
export COMPARTMENT_OCID="<your-compartment-ocid>"
```
```python
import os
os.environ["COMPARTMENT_OCID"] = "<your-compartment-ocid>"
```
#### Qdrant Hybrid Cloud
Qdrant Hybrid Cloud running on Oracle Cloud helps you build a solution without sending your data to external services. Our documentation provides a step-by-step guide on how to [deploy Qdrant Hybrid Cloud on Oracle
Cloud](/documentation/hybrid-cloud/platform-deployment-options/#oracle-cloud-infrastructure).
Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well:
```shell
export QDRANT_URL="https://qdrant.example.com"
export QDRANT_API_KEY="your-api-key"
```
*Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/).
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="your-api-key"
export LANGCHAIN_PROJECT="your-project" # if not specified, defaults to "default"
```
Now you can get started:
```python
import os
os.environ["QDRANT_URL"] = "https://qdrant.example.com"
os.environ["QDRANT_API_KEY"] = "your-api-key"
```
Let's create the collection that will store the indexed documents. We will use the `qdrant-client` library, and our
collection will be named `oracle-cloud-website`. Our embedding model, `cohere.embed-english-v3.0`, produces embeddings
of size 1024, and we have to specify that when creating the collection.
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(
location=os.environ.get("QDRANT_URL"),
api_key=os.environ.get("QDRANT_API_KEY"),
)
client.create_collection(
collection_name="oracle-cloud-website",
vectors_config=models.VectorParams(
size=1024,
distance=models.Distance.COSINE,
),
)
```
### Indexing process
We have all the necessary tools set up, so let's start with the indexing process. We will use the Cohere Embedding
models to convert the text into vectors, and then store them in Qdrant. Langchain is integrated with OCI Generative AI
Service, so we can easily access the models.
Our dataset will be fairly simple, as it will consist of the questions and answers from the [Oracle Cloud Free Tier
FAQ page](https://www.oracle.com/cloud/free/faq/).
![Some examples of the Oracle Cloud FAQ](/documentation/examples/faq-oci-cohere-langchain/oracle-faq.png)
Questions and answers are presented in HTML format, but we don't want to manually extract the text and adapt it for
each subpage. Instead, we will use the `WebBaseLoader`, which simply loads the HTML content from a given URL and converts it
to text.
```python
from langchain_community.document_loaders.web_base import WebBaseLoader
loader = WebBaseLoader("https://www.oracle.com/cloud/free/faq/")
documents = loader.load()
```
Our `documents` variable is a list with just a single element: the text of the whole page. We need to split it into
meaningful parts, so we will use the `RecursiveCharacterTextSplitter` component. It will try to keep all paragraphs (and
then sentences, and then words) together as long as possible, as those would generically seem to be the strongest
semantically related pieces of text. The chunk size and overlap are both parameters that can be adjusted to fit the
specific use case.
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=100)
split_documents = splitter.split_documents(documents)
```
Our documents are ready to be indexed, but first we need to convert them into vectors. Let's configure the embeddings so that
`cohere.embed-english-v3.0` is used. Not all regions support the Generative AI Service, so we need to specify a
region where the models are hosted. We will use `us-chicago-1`, but please check the
[documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#regions) for the most up-to-date
list of supported regions.
```python
from langchain_community.embeddings.oci_generative_ai import OCIGenAIEmbeddings
embeddings = OCIGenAIEmbeddings(
model_id="cohere.embed-english-v3.0",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id=os.environ.get("COMPARTMENT_OCID"),
)
```
Now we can embed the documents and store them in Qdrant. We will create an instance of `Qdrant` and add the split
documents to the collection.
```python
from langchain.vectorstores.qdrant import Qdrant
qdrant = Qdrant(
client=client,
collection_name="oracle-cloud-website",
embeddings=embeddings,
)
qdrant.add_documents(split_documents, batch_size=20)
```
Our documents should be now indexed and ready for searching. Let's move to the next step.
### Speaking to the website
The intended method of interaction with the website is through a chatbot. A Large Language Model, in our case [Cohere
Command](https://cohere.com/command), will answer users' questions based on the relevant documents that Qdrant
returns using the question as a query. Our LLM is also hosted on OCI, so we can access it similarly to the embedding
model:
```python
from langchain_community.llms.oci_generative_ai import OCIGenAI
llm = OCIGenAI(
model_id="cohere.command",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id=os.environ.get("COMPARTMENT_OCID"),
)
```
The connection to Qdrant can be established in the same way as during the indexing process. We can use it to create
a retrieval chain, which implements the question-answering process. The retrieval chain also requires an additional
chain that combines the retrieved documents before sending them to the LLM.
```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains.retrieval import create_retrieval_chain
from langchain import hub
retriever = qdrant.as_retriever()
combine_docs_chain = create_stuff_documents_chain(
llm=llm,
# Default prompt is loaded from the hub, but we can also modify it
prompt=hub.pull("langchain-ai/retrieval-qa-chat"),
)
retrieval_qa_chain = create_retrieval_chain(
retriever=retriever,
combine_docs_chain=combine_docs_chain,
)
response = retrieval_qa_chain.invoke({"input": "What is the Oracle Cloud Free Tier?"})
```
The output of the `.invoke` method is a dictionary-like structure with the query and answer, but we can also access the
source documents used to generate the response. This might be useful for debugging or for further processing.
```python
{
'input': 'What is the Oracle Cloud Free Tier?',
'context': [
Document(
page_content='* Free Tier is generally available in regions where commercial Oracle Cloud Infrastructure service is available. See the data regions page for detailed service availability (the exact regions available for Free Tier may differ during the sign-up process). The US$300 cloud credit is available in',
metadata={
'language': 'en-US',
'source': 'https://www.oracle.com/cloud/free/faq/',
'title': "FAQ on Oracle's Cloud Free Tier",
'_id': 'c8cf98e0-4b88-4750-be42-4157495fed2c',
'_collection_name': 'oracle-cloud-website'
}
),
Document(
page_content='Oracle Cloud Free Tier allows you to sign up for an Oracle Cloud account which provides a number of Always Free services and a Free Trial with US$300 of free credit to use on all eligible Oracle Cloud Infrastructure services for up to 30 days. The Always Free services are available for an unlimited',
metadata={
'language': 'en-US',
'source': 'https://www.oracle.com/cloud/free/faq/',
'title': "FAQ on Oracle's Cloud Free Tier",
'_id': 'dc291430-ff7b-4181-944a-39f6e7a0de69',
'_collection_name': 'oracle-cloud-website'
}
),
Document(
page_content='Oracle Cloud Free Tier does not include SLAs. Community support through our forums is available to all customers. Customers using only Always Free resources are not eligible for Oracle Support. Limited support is available for Oracle Cloud Free Tier with Free Trial credits. After you use all of',
metadata={
'language': 'en-US',
'source': 'https://www.oracle.com/cloud/free/faq/',
'title': "FAQ on Oracle's Cloud Free Tier",
'_id': '9e831039-7ccc-47f7-9301-20dbddd2fc07',
'_collection_name': 'oracle-cloud-website'
}
),
Document(
page_content='looking to test things before moving to cloud, a student wanting to learn, or an academic developing curriculum in the cloud, Oracle Cloud Free Tier enables you to learn, explore, build and test for free.',
metadata={
'language': 'en-US',
'source': 'https://www.oracle.com/cloud/free/faq/',
'title': "FAQ on Oracle's Cloud Free Tier",
'_id': 'e2dc43e1-50ee-4678-8284-6df60a835cf5',
'_collection_name': 'oracle-cloud-website'
}
)
],
'answer': ' Oracle Cloud Free Tier is a subscription that gives you access to Always Free services and a Free Trial with $300 of credit that can be used on all eligible Oracle Cloud Infrastructure services for up to 30 days. \n\nThrough this Free Tier, you can learn, explore, build, and test for free. It is aimed at those who want to experiment with cloud services before making a commitment, as wellTheir use cases range from testing prior to cloud migration to learning and academic curriculum development. '
}
```
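For example, you can print just the generated answer together with the sources of the retrieved chunks:
```python
# Print the generated answer and the source URLs of the retrieved documents.
print(response["answer"])
for document in response["context"]:
    print(document.metadata["source"])
```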
#### Other experiments
Asking basic questions is just the beginning. What you want to avoid is hallucination, where the model generates
an answer that is not based on the actual content. The default LangChain prompt should already prevent this, but you
might still want to verify it. Let's ask a question that is not directly answered on the FAQ page:
```python
response = retrieval_qa_chain.invoke({
"input": "Is Oracle Generative AI Service included in the free tier?"
})
```
Output:
> Oracle Generative AI Services are not specifically mentioned as being available in the free tier. As per the text, the
> $300 free credit can be used on all eligible services for up to 30 days. To confirm if Oracle Generative AI Services
> are included in the free credit offer, it is best to check the official Oracle Cloud website or contact their support.
It seems that the Cohere Command model could not find the exact answer in the provided documents, but it tried to interpret
the context and provide a reasonable answer without making up information. This is a good sign that the model is
not hallucinating in this case.
## Wrapping up
This tutorial has shown how to integrate Cohere's language models with Qdrant to enable natural language search on your
website. We have used Langchain as an orchestrator, and everything was hosted on Oracle Cloud Infrastructure (OCI).
A real-world deployment would require integrating this mechanism into your organization's systems, but we have built a solid foundation
that can be developed further.
| documentation/examples/natural-language-search-oracle-cloud-infrastructure-cohere-langchain.md |
---
title: Authentication
weight: 30
---
# Authenticating to Qdrant Cloud
This page shows you how to use the Qdrant Cloud Console to create a custom API key for a cluster. You will learn how to connect to your cluster using the new API key.
## Create API keys
The API key is only shown once after creation. If you lose it, you will need to create a new one.
However, we recommend rotating the keys from time to time. To create additional API keys do the following.
1. Go to the [Cloud Dashboard](https://qdrant.to/cloud).
2. Select **Access Management** to display available API keys, or go to the **API Keys** section of the Cluster detail page.
3. Click **Create** and choose a cluster name from the dropdown menu.
> **Note:** You can create a key that provides access to multiple clusters. Select desired clusters in the dropdown box.
4. Click **OK** and retrieve your API key.
## Test cluster access
After creation, you will receive a code snippet to access your cluster. Your generated request should look very similar to this one:
```bash
curl \
-X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \
--header 'api-key: <paste-your-api-key-here>'
```
Open Terminal and run the request. You should get a response that looks like this:
```bash
{"title":"qdrant - vector search engine","version":"1.8.1"}
```
> **Note:** You need to include the API key in the request header for every
> request over REST or gRPC.
## Authenticate via SDK
Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application.
Our official Qdrant clients for Python, TypeScript, Go, Rust, .NET and Java all support the API key parameter.
```bash
curl \
-X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \
--header 'api-key: <provide-your-own-key>'
# Alternatively, you can use the `Authorization` header with the `Bearer` prefix
curl \
-X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \
--header 'Authorization: Bearer <provide-your-own-key>'
```
```python
from qdrant_client import QdrantClient
qdrant_client = QdrantClient(
"xyz-example.eu-central.aws.cloud.qdrant.io",
api_key="<paste-your-api-key-here>",
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({
host: "xyz-example.eu-central.aws.cloud.qdrant.io",
apiKey: "<paste-your-api-key-here>",
});
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("https://xyz-example.eu-central.aws.cloud.qdrant.io:6334")
.api_key("<paste-your-api-key-here>")
.build()?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(
QdrantGrpcClient.newBuilder(
"xyz-example.eu-central.aws.cloud.qdrant.io",
6334,
true)
.withApiKey("<paste-your-api-key-here>")
.build());
```
```csharp
using Qdrant.Client;
var client = new QdrantClient(
host: "xyz-example.eu-central.aws.cloud.qdrant.io",
https: true,
apiKey: "<paste-your-api-key-here>"
);
```
```go
import "github.com/qdrant/go-client/qdrant"
client, err := qdrant.NewClient(&qdrant.Config{
Host: "xyz-example.eu-central.aws.cloud.qdrant.io",
Port: 6334,
APIKey: "<paste-your-api-key-here>",
UseTLS: true,
})
```
| documentation/cloud/authentication.md |
---
title: Account Setup
weight: 10
aliases:
---
# Setting up a Qdrant Cloud Account
## Registration
There are different ways to register for a Qdrant Cloud account:
* With an email address and passwordless login via email
* With a Google account
* With a GitHub account
* By connecting an enterprise SSO solution
Every account is tied to an email address. You can invite additional users to your account and manage their permissions.
### Email registration
1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google or GitHub credentials.
## Inviting additional users to an account
You can invite additional users to your account, and manage their permissions on the *Account Management* page in the Qdrant Cloud Console.
![Invitations](/documentation/cloud/invitations.png)
Invited users will receive an email with an invitation link to join Qdrant Cloud. Once they have signed up, they can accept the invitation from the Overview page.
![Accepting invitation](/documentation/cloud/accept-invitation.png)
## Switching between accounts
If you have access to multiple accounts, you can switch between accounts with the account switcher on the top menu bar of the Qdrant Cloud Console.
![Switching between accounts](/documentation/cloud/account-switcher.png)
## Account settings
You can configure your account settings in the Qdrant Cloud Console, by clicking on your account picture in the top right corner, and selecting *Profile*.
The following functionality is available.
### Renaming an account
If you use multiple accounts for different purposes, it is a good idea to give them descriptive names, for example *Development*, *Production*, *Testing*. You can also choose which account should be the default one, when you log in.
![Account management](/documentation/cloud/account-management.png)
### Deleting an account
When you delete an account, all database clusters and associated data will be deleted.
| documentation/cloud/qdrant-cloud-setup.md |
---
title: Create a Cluster
weight: 20
---
# Creating a Qdrant Cloud Cluster
Qdrant Cloud offers two types of clusters: **Free** and **Standard**.
## Free Clusters
Free tier clusters are perfect for prototyping and testing. You don't need a credit card to join.
A free tier cluster only includes 1 single node with the following resources:
| Resource | Value |
|------------|-------|
| RAM | 1 GB |
| vCPU | 0.5 |
| Disk space | 4 GB |
| Nodes | 1 |
This configuration supports serving about 1 M vectors of 768 dimensions. To calculate your needs, refer to our documentation on [Capacity and sizing](/documentation/cloud/capacity-sizing/).
The choice of cloud providers and regions is limited.
It includes:
- Standard Support
- Basic monitoring
- Basic log access
- Basic alerting
- Version upgrades with downtime
- Only manual snapshots and restores via API
- No dedicated resources
If unused, free tier clusters are automatically suspended after 1 week, and deleted after 4 weeks of inactivity if not reactivated.
You can always upgrade to a standard cluster with more resources and features.
## Standard Clusters
On top of the Free cluster features, Standard clusters offer:
- Response time and uptime SLAs
- Dedicated resources
- Backup and disaster recovery
- Multi-node clusters for high availability
- Horizontal and vertical scaling
- Monitoring and log management
- Zero-downtime upgrades for multi-node clusters with replication
You have a broad choice of regions on AWS, Azure and Google Cloud.
For payment information see [**Pricing and Payments**](/documentation/cloud/pricing-payments/).
## Create a cluster
This page shows you how to use the Qdrant Cloud Console to create a custom Qdrant Cloud cluster.
> **Prerequisite:** Please make sure you have provided billing information before creating a custom cluster.
1. Start in the **Clusters** section of the [Cloud Dashboard](https://cloud.qdrant.io/).
1. Select **Clusters** and then click **+ Create**.
1. In the **Create a cluster** screen select **Free** or **Standard**
Most of the remaining configuration options are only available for standard clusters.
1. Select a provider. Currently, you can deploy to:
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Microsoft Azure
- Your own [Hybrid Cloud](/documentation/hybrid-cloud/) Infrastructure
1. Choose your data center region or Hybrid Cloud environment.
1. Configure RAM for each node.
> For more information, see our [**Capacity and Sizing**](/documentation/cloud/capacity-sizing/) guidance.
1. Choose the number of vCPUs per node. If you add more
RAM, the menu provides different options for vCPUs.
1. Select the number of nodes you want the cluster to be deployed on.
> Each node automatically comes with an attached disk that has enough space to store data with Qdrant's default collection configuration.
1. Select additional disk space for your deployment.
> Depending on your collection configuration, you may need more disk space relative to RAM, for example if you configure `on_disk: true` and only use RAM for caching.
1. Review your cluster configuration and pricing.
1. When you're ready, select **Create**. It takes some time to provision your cluster.
Once provisioned, you can access your cluster on ports 443 and 6333 (REST) and 6334 (gRPC).
![Cluster configured in the UI](/docs/cloud/create-cluster-test.png)
You should now see the new cluster in the **Clusters** menu.
## Next steps
You will need to connect to your new Qdrant Cloud cluster. Follow [**Authentication**](/documentation/cloud/authentication/) to create one or more API keys.
You can also scale your cluster both horizontally and vertically. Read more in [**Cluster Scaling**](/documentation/cloud/cluster-scaling/).
If a new Qdrant version becomes available, you can upgrade your cluster. See [**Cluster Upgrades**](/documentation/cloud/cluster-upgrades/).
For more information on creating and restoring backups of a cluster, see [**Backups**](/documentation/cloud/backups/).
| documentation/cloud/create-cluster.md |
---
title: Cloud Support
weight: 99
aliases:
---
# Qdrant Cloud Support and Troubleshooting
All Qdrant Cloud users are welcome to join our [Discord community](https://qdrant.to/discord/). Our Support Engineers are available to help you anytime.
![Discord](/documentation/cloud/discord.png)
Paid customers can also contact support directly. Links to the support portal are available in the Qdrant Cloud Console.
![Support Portal](/documentation/cloud/support-portal.png)
| documentation/cloud/support.md |
---
title: Backup Clusters
weight: 61
---
# Backing up Qdrant Cloud Clusters
Qdrant organizes cloud instances as clusters. On occasion, you may need to
restore your cluster because of application or system failure.
You may already have a source of truth for your data in a regular database. If you
have a problem, you could reindex the data into your Qdrant vector search cluster.
However, this process can take time. For projects where high availability is critical, we
recommend replication. It guarantees proper cluster functionality as long as
at least one replica is running.
For other use-cases such as disaster recovery, you can set up automatic or
self-service backups.
## Prerequisites
You can back up your Qdrant clusters through the Qdrant Cloud
Dashboard at https://cloud.qdrant.io. This section assumes that you've already
set up your cluster, as described in the following sections:
- [Create a cluster](/documentation/cloud/create-cluster/)
- Set up [Authentication](/documentation/cloud/authentication/)
- Configure one or more [Collections](/documentation/concepts/collections/)
## Automatic backups
You can set up automatic backups of your clusters with our Cloud UI. With the
procedures listed in this page, you can set up
snapshots on a daily/weekly/monthly basis. You can keep as many snapshots as you
need. You can restore a cluster from the snapshot of your choice.
> Note: When you restore a snapshot, consider the following:
> - The affected cluster is not available while a snapshot is being restored.
> - If you changed the cluster setup after the copy was created, the cluster
resets to the previous configuration.
> - The previous configuration includes:
> - CPU
> - Memory
> - Node count
> - Qdrant version
### Configure a backup
After you have taken the prerequisite steps, you can configure a backup with the
[Qdrant Cloud Dashboard](https://cloud.qdrant.io). To do so, take these steps:
1. Sign in to the dashboard
1. Select Clusters.
1. Select the cluster that you want to back up.
![Select a cluster](/documentation/cloud/select-cluster.png)
1. Find and select the **Backups** tab.
1. Now you can set up a backup schedule.
**Days of Retention** is the number of days after which a backup snapshot is
deleted.
1. Alternatively, you can select **Backup now** to take an immediate snapshot.
![Configure a cluster backup](/documentation/cloud/backup-schedule.png)
### Restore a backup
If you have a backup, it appears in the list of **Available Backups**. You can
choose to restore or delete the backups of your choice.
![Restore or delete a cluster backup](/documentation/cloud/restore-delete.png)
<!-- I think we should move this to the Snapshot page, but I'll do it later -->
## Backups with a snapshot
Qdrant also offers a snapshot API which allows you to create a snapshot
of a specific collection or your entire cluster. For more information, see our
[snapshot documentation](/documentation/concepts/snapshots/).
Here is how you can take a snapshot and recover a collection (a Python sketch follows these steps):
1. Take a snapshot:
- For a single node cluster, call the snapshot endpoint on the exposed URL.
   - For a multi-node cluster, call the snapshot endpoint on each node that hosts the collection.
     Specifically, prepend `node-{num}-` to your cluster URL,
     then call the [snapshot endpoint](../../concepts/snapshots/#create-snapshot) on the individual hosts, starting with node 0.
- In the response, you'll see the name of the snapshot.
2. Delete and recreate the collection.
3. Recover the snapshot:
- Call the [recover endpoint](../../concepts/snapshots/#recover-in-cluster-deployment). Set a location which points to the snapshot file (`file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}`) for each host.
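Below is a minimal Python sketch of these steps for a single-node cluster, using the `qdrant-client` library. The cluster URL, API key, and collection name are placeholders:
```python
from qdrant_client import QdrantClient

client = QdrantClient(
    url="https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",  # placeholder cluster URL
    api_key="<paste-your-api-key-here>",
)

# 1. Take a snapshot and remember its name.
snapshot_info = client.create_snapshot(collection_name="my_collection")
print(snapshot_info.name)

# 2. Delete and recreate the collection here, if needed (configuration omitted).

# 3. Recover the snapshot from the file stored on the node.
client.recover_snapshot(
    collection_name="my_collection",
    location=f"file:///qdrant/snapshots/my_collection/{snapshot_info.name}",
)
```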
## Backup considerations
Backups are incremental. For example, if you have two backups, backup number 2
contains only the data that changed since backup number 1. This reduces the
total cost of your backups.
You can create multiple backup schedules.
When you restore a snapshot, any changes made after the date of the snapshot
are lost.
| documentation/cloud/backups.md |
---
title: Configure Size & Capacity
weight: 40
aliases:
- capacity
---
# Configuring Qdrant Cloud Cluster Capacity and Size
We have been asked a lot about the optimal cluster configuration to serve a number of vectors.
The only right answer is “It depends”.
It depends on a number of factors and options you can choose for your collections.
## Basic configuration
If you need to keep all vectors in memory for maximum performance, a very rough formula for estimating the needed memory size looks like this:
```text
memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5
```
Extra 50% is needed for metadata (indexes, point versions, etc.) as well as for temporary segments constructed during the optimization process.
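For example, here is a quick back-of-the-envelope calculation for 1 million vectors of 768 dimensions (the numbers are illustrative only):
```python
number_of_vectors = 1_000_000
vector_dimension = 768

# number_of_vectors * vector_dimension * 4 bytes * 1.5
memory_size = number_of_vectors * vector_dimension * 4 * 1.5
print(f"{memory_size / 10**9:.1f} GB")  # ~4.6 GB
```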
If you need to store payloads along with the vectors, it is recommended to keep them on disk, and only keep [indexed fields](../../concepts/indexing/#payload-index) in RAM.
Read more about the payload storage in the [Storage](../../concepts/storage/#payload-storage) section.
## Storage focused configuration
If your priority is to serve a large amount of vectors with average search latency, it is recommended to configure [mmap storage](../../concepts/storage/#configuring-memmap-storage).
In this case vectors will be stored on disk in memory-mapped files, and only the most frequently used vectors will be kept in RAM.
The amount of available RAM will significantly affect the performance of the search.
As a rule of thumb, if you keep 2 times fewer vectors in RAM, the search latency will be roughly 2 times higher.
The speed of disks is also important. [Let us know](mailto:cloud@qdrant.io) if you have special requirements for a high-volume search.
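As a sketch of what this could look like with the Python client (the collection name and vector size are placeholders), memory-mapped vector storage can be enabled when creating a collection:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="https://your-cluster-url:6333", api_key="<your-api-key>")

client.create_collection(
    collection_name="my_collection",  # placeholder name
    vectors_config=models.VectorParams(
        size=768,  # placeholder dimensionality
        distance=models.Distance.COSINE,
        on_disk=True,  # keep vectors in memory-mapped files on disk
    ),
)
```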
## Sub-groups oriented configuration
If your use case assumes that the vectors are split into multiple collections or sub-groups based on payload values,
it is recommended to configure memory-map storage.
For example, you may serve search for multiple users, each of whom has a subset of vectors which they use independently.
In this scenario only the active subset of vectors will be kept in RAM, which allows
fast search for the most active and recent users.
In this case you can estimate required memory size as follows:
```text
memory_size = number_of_active_vectors * vector_dimension * 4 bytes * 1.5
```
## Disk space
Clusters that support vector search require significant disk space. If you're
running low on disk space in your cluster, you can use the UI at
[cloud.qdrant.io](https://cloud.qdrant.io/) to **Scale Up** your cluster.
<aside role="status">If you use the Qdrant UI to increase the disk space in your cluster, you
cannot decrease that allocation later.</aside>
If you're running low on disk space, consider the following advantages of scaling it up:
- Larger Datasets: Supports larger datasets. With vector search,
larger datasets can improve the relevance and quality of search results.
- Improved Indexing: Supports the use of indexing strategies such as
HNSW (Hierarchical Navigable Small World).
- Caching: Improves speed when you cache frequently accessed data on disk.
- Backups and Redundancy: Allows more frequent backups. Perhaps the most important advantage.
| documentation/cloud/capacity-sizing.md |
---
title: Scale Clusters
weight: 50
---
# Scaling Qdrant Cloud Clusters
The amount of data is always growing and at some point you might need to upgrade or downgrade the capacity of your cluster.
![Cluster Scaling](/documentation/cloud/cluster-scaling.png)
There are different options for how it can be done.
## Vertical scaling
Vertical scaling is the process of increasing the capacity of a cluster by adding or removing CPU, storage and memory resources on each database node.
You can start with a minimal cluster configuration of 2GB of RAM and resize it up to 64GB of RAM (or even more if desired) over time, step by step, with the growing amount of data in your application. If your cluster consists of several nodes, each node will need to be scaled to the same size. Please note that vertical cluster scaling requires a short downtime period to restart your cluster. To avoid downtime, you can make use of data replication, which can be configured on the collection level. Vertical scaling can be initiated on the cluster detail page via the **Scale** button.
If you want to scale your cluster down, the new, smaller memory size must still be sufficient to store all the data in the cluster. Otherwise, the database cluster could run out of memory and crash. Therefore, the new memory size must be at least as large as the current memory usage of the database cluster, including a bit of buffer. Qdrant Cloud will automatically prevent you from scaling down the Qdrant database cluster to a memory size that is too small.
Note that it is not possible to scale down the disk space of the cluster due to technical limitations of the underlying cloud providers.
## Horizontal scaling
Vertical scaling can be an effective way to improve the performance of a cluster and extend the capacity, but it has some limitations. The main disadvantage of vertical scaling is that there are limits to how much a cluster can be expanded. At some point, adding more resources to a cluster can become impractical or cost-prohibitive.
In such cases, horizontal scaling may be a more effective solution.
Horizontal scaling, also known as horizontal expansion, is the process of increasing the capacity of a cluster by adding more nodes and distributing the load and data among them. Horizontal scaling at Qdrant starts at the collection level. You have to choose the number of shards you want to distribute your collection across when creating the collection (see the sketch below). Please refer to the [sharding documentation](../../guides/distributed_deployment/#sharding) section for details.
After that, you can configure or change the number of Qdrant database nodes within a cluster during cluster creation, or on the cluster detail page via the **Scale** button.
Important: The number of shards is the maximum number of nodes you can add to your cluster. In the beginning, all the shards can reside on one node. With a growing amount of data, you can add nodes to your cluster and move shards to dedicated nodes using the [cluster setup API](../../guides/distributed_deployment/#cluster-scaling).
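For illustration, here is a sketch of creating a collection with multiple shards using the Python client (the collection name, vector size, and shard count are placeholders):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="https://your-cluster-url:6333", api_key="<your-api-key>")

client.create_collection(
    collection_name="my_collection",  # placeholder name
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    shard_number=4,  # allows the collection to be spread across up to 4 nodes later
)
```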
Note, that it is currently not possible to horizontally scale down the cluster in the Qdrant Cloud UI. If you require a horizontal scale down, please open a support ticket.
We will be glad to consult you on an optimal strategy for scaling.
[Let us know](mailto:cloud@qdrant.io) your needs and decide together on a proper solution.
| documentation/cloud/cluster-scaling.md |
---
title: Monitor Clusters
weight: 55
---
# Monitoring Qdrant Cloud Clusters
## Telemetry
Qdrant Cloud provides you with a set of metrics to monitor the health of your database cluster. You can access these metrics in the Qdrant Cloud Console in the **Metrics** and **Request** sections of the cluster details page.
## Logs
Logs of the database cluster are available in the Qdrant Cloud Console in the **Logs** section of the cluster details page.
## Alerts
You will receive automatic alerts via email before your cluster reaches the currently configured memory or storage limits, including recommendations for scaling your cluster.
| documentation/cloud/cluster-monitoring.md |
---
title: Billing & Payments
weight: 65
aliases:
- aws-marketplace
- gcp-marketplace
- azure-marketplace
---
# Qdrant Cloud Billing & Payments
Qdrant database clusters in Qdrant Cloud are priced based on CPU, memory, and disk storage usage. To get a clearer idea for the pricing structure, based on the amounts of vectors you want to store, please use our [Pricing Calculator](https://cloud.qdrant.io/calculator).
## Billing
You can pay for your Qdrant Cloud database clusters either with a credit card or through an AWS, GCP, or Azure Marketplace subscription.
Your payment method is charged at the beginning of each month for the previous month's usage. There is no difference in pricing between the different payment methods.
If you choose to pay through a marketplace, the Qdrant Cloud usage costs are added as usage units to your existing billing for your cloud provider services. A detailed breakdown of your usage is available in the Qdrant Cloud Console.
Note: Even if you pay using a marketplace subscription, your database clusters will still be deployed into Qdrant-owned infrastructure. The setup and management of Qdrant database clusters will also still be done via the Qdrant Cloud Console UI.
If you wish to deploy Qdrant database clusters into your own environment from Qdrant Cloud then we recommend our [Hybrid Cloud](/documentation/hybrid-cloud/) solution.
![Payment Options](/documentation/cloud/payment-options.png)
### Credit Card
Credit card payments are processed through Stripe. To set up a credit card, go to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/), select **Stripe** as the payment method, and enter your credit card details.
### AWS Marketplace
Our [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg) listing streamlines access to Qdrant for users who rely on Amazon Web Services for hosting and application development.
To subscribe:
1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/)
2. Select **AWS Marketplace** as the payment method. You will be redirected to the AWS Marketplace listing for Qdrant.
3. Click the bright orange button - **View purchase options**.
4. On the next screen, under Purchase, click **Subscribe**.
5. Up top, on the green banner, click **Set up your account**.
You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters.
### GCP Marketplace
Our [GCP Marketplace](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant) listing streamlines access to Qdrant for users who rely on the Google Cloud Platform for hosting and application development.
To subscribe:
1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/)
2. Select **GCP Marketplace** as the payment method. You will be redirected to the GCP Marketplace listing for Qdrant.
3. Select **Subscribe**. (If you have already subscribed, select **Manage on Provider**.)
4. On the next screen, choose options as required, and select **Subscribe**.
5. On the pop-up window that appears, select **Sign up with Qdrant**.
You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters.
### Azure Marketplace
Our [Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/qdrantsolutionsgmbh1698769709989.qdrant-db/selectionMode~/false/resourceGroupId//resourceGroupLocation//dontDiscardJourney~/false/selectedMenuId/home/launchingContext~/%7B%22galleryItemId%22%3A%22qdrantsolutionsgmbh1698769709989.qdrant-dbqdrant_cloud_unit%22%2C%22source%22%3A%5B%22GalleryFeaturedMenuItemPart%22%2C%22VirtualizedTileDetails%22%5D%2C%22menuItemId%22%3A%22home%22%2C%22subMenuItemId%22%3A%22Search%20results%22%2C%22telemetryId%22%3A%221df5537b-8b29-4200-80ce-0cd38c7e0e56%22%7D/searchTelemetryId/6b44fb90-7b9c-4286-aad8-59f88f3cc2ff) listing streamlines access to Qdrant for users who rely on Microsoft Azure for hosting and application development.
To subscribe:
1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/)
2. Select **Azure Marketplace** as the payment method. You will be redirected to the Azure Marketplace listing for Qdrant.
3. Select **Subscribe**.
4. On the next screen, choose options as required, and select **Review + Subscribe**.
5. After reviewing all settings, select **Subscribe**.
6. Once the SaaS subscription is created, select **Configure account now**.
You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters.
| documentation/cloud/pricing-payments.md |
---
title: Upgrade Clusters
weight: 55
---
# Upgrading Qdrant Cloud Clusters
As soon as a new Qdrant version is available, Qdrant Cloud will show you an upgrade notification in the Cluster list and on the Cluster details page.
To upgrade to a new version, go to the Cluster details page, choose the new version from the version dropdown and click **Upgrade**.
![Cluster Upgrades](/documentation/cloud/cluster-upgrades.png)
If you have a multi-node cluster and if your collections have a replication factor of at least **2**, the upgrade process will be zero-downtime and done in a rolling fashion. You will be able to use your database cluster normally.
If you have a single-node cluster or a collection with a replication factor of **1**, the upgrade process will require a short downtime period to restart your cluster with the new version.
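If zero-downtime upgrades matter for your workload, make sure your collections on a multi-node cluster are created with a replication factor of at least 2. As a minimal sketch, assuming the Python client and a placeholder cluster URL and API key:

```python
from qdrant_client import QdrantClient, models

# Placeholder endpoint and key for a Qdrant Cloud cluster.
client = QdrantClient(
    url="https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
    api_key="<your-api-key>",
)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    replication_factor=2,  # keeps at least one replica serving during rolling upgrades
)
```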
| documentation/cloud/cluster-upgrades.md |
---
title: Managed Cloud
weight: 8
aliases:
- /documentation/overview/qdrant-alternatives/documentation/cloud/
---
# About Qdrant Managed Cloud
Qdrant Managed Cloud is our SaaS (software-as-a-service) solution, providing managed Qdrant database clusters on the cloud. We provide you the same fast and reliable similarity search engine, but without the need to maintain your own infrastructure.
Transitioning to the Managed Cloud version of Qdrant does not change how you interact with the service. All you need is a [Qdrant Cloud account](https://qdrant.to/cloud/) and an [API key](/documentation/cloud/authentication/) for each request.
You can also attach your own infrastructure as a Hybrid Cloud Environment. For details, see our [Hybrid Cloud](/documentation/hybrid-cloud/) documentation.
## Cluster configuration
Each database cluster comes pre-configured with the following tools, features, and support services:
- Allows the creation of highly available clusters with automatic failover.
- Supports upgrades to later versions of Qdrant as they are released.
- Upgrades are zero-downtime on highly available clusters.
- Includes monitoring and logging to observe the health of each cluster.
- Horizontally and vertically scalable.
- Available natively on AWS, GCP, and Azure.
- Available on your own infrastructure and other providers if you use the Hybrid Cloud.
## Getting started with Qdrant Cloud
To get started with Qdrant Cloud:
1. [**Set up an account**](/documentation/cloud/qdrant-cloud-setup/)
2. [**Create a Qdrant cluster**](/documentation/cloud/create-cluster/)
| documentation/cloud/_index.md |
---
title: Storage
weight: 80
aliases:
- ../storage
---
# Storage
All data within one collection is divided into segments.
Each segment has its independent vector and payload storage as well as indexes.
Data stored in segments usually does not overlap.
However, storing the same point in different segments will not cause problems since the search contains a deduplication mechanism.
Each segment consists of vector and payload storages, vector and payload [indexes](../indexing/), and an id mapper, which stores the relationship between internal and external ids.
A segment can be `appendable` or `non-appendable` depending on the type of storage and index used.
You can freely add, delete and query data in the `appendable` segment.
With a `non-appendable` segment, you can only read and delete data.
The configuration of the segments in the collection can be different and independent of one another, but at least one `appendable` segment must be present in a collection.
## Vector storage
Depending on the requirements of the application, Qdrant can use one of several data storage options.
The choice is a trade-off between search speed and the amount of RAM used.
**In-memory storage** - Stores all vectors in RAM, has the highest speed since disk access is required only for persistence.
**Memmap storage** - Creates a virtual address space associated with the file on disk. [Wiki](https://en.wikipedia.org/wiki/Memory-mapped_file).
Mmapped files are not directly loaded into RAM. Instead, they use page cache to access the contents of the file.
This scheme allows flexible use of available memory. With sufficient RAM, it is almost as fast as in-memory storage.
### Configuring Memmap storage
There are two ways to configure the usage of memmap (also known as on-disk) storage:
- Set the `on_disk` option for the vectors in the collection create API:
*Available as of v1.2.0*
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine",
"on_disk": true
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(
size=768, distance=models.Distance.COSINE, on_disk=True
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
on_disk: true,
},
});
```
```rust
use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
"{collection_name}",
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.setOnDisk(true)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
"{collection_name}",
new VectorParams
{
Size = 768,
Distance = Distance.Cosine,
OnDisk = true
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
OnDisk: qdrant.PtrOf(true),
}),
})
```
This will create a collection with all vectors immediately stored in memmap storage.
This is the recommended way if your Qdrant instance operates with fast disks and you are working with large collections.
- Set the `memmap_threshold` option (deprecated). This option sets the threshold after which a segment will be converted to memmap storage.
There are two ways to do this:
1. You can set the threshold globally in the [configuration file](../../guides/configuration/). The parameter is called `memmap_threshold_kb`.
2. You can set the threshold for each collection separately during [creation](../collections/#create-collection) or [update](../collections/#update-collection-parameters).
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"optimizers_config": {
"memmap_threshold": 20000
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
optimizers_config: {
memmap_threshold: 20000,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.optimizers_config(OptimizersConfigDiffBuilder::default().memmap_threshold(20000)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setOptimizersConfig(
OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
OptimizersConfig: &qdrant.OptimizersConfigDiff{
MemmapThreshold: qdrant.PtrOf(uint64(20000)),
},
})
```
The rule of thumb to set the memmap threshold parameter is simple:
- If you have a balanced use scenario, set the memmap threshold to the same value as `indexing_threshold` (default is 20000). In this case, the optimizer will not make any extra runs and will optimize all thresholds at once.
- If you have a high write load and low RAM, set the memmap threshold lower than `indexing_threshold`, e.g. to 10000. In this case, the optimizer will convert the segments to memmap storage first and will only apply indexing after that, as sketched below.
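A minimal sketch of that second scenario with the Python client, using illustrative threshold values:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    # Convert segments to memmap storage well before the indexing threshold,
    # so unindexed data does not pile up in RAM under a high write load.
    optimizers_config=models.OptimizersConfigDiff(
        memmap_threshold=10000,
        indexing_threshold=20000,
    ),
)
```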
In addition, you can use memmap storage not only for vectors, but also for HNSW index.
To enable this, you need to set the `hnsw_config.on_disk` parameter to `true` during collection [creation](../collections/#create-a-collection) or [updating](../collections/#update-collection-parameters).
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"optimizers_config": {
"memmap_threshold": 20000
},
"hnsw_config": {
"on_disk": true
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
hnsw_config=models.HnswConfigDiff(on_disk=True),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
optimizers_config: {
memmap_threshold: 20000,
},
hnsw_config: {
on_disk: true,
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, OptimizersConfigDiffBuilder,
VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
.optimizers_config(OptimizersConfigDiffBuilder::default().memmap_threshold(20000))
.hnsw_config(HnswConfigDiffBuilder::default().on_disk(true)),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.HnswConfigDiff;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(768)
.setDistance(Distance.Cosine)
.build())
.build())
.setOptimizersConfig(
OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
.setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
hnswConfig: new HnswConfigDiff { OnDisk = true }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 768,
Distance: qdrant.Distance_Cosine,
}),
OptimizersConfig: &qdrant.OptimizersConfigDiff{
MemmapThreshold: qdrant.PtrOf(uint64(20000)),
},
HnswConfig: &qdrant.HnswConfigDiff{
OnDisk: qdrant.PtrOf(true),
},
})
```
## Payload storage
Qdrant supports two types of payload storages: InMemory and OnDisk.
InMemory payload storage is organized in the same way as in-memory vectors.
The payload data is loaded into RAM at service startup while disk and [RocksDB](https://rocksdb.org/) are used for persistence only.
This type of storage works quite fast, but it may require a lot of space to keep all the data in RAM, especially if the payload has large values attached - abstracts of text or even images.
In the case of large payload values, it might be better to use OnDisk payload storage.
This type of storage will read and write payload directly to RocksDB, so it won't require any significant amount of RAM to store.
The downside, however, is the access latency.
If you need to query vectors with some payload-based conditions - checking values stored on disk might take too much time.
In this scenario, we recommend creating a payload index for each field used in filtering conditions to avoid disk access.
Once you create the field index, Qdrant will preserve all values of the indexed field in RAM regardless of the payload storage type.
You can specify the desired type of payload storage with [configuration file](../../guides/configuration/) or with collection parameter `on_disk_payload` during [creation](../collections/#create-collection) of the collection.
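For illustration, a minimal sketch with the Python client (collection and field names are placeholders) that enables on-disk payload storage at creation time and keeps filtering fast by indexing the field used in conditions:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    on_disk_payload=True,  # keep payload in RocksDB instead of RAM
)

# Values of indexed fields are kept in RAM, so filters on them avoid disk access.
client.create_payload_index(
    collection_name="{collection_name}",
    field_name="city",
    field_schema=models.PayloadSchemaType.KEYWORD,
)
```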
## Versioning
To ensure data integrity, Qdrant performs all data changes in two stages.
In the first step, the data is written to the write-ahead log (WAL), which orders all operations and assigns them a sequential number.
Once a change has been added to the WAL, it will not be lost even if a power loss occurs.
Then the changes go into the segments.
Each segment stores the last version of the change applied to it as well as the version of each individual point.
If the new change has a sequential number less than the current version of the point, the updater will ignore the change.
This mechanism allows Qdrant to safely and efficiently restore the storage from the WAL in case of an abnormal shutdown.
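As a conceptual sketch (not Qdrant's actual implementation), the version check that makes WAL replay idempotent can be pictured like this:

```python
# Hypothetical illustration: a segment ignores operations whose sequence
# number is not newer than the version it has already applied for a point.
from dataclasses import dataclass, field


@dataclass
class Segment:
    point_versions: dict = field(default_factory=dict)  # point id -> last applied op number

    def apply(self, op_num: int, point_id: int, data: dict) -> bool:
        if self.point_versions.get(point_id, -1) >= op_num:
            return False  # already applied, safe to skip during replay
        self.point_versions[point_id] = op_num
        # ... persist the actual change here ...
        return True


segment = Segment()
segment.apply(5, point_id=1, data={"city": "London"})  # applied
segment.apply(3, point_id=1, data={"city": "Berlin"})  # ignored on replay
```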
| documentation/concepts/storage.md |
---
title: Explore
weight: 55
aliases:
- ../explore
---
# Explore the data
After mastering the concepts in [search](../search/), you can start exploring your data in other ways. Qdrant provides a stack of APIs that allow you to find similar vectors in a different fashion, as well as to find the most dissimilar ones. These are useful tools for recommendation systems, data exploration, and data cleaning.
## Recommendation API
In addition to the regular search, Qdrant also allows you to search based on multiple positive and negative examples. The API is called ***recommend***, and the examples can be point IDs, so that you can leverage the already encoded objects; and, as of v1.6, you can also use raw vectors as input, so that you can create your vectors on the fly without uploading them as points.
REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/recommend-points)
```http
POST /collections/{collection_name}/points/query
{
"query": {
"recommend": {
"positive": [100, 231],
"negative": [718, [0.2, 0.3, 0.4, 0.5]],
"strategy": "average_vector"
}
},
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=models.RecommendQuery(
recommend=models.RecommendInput(
positive=[100, 231],
negative=[718, [0.2, 0.3, 0.4, 0.5]],
strategy=models.RecommendStrategy.AVERAGE_VECTOR,
)
),
query_filter=models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(
value="London",
),
)
]
),
limit=3,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: {
recommend: {
positive: [100, 231],
negative: [718, [0.2, 0.3, 0.4, 0.5]],
strategy: "average_vector"
}
},
filter: {
must: [
{
key: "city",
match: {
value: "London",
},
},
],
},
limit: 3
});
```
```rust
use qdrant_client::qdrant::{
Condition, Filter, QueryPointsBuilder, RecommendInputBuilder, RecommendStrategy,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(
RecommendInputBuilder::default()
.add_positive(100)
.add_positive(231)
.add_positive(vec![0.2, 0.3, 0.4, 0.5])
.add_negative(718)
.strategy(RecommendStrategy::AverageVector)
.build(),
)
.limit(3)
.filter(Filter::must([Condition::matches(
"city",
"London".to_string(),
)])),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.RecommendInput;
import io.qdrant.client.grpc.Points.RecommendStrategy;
import io.qdrant.client.grpc.Points.Filter;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.VectorInputFactory.vectorInput;
import static io.qdrant.client.QueryFactory.recommend;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(recommend(RecommendInput.newBuilder()
.addAllPositive(List.of(vectorInput(100), vectorInput(231)))
.addAllNegative(List.of(vectorInput(718), vectorInput(0.2f, 0.3f, 0.4f, 0.5f)))
.setStrategy(RecommendStrategy.AverageVector)
.build()))
.setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")))
.setLimit(3)
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new RecommendInput {
Positive = { 100, 231 },
Negative = { 718 }
},
filter: MatchKeyword("city", "London"),
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
Positive: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
},
Negative: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
},
}),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
},
})
```
Example result of this API would be
```json
{
"result": [
{ "id": 10, "score": 0.81 },
{ "id": 14, "score": 0.75 },
{ "id": 11, "score": 0.73 }
],
"status": "ok",
"time": 0.001
}
```
The algorithm used to get the recommendations is selected from the available `strategy` options. Each of them has its own strengths and weaknesses, so experiment and choose the one that works best for your case.
### Average vector strategy
The default and first strategy added to Qdrant is called `average_vector`. It preprocesses the input examples to create a single vector that is used for the search. Since the preprocessing step happens very fast, the performance of this strategy is on par with regular search. The intuition behind this kind of recommendation is that each vector component represents an independent feature of the data, so, by averaging the examples, we should get a good recommendation.
The way to produce the searching vector is by first averaging all the positive and negative examples separately, and then combining them into a single vector using the following formula:
```rust
avg_positive + avg_positive - avg_negative
```
In the case of not having any negative examples, the search vector will simply be equal to `avg_positive`.
This is the default strategy that's going to be set implicitly, but you can explicitly define it by setting `"strategy": "average_vector"` in the recommendation request.
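As a rough numpy sketch of how such a search vector could be assembled (the example vectors below are made up for illustration):

```python
import numpy as np

# Hypothetical example embeddings; in Qdrant these come from the referenced points.
positives = np.array([[0.2, 0.1, 0.9, 0.7], [0.3, 0.2, 0.8, 0.6]])
negatives = np.array([[0.9, 0.8, 0.1, 0.2]])

avg_positive = positives.mean(axis=0)
avg_negative = negatives.mean(axis=0)

# Same formula as above; with no negatives, the search vector is just avg_positive.
search_vector = avg_positive + avg_positive - avg_negative
```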
### Best score strategy
*Available as of v1.6.0*
A new strategy, introduced in v1.6, is called `best_score`. It is based on the idea that the best way to find similar vectors is to find the ones that are closer to a positive example, while avoiding the ones that are closer to a negative one.
The way it works is that each candidate is measured against every example, then we select the best positive and best negative scores. The final score is chosen with this step formula:
```rust
let score = if best_positive_score > best_negative_score {
best_positive_score
} else {
-(best_negative_score * best_negative_score)
};
```
<aside role="alert">
The performance of the <code>best_score</code> strategy is linearly impacted by the number of examples.
</aside>
Since we are computing similarities to every example at each step of the search, the performance of this strategy is linearly impacted by the number of examples. This means that the more examples you provide, the slower the search will be. However, this strategy can be very powerful and should be more embedding-agnostic.
<aside role="status">
Accuracy may be impacted with this strategy. To improve it, increase the <code>ef</code> search parameter to something above 32; this will already be much better than the default 16, e.g.: <code>"params": { "ef": 64 }</code>
</aside>
To use this algorithm, you need to set `"strategy": "best_score"` in the recommendation request.
#### Using only negative examples
A beneficial side-effect of `best_score` strategy is that you can use it with only negative examples. This will allow you to find the most dissimilar vectors to the ones you provide. This can be useful for finding outliers in your data, or for finding the most dissimilar vectors to a given one.
Combining negative-only examples with filtering can be a powerful tool for data exploration and cleaning.
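For example, a minimal sketch with the Python client (the id is a placeholder) that retrieves the points most dissimilar to a given example:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.query_points(
    collection_name="{collection_name}",
    query=models.RecommendQuery(
        recommend=models.RecommendInput(
            negative=[718],  # no positive examples are needed with best_score
            strategy=models.RecommendStrategy.BEST_SCORE,
        )
    ),
    limit=10,
)
```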
### Multiple vectors
*Available as of v0.10.0*
If the collection was created with multiple vectors, the name of the vector should be specified in the recommendation request:
```http
POST /collections/{collection_name}/points/query
{
"query": {
"recommend": {
"positive": [100, 231],
"negative": [718]
}
},
"using": "image",
"limit": 10
}
```
```python
client.query_points(
collection_name="{collection_name}",
query=models.RecommendQuery(
recommend=models.RecommendInput(
positive=[100, 231],
negative=[718],
)
),
using="image",
limit=10,
)
```
```typescript
client.query("{collection_name}", {
query: {
recommend: {
positive: [100, 231],
negative: [718],
}
},
using: "image",
limit: 10
});
```
```rust
use qdrant_client::qdrant::{QueryPointsBuilder, RecommendInputBuilder};
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(
RecommendInputBuilder::default()
.add_positive(100)
.add_positive(231)
.add_negative(718)
.build(),
)
.limit(10)
.using("image"),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.RecommendInput;
import static io.qdrant.client.VectorInputFactory.vectorInput;
import static io.qdrant.client.QueryFactory.recommend;
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(recommend(RecommendInput.newBuilder()
.addAllPositive(List.of(vectorInput(100), vectorInput(231)))
.addAllNegative(List.of(vectorInput(718)))
.build()))
.setUsing("image")
.setLimit(10)
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new RecommendInput {
Positive = { 100, 231 },
Negative = { 718 }
},
usingVector: "image",
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
Positive: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
},
Negative: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
},
}),
Using: qdrant.PtrOf("image"),
})
```
Parameter `using` specifies which stored vectors to use for the recommendation.
### Lookup vectors from another collection
*Available as of v0.11.6*
If you have collections with vectors of the same dimensionality,
and you want to look for recommendations in one collection based on the vectors of another collection,
you can use the `lookup_from` parameter.
It might be useful, for example, in an item-to-user recommendation scenario, where user and item embeddings, although having the same vector parameters (distance type and dimensionality), are usually stored in different collections.
```http
POST /collections/{collection_name}/points/query
{
"query": {
"recommend": {
"positive": [100, 231],
"negative": [718]
}
},
"limit": 10,
"lookup_from": {
"collection": "{external_collection_name}",
"vector": "{external_vector_name}"
}
}
```
```python
client.query_points(
collection_name="{collection_name}",
query=models.RecommendQuery(
recommend=models.RecommendInput(
positive=[100, 231],
negative=[718],
)
),
using="image",
limit=10,
lookup_from=models.LookupLocation(
collection="{external_collection_name}", vector="{external_vector_name}"
),
)
```
```typescript
client.query("{collection_name}", {
query: {
recommend: {
positive: [100, 231],
negative: [718],
}
},
using: "image",
limit: 10,
lookup_from: {
collection: "{external_collection_name}",
vector: "{external_vector_name}"
}
});
```
```rust
use qdrant_client::qdrant::{LookupLocationBuilder, QueryPointsBuilder, RecommendInputBuilder};
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(
RecommendInputBuilder::default()
.add_positive(100)
.add_positive(231)
.add_negative(718)
.build(),
)
.limit(10)
.using("image")
.lookup_from(
LookupLocationBuilder::new("{external_collection_name}")
.vector_name("{external_vector_name}"),
),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.grpc.Points.LookupLocation;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.RecommendInput;
import static io.qdrant.client.VectorInputFactory.vectorInput;
import static io.qdrant.client.QueryFactory.recommend;
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(recommend(RecommendInput.newBuilder()
.addAllPositive(List.of(vectorInput(100), vectorInput(231)))
.addAllNegative(List.of(vectorInput(718)))
.build()))
.setUsing("image")
.setLimit(10)
.setLookupFrom(
LookupLocation.newBuilder()
.setCollectionName("{external_collection_name}")
.setVectorName("{external_vector_name}")
.build())
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new RecommendInput {
Positive = { 100, 231 },
Negative = { 718 }
},
usingVector: "image",
limit: 10,
lookupFrom: new LookupLocation
{
CollectionName = "{external_collection_name}",
VectorName = "{external_vector_name}",
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
Positive: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
},
Negative: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
},
}),
Using: qdrant.PtrOf("image"),
LookupFrom: &qdrant.LookupLocation{
CollectionName: "{external_collection_name}",
VectorName: qdrant.PtrOf("{external_vector_name}"),
},
})
```
Vectors are retrieved from the external collection by the ids provided in the `positive` and `negative` lists.
These vectors are then used to perform the recommendation in the current collection, comparing against the "using" or default vector.
## Batch recommendation API
*Available as of v0.10.0*
Similar to the batch search API in terms of usage and advantages, it enables the batching of recommendation requests.
```http
POST /collections/{collection_name}/query/batch
{
"searches": [
{
"query": {
"recommend": {
"positive": [100, 231],
"negative": [718]
}
},
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
},
"limit": 10
},
{
"query": {
"recommend": {
"positive": [200, 67],
"negative": [300]
}
},
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
},
"limit": 10
}
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
filter_ = models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(
value="London",
),
)
]
)
recommend_queries = [
models.QueryRequest(
query=models.RecommendQuery(
recommend=models.RecommendInput(positive=[100, 231], negative=[718])
),
filter=filter_,
limit=3,
),
models.QueryRequest(
query=models.RecommendQuery(
recommend=models.RecommendInput(positive=[200, 67], negative=[300])
),
filter=filter_,
limit=3,
),
]
client.query_batch_points(
collection_name="{collection_name}", requests=recommend_queries
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
const filter = {
must: [
{
key: "city",
match: {
value: "London",
},
},
],
};
const searches = [
{
query: {
recommend: {
positive: [100, 231],
negative: [718]
}
},
filter,
limit: 3,
},
{
query: {
recommend: {
positive: [200, 67],
negative: [300]
}
},
filter,
limit: 3,
},
];
client.queryBatch("{collection_name}", {
searches,
});
```
```rust
use qdrant_client::qdrant::{
Condition, Filter, QueryBatchPointsBuilder, QueryPointsBuilder,
RecommendInputBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let filter = Filter::must([Condition::matches("city", "London".to_string())]);
let recommend_queries = vec![
QueryPointsBuilder::new("{collection_name}")
.query(
RecommendInputBuilder::default()
.add_positive(100)
.add_positive(231)
.add_negative(718)
.build(),
)
.filter(filter.clone())
.build(),
QueryPointsBuilder::new("{collection_name}")
.query(
RecommendInputBuilder::default()
.add_positive(200)
.add_positive(67)
.add_negative(300)
.build(),
)
.filter(filter)
.build(),
];
client
.query_batch(QueryBatchPointsBuilder::new(
"{collection_name}",
recommend_queries,
))
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.RecommendInput;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.VectorInputFactory.vectorInput;
import static io.qdrant.client.QueryFactory.recommend;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
Filter filter = Filter.newBuilder().addMust(matchKeyword("city", "London")).build();
List<QueryPoints> recommendQueries = List.of(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(recommend(
RecommendInput.newBuilder()
.addAllPositive(List.of(vectorInput(100), vectorInput(231)))
.addAllNegative(List.of(vectorInput(718)))
.build()))
.setFilter(filter)
.setLimit(3)
.build(),
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(recommend(
RecommendInput.newBuilder()
.addAllPositive(List.of(vectorInput(200), vectorInput(67)))
.addAllNegative(List.of(vectorInput(300)))
.build()))
.setFilter(filter)
.setLimit(3)
.build());
client.queryBatchAsync("{collection_name}", recommendQueries).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
var filter = MatchKeyword("city", "London");
await client.QueryBatchAsync(
collectionName: "{collection_name}",
queries:
[
new QueryPoints()
{
CollectionName = "{collection_name}",
Query = new RecommendInput {
Positive = { 100, 231 },
Negative = { 718 },
},
Limit = 3,
Filter = filter,
},
new QueryPoints()
{
CollectionName = "{collection_name}",
Query = new RecommendInput {
Positive = { 200, 67 },
Negative = { 300 },
},
Limit = 3,
Filter = filter,
}
]
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
filter := qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
}
client.QueryBatch(context.Background(), &qdrant.QueryBatchPoints{
CollectionName: "{collection_name}",
QueryPoints: []*qdrant.QueryPoints{
{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
Positive: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
},
Negative: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
},
},
),
Filter: &filter,
},
{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
Positive: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(200)),
qdrant.NewVectorInputID(qdrant.NewIDNum(67)),
},
Negative: []*qdrant.VectorInput{
qdrant.NewVectorInputID(qdrant.NewIDNum(300)),
},
},
),
Filter: &filter,
},
},
},
)
```
The result of this API contains one array per recommendation request.
```json
{
"result": [
[
{ "id": 10, "score": 0.81 },
{ "id": 14, "score": 0.75 },
{ "id": 11, "score": 0.73 }
],
[
{ "id": 1, "score": 0.92 },
{ "id": 3, "score": 0.89 },
{ "id": 9, "score": 0.75 }
]
],
"status": "ok",
"time": 0.001
}
```
## Discovery API
*Available as of v1.7*
REST API - Schema definition is available [here](https://api.qdrant.tech/api-reference/search/discover-points)
In this API, Qdrant introduces the concept of `context`, which is used for splitting the space. Context is a set of positive-negative pairs, and each pair divides the space into positive and negative zones. In that mode, the search operation prefers points based on how many positive zones they belong to (or how much they avoid negative zones).
The interface for providing context is similar to the recommendation API (ids or raw vectors). Still, in this case, they need to be provided in the form of positive-negative pairs.
Discovery API lets you do two new types of search:
- **Discovery search**: Uses the context (the pairs of positive-negative vectors) and a target to return the points more similar to the target, but constrained by the context.
- **Context search**: Using only the context pairs, get the points that live in the best zone, where loss is minimized
The way positive and negative examples should be arranged in the context pairs is completely up to you. So you can have the flexibility of trying out different permutation techniques based on your model and data.
<aside role="alert">The speed of search is linearly related to the amount of examples you provide in the query.</aside>
### Discovery search
This type of search works specially well for combining multimodal, vector-constrained searches. Qdrant already has extensive support for filters, which constrain the search based on its payload, but using discovery search, you can also constrain the vector space in which the search is performed.
![Discovery search](/docs/discovery-search.png)
The formula for the discovery score can be expressed as:
$$
\text{rank}(v^+, v^-) = \begin{cases}
1, &\quad s(v^+) \geq s(v^-) \\\\
-1, &\quad s(v^+) < s(v^-)
\end{cases}
$$
where $v^+$ represents a positive example, $v^-$ represents a negative example, and $s(v)$ is the similarity score between the point being evaluated and the vector $v$. The discovery score is then computed as:
$$
\text{discovery score} = \text{sigmoid}(s(v_t))+ \sum \text{rank}(v_i^+, v_i^-),
$$
where $s(v)$ is the similarity function, $v_t$ is the target vector, and again $v_i^+$ and $v_i^-$ are the positive and negative examples, respectively. The sigmoid function is used to normalize the score between 0 and 1 and the sum of ranks is used to penalize vectors that are closer to the negative examples than to the positive ones. In other words, the sum of individual ranks determines how many positive zones a point is in, while the closeness hierarchy comes second.
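A rough numpy sketch of this scoring for a single candidate point, assuming cosine similarity as $s(v)$ (the pair structure below is illustrative):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discovery_score(candidate, target, pairs):
    # pairs: list of (positive_vector, negative_vector) tuples
    ranks = sum(
        1 if cosine(candidate, pos) >= cosine(candidate, neg) else -1
        for pos, neg in pairs
    )
    return sigmoid(cosine(candidate, target)) + ranks
```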
Example:
```http
POST /collections/{collection_name}/points/query
{
"query": {
"discover": {
"target": [0.2, 0.1, 0.9, 0.7],
"context": [
{
"positive": 100,
"negative": 718
},
{
"positive": 200,
"negative": 300
}
]
}
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
discover_queries = [
models.QueryRequest(
query=models.DiscoverQuery(
discover=models.DiscoverInput(
target=[0.2, 0.1, 0.9, 0.7],
context=[
models.ContextPair(
positive=100,
negative=718,
),
models.ContextPair(
positive=200,
negative=300,
),
],
)
),
limit=10,
),
]
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: {
discover: {
target: [0.2, 0.1, 0.9, 0.7],
context: [
{
positive: 100,
negative: 718,
},
{
positive: 200,
negative: 300,
},
],
}
},
limit: 10,
});
```
```rust
use qdrant_client::qdrant::{ContextInputBuilder, DiscoverInputBuilder, QueryPointsBuilder};
use qdrant_client::Qdrant;
client
.query(
QueryPointsBuilder::new("{collection_name}").query(
DiscoverInputBuilder::new(
vec![0.2, 0.1, 0.9, 0.7],
ContextInputBuilder::default()
.add_pair(100, 718)
.add_pair(200, 300),
)
.build(),
),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.ContextInput;
import io.qdrant.client.grpc.Points.ContextInputPair;
import io.qdrant.client.grpc.Points.DiscoverInput;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.VectorInputFactory.vectorInput;
import static io.qdrant.client.QueryFactory.discover;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(discover(DiscoverInput.newBuilder()
.setTarget(vectorInput(0.2f, 0.1f, 0.9f, 0.7f))
.setContext(ContextInput.newBuilder()
.addAllPairs(List.of(
ContextInputPair.newBuilder()
.setPositive(vectorInput(100))
.setNegative(vectorInput(718))
.build(),
ContextInputPair.newBuilder()
.setPositive(vectorInput(200))
.setNegative(vectorInput(300))
.build()))
.build())
.build()))
.setLimit(10)
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new DiscoverInput {
Target = new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
Context = new ContextInput {
Pairs = {
new ContextInputPair {
Positive = 100,
Negative = 718
},
new ContextInputPair {
Positive = 200,
Negative = 300
},
}
},
},
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryDiscover(&qdrant.DiscoverInput{
Target: qdrant.NewVectorInput(0.2, 0.1, 0.9, 0.7),
Context: &qdrant.ContextInput{
Pairs: []*qdrant.ContextInputPair{
{
Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
},
{
Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(200)),
Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(300)),
},
},
},
}),
})
```
<aside role="status">
Notes about discovery search:
* When providing ids as examples, they will be excluded from the results.
* Score is always in descending order (larger is better), regardless of the metric used.
* Since the space is hard-constrained by the context, it is normal for accuracy to drop when using default settings. To mitigate this, increase the `ef` search parameter to something above 64; this will already be much better than the default 16, e.g.: `"params": { "ef": 128 }`
</aside>
### Context search
Conversely, in the absence of a target, a rigid integer-by-integer function doesn't provide much guidance for the search when utilizing a proximity graph like HNSW. Instead, context search employs a function derived from the [triplet-loss](/articles/triplet-loss/) concept, which is usually applied during model training. For context search, this function is adapted to steer the search towards areas with fewer negative examples.
![Context search](/docs/context-search.png)
We can directly associate the score function to a loss function, where 0.0 is the maximum score a point can have, which means it is only in positive areas. As soon as a point exists closer to a negative example, its loss will simply be the difference of the positive and negative similarities.
$$
\text{context score} = \sum \min(s(v^+_i) - s(v^-_i), 0.0)
$$
Where $v^+_i$ and $v^-_i$ are the positive and negative examples of each pair, and $s(v)$ is the similarity function.
Using this kind of search, you can expect the output to not necessarily be around a single point, but rather, to be any point that isn’t closer to a negative example, which creates a constrained diverse result. So, even when the API is not called [`recommend`](#recommendation-api), recommendation systems can also use this approach and adapt it for their specific use-cases.
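As with discovery, a rough numpy sketch of this score for a single candidate point, assuming cosine similarity as $s(v)$:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def context_score(candidate, pairs):
    # pairs: list of (positive_vector, negative_vector) tuples
    return sum(
        min(cosine(candidate, pos) - cosine(candidate, neg), 0.0)
        for pos, neg in pairs
    )
```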
Example:
```http
POST /collections/{collection_name}/points/query
{
"query": {
"context": [
{
"positive": 100,
"negative": 718
},
{
"positive": 200,
"negative": 300
}
]
},
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
discover_queries = [
models.QueryRequest(
query=models.ContextQuery(
context=[
models.ContextPair(
positive=100,
negative=718,
),
models.ContextPair(
positive=200,
negative=300,
),
],
),
limit=10,
),
]
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: {
context: [
{
positive: 100,
negative: 718,
},
{
positive: 200,
negative: 300,
},
]
},
limit: 10,
});
```
```rust
use qdrant_client::qdrant::{ContextInputBuilder, QueryPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}").query(
ContextInputBuilder::default()
.add_pair(100, 718)
.add_pair(200, 300)
.build(),
),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.ContextInput;
import io.qdrant.client.grpc.Points.ContextInputPair;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.VectorInputFactory.vectorInput;
import static io.qdrant.client.QueryFactory.context;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(context(ContextInput.newBuilder()
.addAllPairs(List.of(
ContextInputPair.newBuilder()
.setPositive(vectorInput(100))
.setNegative(vectorInput(718))
.build(),
ContextInputPair.newBuilder()
.setPositive(vectorInput(200))
.setNegative(vectorInput(300))
.build()))
.build()))
.setLimit(10)
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new ContextInput {
Pairs = {
new ContextInputPair {
Positive = 100,
Negative = 718
},
new ContextInputPair {
Positive = 200,
Negative = 300
},
}
},
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryContext(&qdrant.ContextInput{
Pairs: []*qdrant.ContextInputPair{
{
Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
},
{
Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(200)),
Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(300)),
},
},
}),
})
```
<aside role="status">
Notes about context search:
* When providing ids as examples, they will be excluded from the results.
* Score is always in descending order (larger is better), regardless of the metric used.
* Best possible score is `0.0`, and it is normal that many points get this score.
</aside>
| documentation/concepts/explore.md |
---
title: Optimizer
weight: 70
aliases:
- ../optimizer
---
# Optimizer
It is much more efficient to apply changes in batches than perform each change individually, as many other databases do. Qdrant here is no exception. Since Qdrant operates with data structures that are not always easy to change, it is sometimes necessary to rebuild those structures completely.
Storage optimization in Qdrant occurs at the segment level (see [storage](../storage/)).
In this case, the segment to be optimized remains readable for the time of the rebuild.
![Segment optimization](/docs/optimization.svg)
The availability is achieved by wrapping the segment into a proxy that transparently handles data changes.
Changed data is placed in the copy-on-write segment, which has priority for retrieval and subsequent updates.
## Vacuum Optimizer
The simplest example of a case where you need to rebuild a segment repository is to remove points.
Like many other databases, Qdrant does not delete entries immediately after a query.
Instead, it marks records as deleted and ignores them for future queries.
This strategy allows us to minimize disk access - one of the slowest operations.
However, a side effect of this strategy is that, over time, deleted records accumulate, occupy memory and slow down the system.
To avoid these adverse effects, Vacuum Optimizer is used.
It is used if the segment has accumulated too many deleted records.
The criteria for starting the optimizer are defined in the configuration file.
Here is an example of parameter values:
```yaml
storage:
optimizers:
# The minimal fraction of deleted vectors in a segment, required to perform segment optimization
deleted_threshold: 0.2
# The minimal number of vectors in a segment, required to perform segment optimization
vacuum_min_vector_number: 1000
```
## Merge Optimizer
The service may require the creation of temporary segments.
Such segments, for example, are created as copy-on-write segments during optimization itself.
It is also essential to have at least one small segment that Qdrant will use to store frequently updated data.
On the other hand, too many small segments lead to suboptimal search performance.
There is the Merge Optimizer, which combines the smallest segments into one large segment. It is used if too many segments are created.
The criteria for starting the optimizer are defined in the configuration file.
Here is an example of parameter values:
```yaml
storage:
optimizers:
# If the number of segments exceeds this value, the optimizer will merge the smallest segments.
max_segment_number: 5
```
## Indexing Optimizer
Qdrant allows you to choose the type of indexes and data storage methods used depending on the number of records.
So, for example, if the number of points is less than 10000, using any index would be less efficient than a brute force scan.
The Indexing Optimizer is used to enable indexing and memmap storage once the minimal number of records is reached.
The criteria for starting the optimizer are defined in the configuration file.
Here is an example of parameter values:
```yaml
storage:
optimizers:
# Maximum size (in kilobytes) of vectors to store in-memory per segment.
# Segments larger than this threshold will be stored as read-only memmapped files.
# Memmap storage is disabled by default, to enable it, set this threshold to a reasonable value.
# To disable memmap storage, set this to `0`.
# Note: 1 kB = 1 vector of size 256
memmap_threshold_kb: 200000
# Maximum size (in kilobytes) of vectors allowed for plain index, exceeding this threshold will enable vector indexing
# Default value is 20,000, based on <https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md>.
# To disable vector indexing, set to `0`.
# Note: 1 kB = 1 vector of size 256.
indexing_threshold_kb: 20000
```
In addition to the configuration file, you can also set optimizer parameters separately for each [collection](../collections/).
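For example, a rough sketch with the Python client (illustrative values) of overriding optimizer settings for a single collection and updating them later, which enables the bulk-loading workflow described next:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Create the collection with indexing disabled for the initial bulk upload.
client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    optimizers_config=models.OptimizersConfigDiff(indexing_threshold=0),
)

# ... upload points ...

# Re-enable indexing once the upload is finished.
client.update_collection(
    collection_name="{collection_name}",
    optimizers_config=models.OptimizersConfigDiff(indexing_threshold=20000),
)
```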
Dynamic parameter updates may be useful, for example, for more efficient initial loading of points. You can disable indexing during the upload process with these settings and enable it immediately after it is finished. As a result, you will not waste extra computation resources on rebuilding the index. | documentation/concepts/optimizer.md |
---
title: Search
weight: 50
aliases:
- ../search
---
# Similarity search
Searching for the nearest vectors is at the core of many representational learning applications.
Modern neural networks are trained to transform objects into vectors so that objects close in the real world appear close in vector space.
It could be, for example, texts with similar meanings, visually similar pictures, or songs of the same genre.
{{< figure src="/docs/encoders.png" caption="This is how vector similarity works" width="70%" >}}
## Query API
*Available as of v1.10.0*
Qdrant provides a single interface for all kinds of search and exploration requests - the `Query API`.
Here is a reference list of what kind of queries you can perform with the `Query API` in Qdrant:
Depending on the `query` parameter, Qdrant might prefer different strategies for the search.
| | |
| --- | --- |
| Nearest Neighbors Search | Vector Similarity Search, also known as k-NN |
| Search By Id | Search by an already stored vector - skip embedding model inference |
| [Recommendations](../explore/#recommendation-api) | Provide positive and negative examples |
| [Discovery Search](../explore/#discovery-api) | Guide the search using context as a one-shot training set |
| [Scroll](../points/#scroll-points) | Get all points with optional filtering |
| [Grouping](../search/#grouping-api) | Group results by a certain field |
| [Order By](../hybrid-queries/#re-ranking-with-stored-values) | Order points by payload key |
| [Hybrid Search](../hybrid-queries/#hybrid-search) | Combine multiple queries to get better results |
| [Multi-Stage Search](../hybrid-queries/#multi-stage-queries) | Optimize performance for large embeddings |
| [Random Sampling](#random-sampling) | Get random points from the collection |
**Nearest Neighbors Search**
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7] // <--- Dense vector
}
```
```python
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7], # <--- Dense vector
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7], // <--- Dense vector
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Condition, Filter, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(Query::new_nearest(vec![0.2, 0.1, 0.9, 0.7]))
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(List.of(0.2f, 0.1f, 0.9f, 0.7f)))
.build()).get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
})
```
**Search By Id**
```http
POST /collections/{collection_name}/points/query
{
"query": "43cf51e2-8777-4f52-bc74-c2cbde0c8b04" // <--- point id
}
```
```python
client.query_points(
collection_name="{collection_name}",
query="43cf51e2-8777-4f52-bc74-c2cbde0c8b04", # <--- point id
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Condition, Filter, PointId, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(Query::new_nearest(PointId::new("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")))
)
.await?;
```
```java
import java.util.UUID;
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(UUID.fromString("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")))
.build()).get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: Guid.Parse("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryID(qdrant.NewID("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")),
})
```
## Metrics
There are many ways to estimate the similarity of vectors with each other.
In Qdrant terms, these ways are called metrics.
The choice of metric depends on the vectors obtained and, in particular, on the neural network encoder training method.
Qdrant supports the following most popular metrics (see the snippet after this list for how a metric is selected when creating a collection):
* Dot product: `Dot` - <https://en.wikipedia.org/wiki/Dot_product>
* Cosine similarity: `Cosine` - <https://en.wikipedia.org/wiki/Cosine_similarity>
* Euclidean distance: `Euclid` - <https://en.wikipedia.org/wiki/Euclidean_distance>
* Manhattan distance: `Manhattan`* - <https://en.wikipedia.org/wiki/Taxicab_geometry> <i><sup>*Available as of v1.7</sup></i>
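The metric is selected per vector configuration when a collection is created. A minimal Python sketch, assuming a 4-dimensional dense vector collection:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# The distance metric is part of the collection's vector configuration
client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)
```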
The most typical metric used in similarity learning models is the cosine metric.
![Embeddings](/docs/cos.png)
Qdrant computes this metric in two steps, which achieves a higher search speed.
The first step is to normalize the vector when adding it to the collection.
It happens only once for each vector.
The second step is the comparison of vectors.
In this case, it becomes equivalent to a dot product - a very fast operation thanks to SIMD.
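As a standalone illustration (plain NumPy, not the Qdrant API), the following sketch shows why normalizing once and then taking a dot product gives the same value as computing cosine similarity directly:
```python
import numpy as np

a = np.array([0.2, 0.1, 0.9, 0.7])
b = np.array([0.5, 0.3, 0.2, 0.3])

# Cosine similarity computed directly
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Step 1: normalize once (done at insertion time in Qdrant)
a_norm = a / np.linalg.norm(a)
b_norm = b / np.linalg.norm(b)

# Step 2: compare with a plain dot product
assert np.isclose(cosine, np.dot(a_norm, b_norm))
```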
Depending on the query configuration, Qdrant might prefer different strategies for the search.
Read more about it in the [query planning](#query-planning) section.
## Search API
Let's look at an example of a search query.
REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/query-points)
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.79],
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
},
"params": {
"hnsw_ef": 128,
"exact": false
},
"limit": 3
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
query_filter=models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(
value="London",
),
)
]
),
search_params=models.SearchParams(hnsw_ef=128, exact=False),
limit=3,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
filter: {
must: [
{
key: "city",
match: {
value: "London",
},
},
],
},
params: {
hnsw_ef: 128,
exact: false,
},
limit: 3,
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder, SearchParamsBuilder};
use qdrant_client::Qdrant;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.filter(Filter::must([Condition::matches(
"city",
"London".to_string(),
)]))
.params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.SearchParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")).build())
.setParams(SearchParams.newBuilder().setExact(false).setHnswEf(128).build())
.setLimit(3)
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
filter: MatchKeyword("city", "London"),
searchParams: new SearchParams { Exact = false, HnswEf = 128 },
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
},
Params: &qdrant.SearchParams{
Exact: qdrant.PtrOf(false),
HnswEf: qdrant.PtrOf(uint64(128)),
},
})
```
In this example, we are looking for vectors similar to vector `[0.2, 0.1, 0.9, 0.7]`.
The `limit` parameter (or its alias `top`) specifies the number of most similar results we would like to retrieve.
Values under the key `params` specify custom parameters for the search.
Currently, these can be:
* `hnsw_ef` - value that specifies the `ef` parameter of the HNSW algorithm.
* `exact` - option to not use the approximate search (ANN). If set to true, the search may run for a long time, as it performs a full scan to retrieve exact results.
* `indexed_only` - with this option you can disable the search in those segments where the vector index is not built yet. This may be useful if you want to minimize the impact on search performance while the collection is also being updated. Using this option may lead to a partial result if the collection is not fully indexed yet, so consider using it only if eventual consistency is acceptable for your use case (see the sketch after this list).
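A minimal Python sketch of passing `indexed_only`, assuming a client version that exposes this field of `SearchParams`:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.query_points(
    collection_name="{collection_name}",
    query=[0.2, 0.1, 0.9, 0.7],
    # Skip segments whose vector index has not been built yet
    search_params=models.SearchParams(indexed_only=True),
    limit=3,
)
```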
Since the `filter` parameter is specified, the search is performed only among those points that satisfy the filter condition.
See details of possible filters and how they work in the [filtering](../filtering/) section.
An example result of this API would be:
```json
{
"result": [
{ "id": 10, "score": 0.81 },
{ "id": 14, "score": 0.75 },
{ "id": 11, "score": 0.73 }
],
"status": "ok",
"time": 0.001
}
```
The `result` contains a list of found point ids ordered by `score`.
Note that payload and vector data is missing in these results by default.
See [payload and vector in the result](#payload-and-vector-in-the-result) on how
to include it.
*Available as of v0.10.0*
If the collection was created with multiple vectors, the name of the vector to use for searching should be provided:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"using": "image",
"limit": 3
}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
using="image",
limit=3,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
using: "image",
limit: 3,
});
```
```rust
use qdrant_client::qdrant::QueryPointsBuilder;
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.using("image"),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setUsing("image")
.setLimit(3)
.build()).get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
usingVector: "image",
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Using: qdrant.PtrOf("image"),
})
```
The search is performed only among vectors with the same name.
*Available as of v1.7.0*
If the collection was created with sparse vectors, the name of the sparse vector to use for searching should be provided:
You can still use payload filtering and other features of the search API with sparse vectors.
There are however important differences between dense and sparse vector search:
| Aspect | Sparse Query | Dense Query |
| --- | --- | --- |
| Scoring Metric | Always `Dot product`, no need to specify it | The configured `Distance` metric, e.g. Dot, Cosine |
| Search Type | Always exact in Qdrant | HNSW is an approximate NN |
| Return Behaviour | Returns only vectors with non-zero values in the same indices as the query vector | Returns `limit` vectors |
In general, the speed of the search is proportional to the number of non-zero values in the query vector.
```http
POST /collections/{collection_name}/points/query
{
"query": {
"indices": [6, 7],
"values": [1, 2]
},
"using": "text",
"limit": 3
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=models.SparseVector(
indices=[1, 7],
values=[2.0, 1.0],
),
using="text",
limit=3,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: {
indices: [1, 7],
values: [2.0, 1.0]
},
using: "text",
limit: 3,
});
```
```rust
use qdrant_client::qdrant::QueryPointsBuilder;
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![(1, 2.0), (7, 1.0)])
.limit(3)
.using("text"),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setUsing("text")
.setQuery(nearest(List.of(2.0f, 1.0f), List.of(1, 7)))
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
    query: new (float, uint)[] { (2.0f, 1), (1.0f, 7) },
usingVector: "text",
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuerySparse(
        []uint32{1, 7},
[]float32{2.0, 1.0}),
Using: qdrant.PtrOf("text"),
})
```
### Filtering results by score
In addition to payload filtering, it might be useful to filter out results with a low similarity score.
For example, you may know the minimal acceptance score for your model and not want any results that are less similar than that threshold.
In this case, you can use the `score_threshold` parameter of the search query.
It will exclude all results with a score worse than the given one.
<aside role="status">This parameter may exclude lower or higher scores depending on the used metric. For example, higher scores of Euclidean metric are considered more distant and, therefore, will be excluded.</aside>
### Payload and vector in the result
By default, retrieval methods do not return any stored information such as
payload and vectors. Additional parameters `with_vectors` and `with_payload`
alter this behavior.
Example:
```http
POST /collections/{collection_name}/points/query
{
"": [0.2, 0.1, 0.9, 0.7],
"with_vectors": true,
"with_payload": true
}
```
```python
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
with_vectors=True,
with_payload=True,
)
```
```typescript
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
with_vector: true,
with_payload: true,
});
```
```rust
use qdrant_client::qdrant::QueryPointsBuilder;
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.with_payload(true)
.with_vectors(true),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.WithVectorsSelectorFactory;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.WithPayloadSelectorFactory.enable;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setWithPayload(enable(true))
.setWithVectors(WithVectorsSelectorFactory.enable(true))
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
payloadSelector: true,
vectorsSelector: true,
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
WithPayload: qdrant.NewWithPayload(true),
WithVectors: qdrant.NewWithVectors(true),
})
```
You can use `with_payload` to scope to or filter a specific payload subset.
You can even specify an array of items to include, such as `city`,
`village`, and `town`:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"with_payload": ["city", "village", "town"]
}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
with_payload=["city", "village", "town"],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
with_payload: ["city", "village", "town"],
});
```
```rust
use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointsBuilder};
use qdrant_client::Qdrant;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.with_payload(SelectorOptions::Include(
vec![
"city".to_string(),
"village".to_string(),
"town".to_string(),
]
.into(),
))
.with_vectors(true),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.WithPayloadSelectorFactory.include;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setWithPayload(include(List.of("city", "village", "town")))
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
payloadSelector: new WithPayloadSelector
{
Include = new PayloadIncludeSelector
{
Fields = { new string[] { "city", "village", "town" } }
}
},
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
WithPayload: qdrant.NewWithPayloadInclude("city", "village", "town"),
})
```
Or use `include` or `exclude` explicitly. For example, to exclude `city`:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"with_payload": {
"exclude": ["city"]
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
with_payload=models.PayloadSelectorExclude(
exclude=["city"],
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
with_payload: {
exclude: ["city"],
},
});
```
```rust
use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.with_payload(SelectorOptions::Exclude(vec!["city".to_string()].into()))
.with_vectors(true),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.WithPayloadSelectorFactory.exclude;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setWithPayload(exclude(List.of("city")))
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
payloadSelector: new WithPayloadSelector
{
Exclude = new PayloadExcludeSelector { Fields = { new string[] { "city" } } }
},
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
WithPayload: qdrant.NewWithPayloadExclude("city"),
})
```
It is possible to target nested fields using a dot notation:
* `payload.nested_field` - for a nested field
* `payload.nested_array[].sub_field` - for projecting nested fields within an array
Accessing array elements by index is currently not supported.
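For example, the dot notation can be used directly inside a `with_payload` include list. A minimal Python sketch with hypothetical field names (`metadata.title` and `chapters[].heading` are placeholders, not fields used elsewhere in this document):
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

client.query_points(
    collection_name="{collection_name}",
    query=[0.2, 0.1, 0.9, 0.7],
    # Project only the selected nested fields of the payload
    with_payload=["metadata.title", "chapters[].heading"],
)
```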
## Batch search API
*Available as of v0.10.0*
The batch search API enables you to perform multiple search requests via a single request.
Its semantics are straightforward: `n` batched search requests are equivalent to `n` singular search requests.
This approach has several advantages. Logically, fewer network connections are required, which can be very beneficial on its own.
More importantly, batched requests will be efficiently processed via the query planner, which can detect and optimize requests if they have the same `filter`.
This can have a great effect on latency for non-trivial filters, as the intermediate results can be shared among the requests.
In order to use it, simply pack together your search requests. All the regular attributes of a search request are of course available.
```http
POST /collections/{collection_name}/points/query/batch
{
"searches": [
{
"query": [0.2, 0.1, 0.9, 0.7],
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
},
"limit": 3
},
{
"query": [0.5, 0.3, 0.2, 0.3],
"filter": {
"must": [
{
"key": "city",
"match": {
"value": "London"
}
}
]
},
"limit": 3
}
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
filter_ = models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(
value="London",
),
)
]
)
search_queries = [
models.QueryRequest(query=[0.2, 0.1, 0.9, 0.7], filter=filter_, limit=3),
models.QueryRequest(query=[0.5, 0.3, 0.2, 0.3], filter=filter_, limit=3),
]
client.query_batch_points(collection_name="{collection_name}", requests=search_queries)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
const filter = {
must: [
{
key: "city",
match: {
value: "London",
},
},
],
};
const searches = [
{
query: [0.2, 0.1, 0.9, 0.7],
filter,
limit: 3,
},
{
query: [0.5, 0.3, 0.2, 0.3],
filter,
limit: 3,
},
];
client.queryBatch("{collection_name}", {
searches,
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, QueryBatchPointsBuilder, QueryPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let filter = Filter::must([Condition::matches("city", "London".to_string())]);
let searches = vec![
QueryPointsBuilder::new("{collection_name}")
        .query(vec![0.2, 0.1, 0.9, 0.7])
.limit(3)
.filter(filter.clone())
.build(),
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.5, 0.3, 0.2, 0.3])
.limit(3)
.filter(filter)
.build(),
];
client
.query_batch(QueryBatchPointsBuilder::new("{collection_name}", searches))
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.ConditionFactory.matchKeyword;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
Filter filter = Filter.newBuilder().addMust(matchKeyword("city", "London")).build();
List<QueryPoints> searches = List.of(
QueryPoints.newBuilder()
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setFilter(filter)
.setLimit(3)
.build(),
QueryPoints.newBuilder()
        .setQuery(nearest(0.5f, 0.3f, 0.2f, 0.3f))
.setFilter(filter)
.setLimit(3)
.build());
client.queryBatchAsync("{collection_name}", searches).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
var filter = MatchKeyword("city", "London");
var queries = new List<QueryPoints>
{
new()
{
CollectionName = "{collection_name}",
Query = new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
Filter = filter,
Limit = 3
},
new()
{
CollectionName = "{collection_name}",
Query = new float[] { 0.5f, 0.3f, 0.2f, 0.3f },
Filter = filter,
Limit = 3
}
};
await client.QueryBatchAsync(collectionName: "{collection_name}", queries: queries);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
filter := qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
}
client.QueryBatch(context.Background(), &qdrant.QueryBatchPoints{
CollectionName: "{collection_name}",
QueryPoints: []*qdrant.QueryPoints{
{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Filter: &filter,
},
{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.5, 0.3, 0.2, 0.3),
Filter: &filter,
},
},
})
```
The result of this API contains one array per search request.
```json
{
"result": [
[
{ "id": 10, "score": 0.81 },
{ "id": 14, "score": 0.75 },
{ "id": 11, "score": 0.73 }
],
[
{ "id": 1, "score": 0.92 },
{ "id": 3, "score": 0.89 },
{ "id": 9, "score": 0.75 }
]
],
"status": "ok",
"time": 0.001
}
```
## Pagination
*Available as of v0.8.3*
The search and [recommendation](../explore/#recommendation-api) APIs allow you to skip the first results of the search and return only results starting from a specified offset:
Example:
```http
POST /collections/{collection_name}/points/query
{
"query": [0.2, 0.1, 0.9, 0.7],
"with_vectors": true,
"with_payload": true,
"limit": 10,
"offset": 100
}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[0.2, 0.1, 0.9, 0.7],
with_vectors=True,
with_payload=True,
limit=10,
offset=100,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: [0.2, 0.1, 0.9, 0.7],
with_vector: true,
with_payload: true,
limit: 10,
offset: 100,
});
```
```rust
use qdrant_client::qdrant::QueryPointsBuilder;
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![0.2, 0.1, 0.9, 0.7])
.with_payload(true)
.with_vectors(true)
.limit(10)
.offset(100),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.WithVectorsSelectorFactory;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.WithPayloadSelectorFactory.enable;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setWithPayload(enable(true))
.setWithVectors(WithVectorsSelectorFactory.enable(true))
.setLimit(10)
.setOffset(100)
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
payloadSelector: true,
vectorsSelector: true,
limit: 10,
offset: 100
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
WithPayload: qdrant.NewWithPayload(true),
WithVectors: qdrant.NewWithVectors(true),
Offset: qdrant.PtrOf(uint64(100)),
})
```
This is equivalent to retrieving the 11th page with 10 records per page.
<aside role="alert">Large offset values may cause performance issues</aside>
Vector-based retrieval in general, and the HNSW index in particular, are not designed to be paginated.
It is impossible to retrieve the Nth closest vector without retrieving the first N vectors first.
However, using the offset parameter saves resources by reducing network traffic and the number of times the storage is accessed.
Using an `offset` parameter requires internally retrieving `offset + limit` points, but payload and vector data are only read from storage for the points that are actually returned.
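As a small sketch of the arithmetic, a 1-based page number translates into `offset = (page - 1) * page_size`:
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

page_size = 10
page_number = 11  # 1-based, i.e. the 11th page

client.query_points(
    collection_name="{collection_name}",
    query=[0.2, 0.1, 0.9, 0.7],
    limit=page_size,
    offset=(page_number - 1) * page_size,  # 100
)
```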
## Grouping API
*Available as of v1.2.0*
It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results.
For example, if you have a large document split into multiple chunks, and you want to search or [recommend](../explore/#recommendation-api) on a per-document basis, you can group the results by the document ID.
Consider having points with the following payloads:
```json
[
{
"id": 0,
"payload": {
"chunk_part": 0,
"document_id": "a"
},
"vector": [0.91]
},
{
"id": 1,
"payload": {
"chunk_part": 1,
"document_id": ["a", "b"]
},
"vector": [0.8]
},
{
"id": 2,
"payload": {
"chunk_part": 2,
"document_id": "a"
},
"vector": [0.2]
},
{
"id": 3,
"payload": {
"chunk_part": 0,
"document_id": 123
},
"vector": [0.79]
},
{
"id": 4,
"payload": {
"chunk_part": 1,
"document_id": 123
},
"vector": [0.75]
},
{
"id": 5,
"payload": {
"chunk_part": 0,
"document_id": -10
},
"vector": [0.6]
}
]
```
With the ***groups*** API, you will be able to get the best *N* points for each document, assuming that the payload of the points contains the document ID. Of course, there will be times when the best *N* points cannot be fulfilled due to a lack of points or a large distance with respect to the query. In every case, the `group_size` is a best-effort parameter, akin to the `limit` parameter.
### Search groups
REST API ([Schema](https://api.qdrant.tech/api-reference/search/query-points-groups)):
```http
POST /collections/{collection_name}/points/query/groups
{
// Same as in the regular query API
"query": [1.1],
// Grouping parameters
"group_by": "document_id", // Path of the field to group by
"limit": 4, // Max amount of groups
"group_size": 2 // Max amount of points per group
}
```
```python
client.query_points_groups(
collection_name="{collection_name}",
# Same as in the regular query_points() API
query=[1.1],
# Grouping parameters
group_by="document_id", # Path of the field to group by
limit=4, # Max amount of groups
group_size=2, # Max amount of points per group
)
```
```typescript
client.queryGroups("{collection_name}", {
query: [1.1],
group_by: "document_id",
limit: 4,
group_size: 2,
});
```
```rust
use qdrant_client::qdrant::QueryPointGroupsBuilder;
client
.query_groups(
QueryPointGroupsBuilder::new("{collection_name}", "document_id")
.query(vec![0.2, 0.1, 0.9, 0.7])
.group_size(2u64)
.with_payload(true)
.with_vectors(true)
.limit(4u64),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.grpc.Points.QueryPointGroups;
client.queryGroupsAsync(
QueryPointGroups.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setGroupBy("document_id")
.setLimit(4)
.setGroupSize(2)
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryGroupsAsync(
collectionName: "{collection_name}",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
groupBy: "document_id",
limit: 4,
groupSize: 2
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
GroupBy: "document_id",
GroupSize: qdrant.PtrOf(uint64(2)),
})
```
The output of a ***groups*** call looks like this:
```json
{
"result": {
"groups": [
{
"id": "a",
"hits": [
{ "id": 0, "score": 0.91 },
{ "id": 1, "score": 0.85 }
]
},
{
"id": "b",
"hits": [
{ "id": 1, "score": 0.85 }
]
},
{
"id": 123,
"hits": [
{ "id": 3, "score": 0.79 },
{ "id": 4, "score": 0.75 }
]
},
{
"id": -10,
"hits": [
{ "id": 5, "score": 0.6 }
]
}
]
},
"status": "ok",
"time": 0.001
}
```
The groups are ordered by the score of the top point in the group. Inside each group the points are sorted too.
If the `group_by` field of a point is an array (e.g. `"document_id": ["a", "b"]`), the point can be included in multiple groups (e.g. `"document_id": "a"` and `"document_id": "b"`).
<aside role="status">This feature relies heavily on the `group_by` key provided. To improve performance, make sure to create a dedicated index for it.</aside>
**Limitations**:
* Only [keyword](../payload/#keyword) and [integer](../payload/#integer) payload values are supported for the `group_by` parameter. Payload values with other types will be ignored.
* At the moment, pagination is not enabled when using **groups**, so the `offset` parameter is not allowed.
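To follow the advice in the note above, create a payload index on the `group_by` field. A minimal Python sketch, assuming `document_id` holds keyword values:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# A dedicated index on the grouping field speeds up the groups API
client.create_payload_index(
    collection_name="{collection_name}",
    field_name="document_id",
    field_schema=models.PayloadSchemaType.KEYWORD,
)
```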
### Lookup in groups
*Available as of v1.3.0*
Having multiple points for parts of the same item often introduces redundancy in the stored data. This may be fine if the information shared by the points is small, but it can become a problem if the payload is large, because it multiplies the storage space needed to store the points by the number of points per group.
One way of optimizing storage when using groups is to store the information shared by the points with the same group id in a single point in another collection. Then, when using the [**groups** API](#grouping-api), add the `with_lookup` parameter to bring the information from those points into each group.
![Group id matches point id](/docs/lookup_id_linking.png)
This has the extra benefit of having a single point to update when the information shared by the points in a group changes.
For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id of the document it belongs to in the payload of the chunk point.
In this case, to bring the information from the documents into the chunks grouped by the document id, you can use the `with_lookup` parameter:
```http
POST /collections/chunks/points/query/groups
{
// Same as in the regular query API
"query": [1.1],
// Grouping parameters
"group_by": "document_id",
"limit": 2,
"group_size": 2,
// Lookup parameters
"with_lookup": {
// Name of the collection to look up points in
"collection": "documents",
// Options for specifying what to bring from the payload
// of the looked up point, true by default
"with_payload": ["title", "text"],
// Options for specifying what to bring from the vector(s)
// of the looked up point, true by default
"with_vectors": false
}
}
```
```python
client.query_points_groups(
collection_name="chunks",
# Same as in the regular search() API
query=[1.1],
# Grouping parameters
group_by="document_id", # Path of the field to group by
limit=2, # Max amount of groups
group_size=2, # Max amount of points per group
# Lookup parameters
with_lookup=models.WithLookup(
# Name of the collection to look up points in
collection="documents",
# Options for specifying what to bring from the payload
# of the looked up point, True by default
with_payload=["title", "text"],
# Options for specifying what to bring from the vector(s)
# of the looked up point, True by default
with_vectors=False,
),
)
```
```typescript
client.queryGroups("{collection_name}", {
query: [1.1],
group_by: "document_id",
limit: 2,
group_size: 2,
with_lookup: {
collection: "documents",
with_payload: ["title", "text"],
with_vectors: false,
},
});
```
```rust
use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointGroupsBuilder, WithLookupBuilder};
client
.query_groups(
QueryPointGroupsBuilder::new("{collection_name}", "document_id")
.query(vec![0.2, 0.1, 0.9, 0.7])
.limit(2u64)
            .group_size(2u64)
.with_lookup(
WithLookupBuilder::new("documents")
.with_payload(SelectorOptions::Include(
vec!["title".to_string(), "text".to_string()].into(),
))
.with_vectors(false),
),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.grpc.Points.QueryPointGroups;
import io.qdrant.client.grpc.Points.WithLookup;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.WithVectorsSelectorFactory.enable;
import static io.qdrant.client.WithPayloadSelectorFactory.include;
client.queryGroupsAsync(
QueryPointGroups.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setGroupBy("document_id")
.setLimit(2)
.setGroupSize(2)
.setWithLookup(
WithLookup.newBuilder()
.setCollection("documents")
.setWithPayload(include(List.of("title", "text")))
.setWithVectors(enable(false))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.SearchGroupsAsync(
collectionName: "{collection_name}",
vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f},
groupBy: "document_id",
limit: 2,
groupSize: 2,
withLookup: new WithLookup
{
Collection = "documents",
WithPayload = new WithPayloadSelector
{
Include = new PayloadIncludeSelector { Fields = { new string[] { "title", "text" } } }
},
WithVectors = false
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
GroupBy: "document_id",
GroupSize: qdrant.PtrOf(uint64(2)),
WithLookup: &qdrant.WithLookup{
Collection: "documents",
WithPayload: qdrant.NewWithPayloadInclude("title", "text"),
},
})
```
For the `with_lookup` parameter, you can also use the shorthand `with_lookup="documents"` to bring the whole payload and vector(s) without explicitly specifying it.
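A minimal Python sketch of the shorthand form:
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

client.query_points_groups(
    collection_name="chunks",
    query=[1.1],
    group_by="document_id",
    limit=2,
    group_size=2,
    # Shorthand: bring the whole payload and vector(s) of the looked up point
    with_lookup="documents",
)
```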
The looked up result will show up under `lookup` in each group.
```json
{
"result": {
"groups": [
{
"id": 1,
"hits": [
{ "id": 0, "score": 0.91 },
{ "id": 1, "score": 0.85 }
],
"lookup": {
"id": 1,
"payload": {
"title": "Document A",
"text": "This is document A"
}
}
},
{
"id": 2,
"hits": [
{ "id": 1, "score": 0.85 }
],
"lookup": {
"id": 2,
"payload": {
"title": "Document B",
"text": "This is document B"
}
}
}
]
},
"status": "ok",
"time": 0.001
}
```
Since the lookup is done by matching directly with the point id, any group id that is not an existing (and valid) point id in the lookup collection will be ignored, and the `lookup` field will be empty.
## Random Sampling
*Available as of v1.11.0*
In some cases it might be useful to retrieve a random sample of points from the collection, for example for debugging, testing, or providing entry points for exploration.
The random sampling API is part of the [Universal Query API](#query-api) and can be used in the same way as the regular search API.
```http
POST /collections/{collection_name}/points/query
{
    "query": {
        "sample": "random"
    }
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
sampled = client.query_points(
collection_name="{collection_name}",
query=models.SampleQuery(sample=models.Sample.Random)
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
const sampled = await client.query("{collection_name}", {
query: {
sample: "random",
},
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Query, QueryPointsBuilder, Sample};
let client = Qdrant::from_url("http://localhost:6334").build()?;
let sampled = client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(Query::new_sample(Sample::Random))
)
.await?;
```
```java
import static io.qdrant.client.QueryFactory.sample;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import io.qdrant.client.grpc.Points.Sample;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(sample(Sample.Random))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(collectionName: "{collection_name}", query: Sample.Random);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
	CollectionName: "{collection_name}",
	Query:          qdrant.NewQuerySample(qdrant.Sample_Random),
})
```
## Query planning
Depending on the filter used in the search, there are several possible scenarios for query execution.
Qdrant chooses one of the query execution options depending on the available indexes, the complexity of the conditions and the cardinality of the filtering result.
This process is called query planning.
The strategy selection process relies heavily on heuristics and can vary from release to release.
However, the general principles are:
* planning is performed for each segment independently (see [storage](../storage/) for more information about segments)
* prefer a full scan if the number of points is below a threshold
* estimate the cardinality of a filtered result before selecting a strategy
* retrieve points using payload index (see [indexing](../indexing/)) if cardinality is below threshold
* use filterable vector index if the cardinality is above a threshold
You can adjust the threshold using a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), as well as independently for each collection.
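For example, one related per-collection setting is `full_scan_threshold` in the HNSW configuration (measured in kilobytes of vector data). A minimal Python sketch of overriding it at collection creation; the value here is only illustrative:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
    # Below this amount of vector data (in KB), a full scan is preferred over the index
    hnsw_config=models.HnswConfigDiff(full_scan_threshold=10000),
)
```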
---
title: Payload
weight: 45
aliases:
- ../payload
---
# Payload
One of the significant features of Qdrant is the ability to store additional information along with vectors.
This information is called `payload` in Qdrant terminology.
Qdrant allows you to store any information that can be represented using JSON.
Here is an example of a typical payload:
```json
{
"name": "jacket",
"colors": ["red", "blue"],
"count": 10,
"price": 11.99,
"locations": [
{
"lon": 52.5200,
"lat": 13.4050
}
],
"reviews": [
{
"user": "alice",
"score": 4
},
{
"user": "bob",
"score": 5
}
]
}
```
## Payload types
In addition to storing payloads, Qdrant also allows you to search based on certain kinds of values.
This feature is implemented as additional filters during the search and will enable you to incorporate custom logic on top of semantic similarity.
During the filtering, Qdrant will check the conditions over those values that match the type of the filtering condition. If the stored value type does not fit the filtering condition, it will be considered not satisfied.
For example, you will get an empty output if you apply the [range condition](../filtering/#range) on the string data.
However, arrays (multiple values of the same type) are treated a little bit differently. When we apply a filter to an array, it will succeed if at least one of the values inside the array meets the condition.
The filtering process is discussed in detail in the section [Filtering](../filtering/).
Let's look at the data types that Qdrant supports for searching:
### Integer
`integer` - 64-bit integer in the range from `-9223372036854775808` to `9223372036854775807`.
Example of single and multiple `integer` values:
```json
{
"count": 10,
"sizes": [35, 36, 38]
}
```
### Float
`float` - 64-bit floating point number.
Example of single and multiple `float` values:
```json
{
"price": 11.99,
"ratings": [9.1, 9.2, 9.4]
}
```
### Bool
`bool` - binary value, either `true` or `false`.
Example of single and multiple `bool` values:
```json
{
"is_delivered": true,
"responses": [false, false, true, false]
}
```
### Keyword
`keyword` - string value.
Example of single and multiple `keyword` values:
```json
{
"name": "Alice",
"friends": [
"bob",
"eva",
"jack"
]
}
```
### Geo
`geo` is used to represent geographical coordinates.
Example of single and multiple `geo` values:
```json
{
"location": {
"lon": 52.5200,
"lat": 13.4050
},
"cities": [
{
"lon": 51.5072,
"lat": 0.1276
},
{
"lon": 40.7128,
"lat": 74.0060
}
]
}
```
A coordinate should be described as an object containing two fields: `lon` - for longitude, and `lat` - for latitude.
### Datetime
*Available as of v1.8.0*
`datetime` - date and time in [RFC 3339] format.
See the following examples of single and multiple `datetime` values:
```json
{
"created_at": "2023-02-08T10:49:00Z",
"updated_at": [
"2023-02-08T13:52:00Z",
"2023-02-21T21:23:00Z"
]
}
```
The following formats are supported:
- `"2023-02-08T10:49:00Z"` ([RFC 3339], UTC)
- `"2023-02-08T11:49:00+01:00"` ([RFC 3339], with timezone)
- `"2023-02-08T10:49:00"` (without timezone, UTC is assumed)
- `"2023-02-08T10:49"` (without timezone and seconds)
- `"2023-02-08"` (only date, midnight is assumed)
Notes about the format:
- `T` can be replaced with a space.
- The `T` and `Z` symbols are case-insensitive.
- UTC is always assumed when the timezone is not specified.
- Timezone can have the following formats: `±HH:MM`, `±HHMM`, `±HH`, or `Z`.
- Seconds can have up to 6 decimals, so the finest granularity for `datetime` is microseconds.
[RFC 3339]: https://datatracker.ietf.org/doc/html/rfc3339#section-5.6
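For example, `datetime` values can be used in range conditions when filtering. A minimal Python sketch, assuming a `created_at` payload field and a client version that exposes `models.DatetimeRange` (v1.8.0+):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.query_points(
    collection_name="{collection_name}",
    query=[0.2, 0.1, 0.9, 0.7],
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="created_at",
                range=models.DatetimeRange(
                    gte="2023-02-08T10:49:00Z",
                    lte="2023-02-21T21:23:00Z",
                ),
            )
        ]
    ),
)
```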
### UUID
*Available as of v1.11.0*
In addition to the basic `keyword` type, Qdrant supports `uuid` type for storing UUID values.
Functionally, it works the same as `keyword`, but internally it stores parsed UUID values.
```json
{
"uuid": "550e8400-e29b-41d4-a716-446655440000",
"uuids": [
"550e8400-e29b-41d4-a716-446655440000",
"550e8400-e29b-41d4-a716-446655440001"
]
}
```
The string representation of a UUID (e.g. `550e8400-e29b-41d4-a716-446655440000`) occupies 36 bytes, while the parsed numeric representation is only 128 bits (16 bytes).
Using the `uuid` index type is recommended in payload-heavy collections to save RAM and improve search performance.
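A minimal Python sketch of creating a `uuid` payload index, assuming a field named `uuid` as in the example above and a client that supports v1.11.0+:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_payload_index(
    collection_name="{collection_name}",
    field_name="uuid",
    field_schema=models.PayloadSchemaType.UUID,
)
```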
## Create point with payload
REST API ([Schema](https://api.qdrant.tech/api-reference/points/upsert-points))
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"vector": [0.05, 0.61, 0.76, 0.74],
"payload": {"city": "Berlin", "price": 1.99}
},
{
"id": 2,
"vector": [0.19, 0.81, 0.75, 0.11],
"payload": {"city": ["Berlin", "London"], "price": 1.99}
},
{
"id": 3,
"vector": [0.36, 0.55, 0.47, 0.94],
"payload": {"city": ["Berlin", "Moscow"], "price": [1.99, 2.99]}
}
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
vector=[0.05, 0.61, 0.76, 0.74],
payload={
"city": "Berlin",
"price": 1.99,
},
),
models.PointStruct(
id=2,
vector=[0.19, 0.81, 0.75, 0.11],
payload={
"city": ["Berlin", "London"],
"price": 1.99,
},
),
models.PointStruct(
id=3,
vector=[0.36, 0.55, 0.47, 0.94],
payload={
"city": ["Berlin", "Moscow"],
"price": [1.99, 2.99],
},
),
],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.upsert("{collection_name}", {
points: [
{
id: 1,
vector: [0.05, 0.61, 0.76, 0.74],
payload: {
city: "Berlin",
price: 1.99,
},
},
{
id: 2,
vector: [0.19, 0.81, 0.75, 0.11],
payload: {
city: ["Berlin", "London"],
price: 1.99,
},
},
{
id: 3,
vector: [0.36, 0.55, 0.47, 0.94],
payload: {
city: ["Berlin", "Moscow"],
price: [1.99, 2.99],
},
},
],
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
use qdrant_client::{Payload, Qdrant, QdrantError};
use serde_json::json;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let points = vec![
PointStruct::new(
1,
vec![0.05, 0.61, 0.76, 0.74],
Payload::try_from(json!({"city": "Berlin", "price": 1.99})).unwrap(),
),
PointStruct::new(
2,
vec![0.19, 0.81, 0.75, 0.11],
Payload::try_from(json!({"city": ["Berlin", "London"]})).unwrap(),
),
PointStruct::new(
3,
vec![0.36, 0.55, 0.47, 0.94],
Payload::try_from(json!({"city": ["Berlin", "Moscow"], "price": [1.99, 2.99]}))
.unwrap(),
),
];
client
.upsert_points(UpsertPointsBuilder::new("{collection_name}", points).wait(true))
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.list;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
.putAllPayload(Map.of("city", value("Berlin"), "price", value(1.99)))
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
.putAllPayload(
Map.of("city", list(List.of(value("Berlin"), value("London")))))
.build(),
PointStruct.newBuilder()
.setId(id(3))
.setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
.putAllPayload(
Map.of(
"city",
list(List.of(value("Berlin"), value("London"))),
"price",
list(List.of(value(1.99), value(2.99)))))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new PointStruct
{
Id = 1,
Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
Payload = { ["city"] = "Berlin", ["price"] = 1.99 }
},
new PointStruct
{
Id = 2,
Vectors = new[] { 0.19f, 0.81f, 0.75f, 0.11f },
Payload = { ["city"] = new[] { "Berlin", "London" } }
},
new PointStruct
{
Id = 3,
Vectors = new[] { 0.36f, 0.55f, 0.47f, 0.94f },
Payload =
{
["city"] = new[] { "Berlin", "Moscow" },
["price"] = new Value
{
ListValue = new ListValue { Values = { new Value[] { 1.99, 2.99 } } }
}
}
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
Payload: qdrant.NewValueMap(map[string]any{
"city": "Berlin", "price": 1.99}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11),
Payload: qdrant.NewValueMap(map[string]any{
"city": []any{"Berlin", "London"}}),
},
{
Id: qdrant.NewIDNum(3),
Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94),
Payload: qdrant.NewValueMap(map[string]any{
"city": []any{"Berlin", "London"},
"price": []any{1.99, 2.99}}),
},
},
})
```
## Update payload
### Set payload
Set only the given payload values on a point.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/set-payload)):
```http
POST /collections/{collection_name}/points/payload
{
"payload": {
"property1": "string",
"property2": "string"
},
"points": [
0, 3, 100
]
}
```
```python
client.set_payload(
collection_name="{collection_name}",
payload={
"property1": "string",
"property2": "string",
},
points=[0, 3, 10],
)
```
```typescript
client.setPayload("{collection_name}", {
payload: {
property1: "string",
property2: "string",
},
points: [0, 3, 10],
});
```
```rust
use qdrant_client::qdrant::{
PointsIdsList, SetPayloadPointsBuilder,
};
use qdrant_client::Payload;
use serde_json::json;
client
.set_payload(
SetPayloadPointsBuilder::new(
"{collection_name}",
Payload::try_from(json!({
"property1": "string",
"property2": "string",
}))
.unwrap(),
)
.points_selector(PointsIdsList {
ids: vec![0.into(), 3.into(), 10.into()],
})
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
client
.setPayloadAsync(
"{collection_name}",
Map.of("property1", value("string"), "property2", value("string")),
List.of(id(0), id(3), id(10)),
true,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.SetPayloadAsync(
collectionName: "{collection_name}",
payload: new Dictionary<string, Value> { { "property1", "string" }, { "property2", "string" } },
ids: new ulong[] { 0, 3, 10 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.SetPayload(context.Background(), &qdrant.SetPayloadPoints{
CollectionName: "{collection_name}",
Payload: qdrant.NewValueMap(
map[string]any{"property1": "string", "property2": "string"}),
PointsSelector: qdrant.NewPointsSelector(
qdrant.NewIDNum(0),
qdrant.NewIDNum(3)),
})
```
You don't need to know the ids of the points you want to modify. The alternative
is to use filters.
```http
POST /collections/{collection_name}/points/payload
{
"payload": {
"property1": "string",
"property2": "string"
},
"filter": {
"must": [
{
"key": "color",
"match": {
"value": "red"
}
}
]
}
}
```
```python
client.set_payload(
collection_name="{collection_name}",
payload={
"property1": "string",
"property2": "string",
},
points=models.Filter(
must=[
models.FieldCondition(
key="color",
match=models.MatchValue(value="red"),
),
],
),
)
```
```typescript
client.setPayload("{collection_name}", {
payload: {
property1: "string",
property2: "string",
},
filter: {
must: [
{
key: "color",
match: {
value: "red",
},
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, SetPayloadPointsBuilder};
use qdrant_client::Payload;
use serde_json::json;
client
.set_payload(
SetPayloadPointsBuilder::new(
"{collection_name}",
Payload::try_from(json!({
"property1": "string",
"property2": "string",
}))
.unwrap(),
)
.points_selector(Filter::must([Condition::matches(
"color",
"red".to_string(),
)]))
.wait(true),
)
.await?;
```
```java
import java.util.Map;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.ValueFactory.value;
client
.setPayloadAsync(
"{collection_name}",
Map.of("property1", value("string"), "property2", value("string")),
Filter.newBuilder().addMust(matchKeyword("color", "red")).build(),
true,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.SetPayloadAsync(
collectionName: "{collection_name}",
payload: new Dictionary<string, Value> { { "property1", "string" }, { "property2", "string" } },
filter: MatchKeyword("color", "red")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.SetPayload(context.Background(), &qdrant.SetPayloadPoints{
CollectionName: "{collection_name}",
Payload: qdrant.NewValueMap(
map[string]any{"property1": "string", "property2": "string"}),
PointsSelector: qdrant.NewPointsSelectorFilter(&qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("color", "red"),
},
}),
})
```
*Available as of v1.8.0*
It is possible to modify only a specific key of the payload by using the `key` parameter.
For instance, given the following payload JSON object on a point:
```json
{
"property1": {
"nested_property": "foo",
},
"property2": {
"nested_property": "bar",
}
}
```
You can modify the `nested_property` of `property1` with the following request:
```http
POST /collections/{collection_name}/points/payload
{
"payload": {
"nested_property": "qux",
},
"key": "property1",
"points": [1]
}
```
Resulting in the following payload:
```json
{
"property1": {
"nested_property": "qux",
},
"property2": {
"nested_property": "bar",
}
}
```
### Overwrite payload
Fully replace any existing payload with the given one.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/overwrite-payload)):
```http
PUT /collections/{collection_name}/points/payload
{
"payload": {
"property1": "string",
"property2": "string"
},
"points": [
0, 3, 100
]
}
```
```python
client.overwrite_payload(
collection_name="{collection_name}",
payload={
"property1": "string",
"property2": "string",
},
points=[0, 3, 10],
)
```
```typescript
client.overwritePayload("{collection_name}", {
payload: {
property1: "string",
property2: "string",
},
points: [0, 3, 10],
});
```
```rust
use qdrant_client::qdrant::{PointsIdsList, SetPayloadPointsBuilder};
use qdrant_client::Payload;
use serde_json::json;
client
.overwrite_payload(
SetPayloadPointsBuilder::new(
"{collection_name}",
Payload::try_from(json!({
"property1": "string",
"property2": "string",
}))
.unwrap(),
)
.points_selector(PointsIdsList {
ids: vec![0.into(), 3.into(), 10.into()],
})
.wait(true),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
client
.overwritePayloadAsync(
"{collection_name}",
Map.of("property1", value("string"), "property2", value("string")),
List.of(id(0), id(3), id(10)),
true,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.OverwritePayloadAsync(
collectionName: "{collection_name}",
payload: new Dictionary<string, Value> { { "property1", "string" }, { "property2", "string" } },
ids: new ulong[] { 0, 3, 10 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.OverwritePayload(context.Background(), &qdrant.SetPayloadPoints{
CollectionName: "{collection_name}",
Payload: qdrant.NewValueMap(
map[string]any{"property1": "string", "property2": "string"}),
PointsSelector: qdrant.NewPointsSelector(
qdrant.NewIDNum(0),
qdrant.NewIDNum(3)),
})
```
Like [set payload](#set-payload), you don't need to know the ids of the points
you want to modify. The alternative is to use filters.
### Clear payload
This method removes all payload keys from the specified points.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/clear-payload)):
```http
POST /collections/{collection_name}/points/payload/clear
{
"points": [0, 3, 100]
}
```
```python
client.clear_payload(
collection_name="{collection_name}",
points_selector=[0, 3, 100],
)
```
```typescript
client.clearPayload("{collection_name}", {
points: [0, 3, 100],
});
```
```rust
use qdrant_client::qdrant::{ClearPayloadPointsBuilder, PointsIdsList};
client
.clear_payload(
ClearPayloadPointsBuilder::new("{collection_name}")
.points(PointsIdsList {
ids: vec![0.into(), 3.into(), 10.into()],
})
.wait(true),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
client
.clearPayloadAsync("{collection_name}", List.of(id(0), id(3), id(100)), true, null, null)
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.ClearPayloadAsync(collectionName: "{collection_name}", ids: new ulong[] { 0, 3, 100 });
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.ClearPayload(context.Background(), &qdrant.ClearPayloadPoints{
CollectionName: "{collection_name}",
Points: qdrant.NewPointsSelector(
qdrant.NewIDNum(0),
qdrant.NewIDNum(3)),
})
```
<aside role="status">
You can also use <code>models.FilterSelector</code> to remove the points matching given filter criteria, instead of providing the ids.
</aside>
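For example, a minimal Python sketch using `models.FilterSelector` (the `color` field is illustrative):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Remove the whole payload from every point whose "color" is "red".
client.clear_payload(
    collection_name="{collection_name}",
    points_selector=models.FilterSelector(
        filter=models.Filter(
            must=[
                models.FieldCondition(
                    key="color",
                    match=models.MatchValue(value="red"),
                )
            ]
        )
    ),
)
```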
### Delete payload keys
Delete specific payload keys from points.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-payload)):
```http
POST /collections/{collection_name}/points/payload/delete
{
"keys": ["color", "price"],
"points": [0, 3, 100]
}
```
```python
client.delete_payload(
collection_name="{collection_name}",
keys=["color", "price"],
points=[0, 3, 100],
)
```
```typescript
client.deletePayload("{collection_name}", {
keys: ["color", "price"],
points: [0, 3, 100],
});
```
```rust
use qdrant_client::qdrant::{DeletePayloadPointsBuilder, PointsIdsList};
client
.delete_payload(
DeletePayloadPointsBuilder::new(
"{collection_name}",
vec!["color".to_string(), "price".to_string()],
)
.points_selector(PointsIdsList {
ids: vec![0.into(), 3.into(), 10.into()],
})
.wait(true),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
client
.deletePayloadAsync(
"{collection_name}",
List.of("color", "price"),
List.of(id(0), id(3), id(100)),
true,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.DeletePayloadAsync(
collectionName: "{collection_name}",
keys: ["color", "price"],
ids: new ulong[] { 0, 3, 100 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.DeletePayload(context.Background(), &qdrant.DeletePayloadPoints{
CollectionName: "{collection_name}",
Keys: []string{"color", "price"},
PointsSelector: qdrant.NewPointsSelector(
qdrant.NewIDNum(0),
qdrant.NewIDNum(3)),
})
```
Alternatively, you can use filters to delete payload keys from the points.
```http
POST /collections/{collection_name}/points/payload/delete
{
"keys": ["color", "price"],
"filter": {
"must": [
{
"key": "color",
"match": {
"value": "red"
}
}
]
}
}
```
```python
client.delete_payload(
collection_name="{collection_name}",
keys=["color", "price"],
points=models.Filter(
must=[
models.FieldCondition(
key="color",
match=models.MatchValue(value="red"),
),
],
),
)
```
```typescript
client.deletePayload("{collection_name}", {
keys: ["color", "price"],
filter: {
must: [
{
key: "color",
match: {
value: "red",
},
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, DeletePayloadPointsBuilder, Filter};
client
.delete_payload(
DeletePayloadPointsBuilder::new(
"{collection_name}",
vec!["color".to_string(), "price".to_string()],
)
.points_selector(Filter::must([Condition::matches(
"color",
"red".to_string(),
)]))
.wait(true),
)
.await?;
```
```java
import java.util.List;
import io.qdrant.client.grpc.Points.Filter;
import static io.qdrant.client.ConditionFactory.matchKeyword;
client
.deletePayloadAsync(
"{collection_name}",
List.of("color", "price"),
Filter.newBuilder().addMust(matchKeyword("color", "red")).build(),
true,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.DeletePayloadAsync(
collectionName: "{collection_name}",
keys: ["color", "price"],
filter: MatchKeyword("color", "red")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.DeletePayload(context.Background(), &qdrant.DeletePayloadPoints{
CollectionName: "{collection_name}",
Keys: []string{"color", "price"},
PointsSelector: qdrant.NewPointsSelectorFilter(
&qdrant.Filter{
Must: []*qdrant.Condition{qdrant.NewMatch("color", "red")},
},
),
})
```
## Payload indexing
To search more efficiently with filters, Qdrant allows you to create indexes for payload fields by specifying the field name and the type of data it contains.
The indexed fields also affect the vector index. See [Indexing](../indexing/) for details.
In practice, we recommend creating an index on the fields that could potentially constrain the results the most.
For example, an index on an object ID, which is unique for each record, will be much more efficient than an index on its color, which has only a few possible values.
In compound queries involving multiple fields, Qdrant will attempt to use the most restrictive index first.
To create an index for a field, use the following:
REST API ([Schema](https://api.qdrant.tech/api-reference/indexes/create-field-index))
```http
PUT /collections/{collection_name}/index
{
"field_name": "name_of_the_field_to_index",
"field_schema": "keyword"
}
```
```python
client.create_payload_index(
collection_name="{collection_name}",
field_name="name_of_the_field_to_index",
field_schema="keyword",
)
```
```typescript
client.createPayloadIndex("{collection_name}", {
field_name: "name_of_the_field_to_index",
field_schema: "keyword",
});
```
```rust
use qdrant_client::qdrant::{CreateFieldIndexCollectionBuilder, FieldType};
client
.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"name_of_the_field_to_index",
FieldType::Keyword,
)
.wait(true),
)
.await?;
```
```java
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
client.createPayloadIndexAsync(
"{collection_name}",
"name_of_the_field_to_index",
PayloadSchemaType.Keyword,
null,
true,
null,
null);
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "name_of_the_field_to_index"
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName: "name_of_the_field_to_index",
FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
})
```
Whether a field is indexed is reflected in the payload schema returned by the [collection info API](https://api.qdrant.tech/api-reference/collections/get-collection).
Payload schema example:
```json
{
"payload_schema": {
"property1": {
"data_type": "keyword"
},
"property2": {
"data_type": "integer"
}
}
}
```
| documentation/concepts/payload.md |
---
title: Collections
weight: 30
aliases:
- ../collections
- /concepts/collections/
- /documentation/frameworks/fondant/documentation/concepts/collections/
---
# Collections
A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have its own dimensionality and metric requirements.
Distance metrics are used to measure similarities among vectors.
The choice of metric depends on how the vectors were obtained and, in particular, on the method used to train the neural network encoder.
Qdrant supports the following popular metrics:
* Dot product: `Dot` - [[wiki]](https://en.wikipedia.org/wiki/Dot_product)
* Cosine similarity: `Cosine` - [[wiki]](https://en.wikipedia.org/wiki/Cosine_similarity)
* Euclidean distance: `Euclid` - [[wiki]](https://en.wikipedia.org/wiki/Euclidean_distance)
* Manhattan distance: `Manhattan` - [[wiki]](https://en.wikipedia.org/wiki/Taxicab_geometry)
<aside role="status">For search efficiency, Cosine similarity is implemented as dot-product over normalized vectors. Vectors are automatically normalized during upload.</aside>
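As a small illustrative sketch (plain NumPy, not the Qdrant API), the equivalence between cosine similarity and the dot product of normalized vectors looks like this:
```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

# Cosine similarity computed directly.
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Dot product of the pre-normalized vectors, which is what Qdrant effectively computes.
dot_of_normalized = np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))

assert np.isclose(cosine, dot_of_normalized)
```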
In addition to metrics and vector size, each collection uses its own set of parameters that control collection optimization, index construction, and vacuum.
These settings can be changed at any time by a corresponding request.
## Setting up multitenancy
**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called [multitenancy](https://en.wikipedia.org/wiki/Multitenancy). It is efficient for most users, but requires additional configuration. [Learn how to set it up](../../tutorials/multiple-partitions/).
**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.
## Create a collection
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 300,
"distance": "Cosine"
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"size": 300,
"distance": "Cosine"
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: { size: 100, distance: "Cosine" },
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(100, Distance::Cosine)),
)
.await?;
```
```java
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client = new QdrantClient(
QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.createCollectionAsync("{collection_name}",
VectorParams.newBuilder().setDistance(Distance.Cosine).setSize(100).build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 100,
Distance: qdrant.Distance_Cosine,
}),
})
```
In addition to the required options, you can also specify custom values for the following collection options:
* `hnsw_config` - see [indexing](../indexing/#vector-index) for details.
* `wal_config` - Write-Ahead-Log related configuration. See more details about [WAL](../storage/#versioning)
* `optimizers_config` - see [optimizer](../optimizer/) for details.
* `shard_number` - which defines how many shards the collection should have. See [distributed deployment](../../guides/distributed_deployment/#sharding) section for details.
* `on_disk_payload` - defines where to store payload data. If `true` - payload will be stored on disk only. Might be useful for limiting the RAM usage in case of large payload.
* `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details.
Default parameters for the optional collection parameters are defined in [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).
See [schema definitions](https://api.qdrant.tech/api-reference/collections/create-collection) and a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) for more information about collection and vector parameters.
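As a hedged illustration, several of these options combined in one Python call could look like the sketch below; the specific values are arbitrary examples, not recommendations:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
    shard_number=2,        # split the collection into two shards
    on_disk_payload=True,  # keep payload data on disk to save RAM
    hnsw_config=models.HnswConfigDiff(m=16, ef_construct=100),
    optimizers_config=models.OptimizersConfigDiff(default_segment_number=2),
)
```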
*Available as of v1.2.0*
Vectors all live in RAM for very quick access. The `on_disk` parameter can be
set in the vector configuration. If true, all vectors will live on disk. This
will enable the use of
[memmaps](../../concepts/storage/#configuring-memmap-storage),
which is suitable for ingesting a large amount of data.
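A minimal Python sketch of enabling `on_disk` in the vector configuration (all other parameters use defaults):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(
        size=100,
        distance=models.Distance.COSINE,
        on_disk=True,  # store the original vectors on disk instead of RAM
    ),
)
```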
### Create collection from another collection
*Available as of v1.0.0*
It is possible to initialize a collection from another existing collection.
This might be useful for experimenting quickly with different configurations for the same data set.
Make sure the new collection's vectors configuration uses the same `size` and `distance` function as the source collection.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 100,
"distance": "Cosine"
},
"init_from": {
"collection": "{from_collection_name}"
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"size": 300,
"distance": "Cosine"
},
"init_from": {
"collection": {from_collection_name}
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
init_from=models.InitFrom(collection="{from_collection_name}"),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: { size: 100, distance: "Cosine" },
init_from: { collection: "{from_collection_name}" },
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(VectorParamsBuilder::new(100, Distance::Cosine))
.init_from_collection("{from_collection_name}"),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(
VectorsConfig.newBuilder()
.setParams(
VectorParams.newBuilder()
.setSize(100)
.setDistance(Distance.Cosine)
.build()))
.setInitFromCollection("{from_collection_name}")
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine },
initFromCollection: "{from_collection_name}"
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 100,
Distance: qdrant.Distance_Cosine,
}),
InitFromCollection: qdrant.PtrOf("{from_collection_name}"),
})
```
### Collection with multiple vectors
*Available as of v0.10.0*
It is possible to have multiple vectors per record.
This feature allows for multiple vector storages per collection.
To distinguish vectors in one record, they should have a unique name defined when creating the collection.
Each named vector in this mode has its own distance and size:
```http
PUT /collections/{collection_name}
{
"vectors": {
"image": {
"size": 4,
"distance": "Dot"
},
"text": {
"size": 8,
"distance": "Cosine"
}
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"image": {
"size": 4,
"distance": "Dot"
},
"text": {
"size": 8,
"distance": "Cosine"
}
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config={
"image": models.VectorParams(size=4, distance=models.Distance.DOT),
"text": models.VectorParams(size=8, distance=models.Distance.COSINE),
},
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
image: { size: 4, distance: "Dot" },
text: { size: 8, distance: "Cosine" },
},
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, VectorParamsBuilder, VectorsConfigBuilder,
};
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut vectors_config = VectorsConfigBuilder::default();
vectors_config
.add_named_vector_params("image", VectorParamsBuilder::new(4, Distance::Dot).build());
vectors_config.add_named_vector_params(
"text",
VectorParamsBuilder::new(8, Distance::Cosine).build(),
);
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}").vectors_config(vectors_config),
)
.await?;
```
```java
import java.util.Map;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
"{collection_name}",
Map.of(
"image", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(),
"text",
VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParamsMap
{
Map =
{
["image"] = new VectorParams { Size = 4, Distance = Distance.Dot },
["text"] = new VectorParams { Size = 8, Distance = Distance.Cosine },
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfigMap(
map[string]*qdrant.VectorParams{
"image": {
Size: 4,
Distance: qdrant.Distance_Dot,
},
"text": {
Size: 8,
Distance: qdrant.Distance_Cosine,
},
}),
})
```
For rare use cases, it is possible to create a collection without any vector storage.
*Available as of v1.1.1*
For each named vector you can optionally specify
[`hnsw_config`](../indexing/#vector-index) or
[`quantization_config`](../../guides/quantization/#setting-up-quantization-in-qdrant) to
deviate from the collection configuration. This can be useful to fine-tune
search performance on a vector level.
*Available as of v1.2.0*
Vectors all live in RAM for very quick access. On a per-vector basis you can set
`on_disk` to true to store all vectors on disk at all times. This will enable
the use of
[memmaps](../../concepts/storage/#configuring-memmap-storage),
which is suitable for ingesting a large amount of data.
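As a hedged sketch that combines both per-vector overrides, a Python call could look like the following; the sizes and HNSW values are illustrative only:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config={
        # small "image" vectors stay in RAM with a custom HNSW graph
        "image": models.VectorParams(
            size=4,
            distance=models.Distance.DOT,
            hnsw_config=models.HnswConfigDiff(m=32, ef_construct=256),
        ),
        # larger "text" vectors are kept on disk
        "text": models.VectorParams(
            size=8,
            distance=models.Distance.COSINE,
            on_disk=True,
        ),
    },
)
```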
### Vector datatypes
*Available as of v1.9.0*
Some embedding providers may provide embeddings in a pre-quantized format.
One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings).
Qdrant has direct support for uint8 embeddings, which you can also use in combination with binary quantization.
To create a collection with uint8 embeddings, you can use the following configuration:
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 1024,
"distance": "Cosine",
"datatype": "uint8"
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"size": 1024,
"distance": "Cosine",
"datatype": "uint8"
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(
size=1024,
distance=models.Distance.COSINE,
datatype=models.Datatype.UINT8,
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
  vectors: { size: 1024, distance: "Cosine", datatype: "uint8" },
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{
CreateCollectionBuilder, Datatype, Distance, VectorParamsBuilder,
};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}").vectors_config(
VectorParamsBuilder::new(1024, Distance::Cosine).datatype(Datatype::Uint8),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.grpc.Collections.Datatype;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
QdrantClient client = new QdrantClient(
QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync("{collection_name}",
VectorParams.newBuilder()
.setSize(1024)
.setDistance(Distance.Cosine)
.setDatatype(Datatype.Uint8)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams {
Size = 1024, Distance = Distance.Cosine, Datatype = Datatype.Uint8
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 1024,
Distance: qdrant.Distance_Cosine,
Datatype: qdrant.Datatype_Uint8.Enum(),
}),
})
```
Vectors with `uint8` datatype are stored in a more compact format, which can save memory and improve search speed at the cost of some precision.
If you choose to use the `uint8` datatype, elements of the vector will be stored as unsigned 8-bit integers, which can take values **from 0 to 255**.
### Collection with sparse vectors
*Available as of v1.7.0*
Qdrant supports sparse vectors as a first-class citizen.
Sparse vectors are useful for text search, where each word is represented as a separate dimension.
Collections can contain sparse vectors as additional [named vectors](#collection-with-multiple-vectors) alongside regular dense vectors in a single point.
Unlike dense vectors, sparse vectors must be named.
Additionally, sparse and dense vectors must have different names within a collection.
```http
PUT /collections/{collection_name}
{
"sparse_vectors": {
"text": { },
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"sparse_vectors": {
"text": { }
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
sparse_vectors_config={
"text": models.SparseVectorParams(),
},
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
sparse_vectors: {
text: { },
},
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{
CreateCollectionBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder,
};
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut sparse_vector_config = SparseVectorsConfigBuilder::default();
sparse_vector_config.add_named_vector_params("text", SparseVectorParamsBuilder::default());
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.sparse_vectors_config(sparse_vector_config),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.SparseVectorConfig;
import io.qdrant.client.grpc.Collections.SparseVectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setSparseVectorsConfig(
SparseVectorConfig.newBuilder()
.putMap("text", SparseVectorParams.getDefaultInstance()))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
sparseVectorsConfig: ("text", new SparseVectorParams())
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_namee}",
SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
map[string]*qdrant.SparseVectorParams{
"text": {},
}),
})
```
Outside of a unique name, there are no required configuration parameters for sparse vectors.
The distance function for sparse vectors is always `Dot` and does not need to be specified.
However, there are optional parameters to tune the underlying [sparse vector index](../indexing/#sparse-vector-index).
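For example, a hedged Python sketch that keeps the sparse index on disk might look like this (the parameter value is illustrative):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    sparse_vectors_config={
        "text": models.SparseVectorParams(
            index=models.SparseIndexParams(
                on_disk=True,  # keep the sparse index on disk instead of RAM
            ),
        ),
    },
)
```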
### Check collection existence
*Available as of v1.8.0*
```http
GET http://localhost:6333/collections/{collection_name}/exists
```
```bash
curl -X GET http://localhost:6333/collections/{collection_name}/exists
```
```python
client.collection_exists(collection_name="{collection_name}")
```
```typescript
client.collectionExists("{collection_name}");
```
```rust
client.collection_exists("{collection_name}").await?;
```
```java
client.collectionExistsAsync("{collection_name}").get();
```
```csharp
await client.CollectionExistsAsync("{collection_name}");
```
```go
import "context"
client.CollectionExists(context.Background(), "my_collection")
```
### Delete collection
```http
DELETE http://localhost:6333/collections/{collection_name}
```
```bash
curl -X DELETE http://localhost:6333/collections/{collection_name}
```
```python
client.delete_collection(collection_name="{collection_name}")
```
```typescript
client.deleteCollection("{collection_name}");
```
```rust
client.delete_collection("{collection_name}").await?;
```
```java
client.deleteCollectionAsync("{collection_name}").get();
```
```csharp
await client.DeleteCollectionAsync("{collection_name}");
```
```go
import "context"
client.DeleteCollection(context.Background(), "{collection_name}")
```
### Update collection parameters
Dynamic parameter updates may be helpful, for example, for more efficient initial loading of vectors.
You can disable indexing during the upload process and enable it immediately after the upload is finished.
As a result, you will not waste extra computation resources on rebuilding the index.
The following command enables indexing for segments that have more than 10000 kB of vectors stored:
```http
PATCH /collections/{collection_name}
{
"optimizers_config": {
"indexing_threshold": 10000
}
}
```
```bash
curl -X PATCH http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"optimizers_config": {
"indexing_threshold": 10000
}
}'
```
```python
client.update_collection(
collection_name="{collection_name}",
optimizer_config=models.OptimizersConfigDiff(indexing_threshold=10000),
)
```
```typescript
client.updateCollection("{collection_name}", {
optimizers_config: {
indexing_threshold: 10000,
},
});
```
```rust
use qdrant_client::qdrant::{OptimizersConfigDiffBuilder, UpdateCollectionBuilder};
client
.update_collection(
UpdateCollectionBuilder::new("{collection_name}").optimizers_config(
OptimizersConfigDiffBuilder::default().indexing_threshold(10000),
),
)
.await?;
```
```java
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.UpdateCollection;
client.updateCollectionAsync(
UpdateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setOptimizersConfig(
OptimizersConfigDiff.newBuilder().setIndexingThreshold(10000).build())
.build());
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpdateCollectionAsync(
collectionName: "{collection_name}",
optimizersConfig: new OptimizersConfigDiff { IndexingThreshold = 10000 }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{
CollectionName: "{collection_name}",
OptimizersConfig: &qdrant.OptimizersConfigDiff{
IndexingThreshold: qdrant.PtrOf(uint64(10000)),
},
})
```
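Building on this, a common bulk-upload pattern is to disable indexing first and re-enable it once the upload is done. Here is a hedged Python sketch; the threshold values are examples, and setting the threshold to `0` disables indexing:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Disable indexing while uploading a large batch of points.
client.update_collection(
    collection_name="{collection_name}",
    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=0),
)

# ... upload points here ...

# Re-enable indexing once the upload is finished.
client.update_collection(
    collection_name="{collection_name}",
    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
)
```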
The following parameters can be updated:
* `optimizers_config` - see [optimizer](../optimizer/) for details.
* `hnsw_config` - see [indexing](../indexing/#vector-index) for details.
* `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details.
* `vectors` - vector-specific configuration, including individual `hnsw_config`, `quantization_config` and `on_disk` settings.
* `params` - other collection parameters, including `write_consistency_factor` and `on_disk_payload`.
Full API specification is available in [schema definitions](https://api.qdrant.tech/api-reference/collections/update-collection).
Calls to this endpoint may be blocking as they wait for existing optimizers to
finish. We recommend against using this in a production database as it may
introduce huge overhead due to the rebuilding of the index.
#### Update vector parameters
*Available as of v1.4.0*
<aside role="status">To update vector parameters using the collection update API, you must always specify a vector name. If your collection does not have named vectors, use an empty (<code>""</code>) name.</aside>
Qdrant 1.4 adds support for updating more collection parameters at runtime. HNSW
index, quantization and disk configurations can now be changed without
recreating a collection. Segments (with index and quantized data) will
automatically be rebuilt in the background to match updated parameters.
To put vector data on disk for a collection that **does not have** named vectors,
use `""` as name:
```http
PATCH /collections/{collection_name}
{
"vectors": {
"": {
"on_disk": true
}
}
}
```
```bash
curl -X PATCH http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"": {
"on_disk": true
}
}
}'
```
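A possible Python equivalent, as a sketch (it assumes the collection uses the default unnamed vector):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.update_collection(
    collection_name="{collection_name}",
    vectors_config={
        # the empty string addresses the default, unnamed vector
        "": models.VectorParamsDiff(on_disk=True),
    },
)
```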
To put vector data on disk for a collection that **does have** named vectors:
Note: To create a vector name, follow the procedure from our [Points](/documentation/concepts/points/#create-vector-name).
```http
PATCH /collections/{collection_name}
{
"vectors": {
"my_vector": {
"on_disk": true
}
}
}
```
```bash
curl -X PATCH http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"my_vector": {
"on_disk": true
}
}
}'
```
In the following example the HNSW index and quantization parameters are updated,
both for the whole collection, and for `my_vector` specifically:
```http
PATCH /collections/{collection_name}
{
"vectors": {
"my_vector": {
"hnsw_config": {
"m": 32,
"ef_construct": 123
},
"quantization_config": {
"product": {
"compression": "x32",
"always_ram": true
}
},
"on_disk": true
}
},
"hnsw_config": {
"ef_construct": 123
},
"quantization_config": {
"scalar": {
"type": "int8",
"quantile": 0.8,
"always_ram": false
}
}
}
```
```bash
curl -X PATCH http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"my_vector": {
"hnsw_config": {
"m": 32,
"ef_construct": 123
},
"quantization_config": {
"product": {
"compression": "x32",
"always_ram": true
}
},
"on_disk": true
}
},
"hnsw_config": {
"ef_construct": 123
},
"quantization_config": {
"scalar": {
"type": "int8",
"quantile": 0.8,
"always_ram": false
}
}
}'
```
```python
client.update_collection(
collection_name="{collection_name}",
vectors_config={
"my_vector": models.VectorParamsDiff(
hnsw_config=models.HnswConfigDiff(
m=32,
ef_construct=123,
),
quantization_config=models.ProductQuantization(
product=models.ProductQuantizationConfig(
compression=models.CompressionRatio.X32,
always_ram=True,
),
),
on_disk=True,
),
},
hnsw_config=models.HnswConfigDiff(
ef_construct=123,
),
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
quantile=0.8,
always_ram=False,
),
),
)
```
```typescript
client.updateCollection("{collection_name}", {
vectors: {
my_vector: {
hnsw_config: {
m: 32,
ef_construct: 123,
},
quantization_config: {
product: {
compression: "x32",
always_ram: true,
},
},
on_disk: true,
},
},
hnsw_config: {
ef_construct: 123,
},
quantization_config: {
scalar: {
type: "int8",
quantile: 0.8,
always_ram: true,
},
},
});
```
```rust
use std::collections::HashMap;
use qdrant_client::qdrant::{
quantization_config_diff::Quantization, vectors_config_diff::Config, HnswConfigDiffBuilder,
QuantizationType, ScalarQuantizationBuilder, UpdateCollectionBuilder, VectorParamsDiffBuilder,
VectorParamsDiffMap,
};
client
.update_collection(
UpdateCollectionBuilder::new("{collection_name}")
.hnsw_config(HnswConfigDiffBuilder::default().ef_construct(123))
.vectors_config(Config::ParamsMap(VectorParamsDiffMap {
map: HashMap::from([(
("my_vector".into()),
VectorParamsDiffBuilder::default()
.hnsw_config(HnswConfigDiffBuilder::default().m(32).ef_construct(123))
.build(),
)]),
}))
.quantization_config(Quantization::Scalar(
ScalarQuantizationBuilder::default()
.r#type(QuantizationType::Int8.into())
.quantile(0.8)
.always_ram(true)
.build(),
)),
)
.await?;
```
```java
import io.qdrant.client.grpc.Collections.HnswConfigDiff;
import io.qdrant.client.grpc.Collections.QuantizationConfigDiff;
import io.qdrant.client.grpc.Collections.QuantizationType;
import io.qdrant.client.grpc.Collections.ScalarQuantization;
import io.qdrant.client.grpc.Collections.UpdateCollection;
import io.qdrant.client.grpc.Collections.VectorParamsDiff;
import io.qdrant.client.grpc.Collections.VectorParamsDiffMap;
import io.qdrant.client.grpc.Collections.VectorsConfigDiff;
client
.updateCollectionAsync(
UpdateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setHnswConfig(HnswConfigDiff.newBuilder().setEfConstruct(123).build())
.setVectorsConfig(
VectorsConfigDiff.newBuilder()
.setParamsMap(
VectorParamsDiffMap.newBuilder()
.putMap(
"my_vector",
VectorParamsDiff.newBuilder()
.setHnswConfig(
HnswConfigDiff.newBuilder()
                                            .setM(32)
.setEfConstruct(123)
.build())
.build())))
.setQuantizationConfig(
QuantizationConfigDiff.newBuilder()
.setScalar(
ScalarQuantization.newBuilder()
.setType(QuantizationType.Int8)
.setQuantile(0.8f)
.setAlwaysRam(true)
.build()))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpdateCollectionAsync(
collectionName: "{collection_name}",
hnswConfig: new HnswConfigDiff { EfConstruct = 123 },
vectorsConfig: new VectorParamsDiffMap
{
Map =
{
{
"my_vector",
new VectorParamsDiff
{
                HnswConfig = new HnswConfigDiff { M = 32, EfConstruct = 123 }
}
}
}
},
quantizationConfig: new QuantizationConfigDiff
{
Scalar = new ScalarQuantization
{
Type = QuantizationType.Int8,
Quantile = 0.8f,
AlwaysRam = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfigDiffMap(
map[string]*qdrant.VectorParamsDiff{
"my_vector": {
HnswConfig: &qdrant.HnswConfigDiff{
				M:           qdrant.PtrOf(uint64(32)),
EfConstruct: qdrant.PtrOf(uint64(123)),
},
},
}),
QuantizationConfig: qdrant.NewQuantizationDiffScalar(
&qdrant.ScalarQuantization{
Type: qdrant.QuantizationType_Int8,
Quantile: qdrant.PtrOf(float32(0.8)),
AlwaysRam: qdrant.PtrOf(true),
}),
})
```
## Collection info
Qdrant allows determining the configuration parameters of an existing collection to better understand how the points are
distributed and indexed.
```http
GET /collections/{collection_name}
```
```bash
curl -X GET http://localhost:6333/collections/{collection_name}
```
```python
client.get_collection(collection_name="{collection_name}")
```
```typescript
client.getCollection("{collection_name}");
```
```rust
client.collection_info("{collection_name}").await?;
```
```java
client.getCollectionInfoAsync("{collection_name}").get();
```
```csharp
await client.GetCollectionInfoAsync("{collection_name}");
```
```go
import "context"
client.GetCollectionInfo(context.Background(), "{collection_name}")
```
<details>
<summary>Expected result</summary>
```json
{
"result": {
"status": "green",
"optimizer_status": "ok",
"vectors_count": 1068786,
"indexed_vectors_count": 1024232,
"points_count": 1068786,
"segments_count": 31,
"config": {
"params": {
"vectors": {
"size": 384,
"distance": "Cosine"
},
"shard_number": 1,
"replication_factor": 1,
"write_consistency_factor": 1,
"on_disk_payload": false
},
"hnsw_config": {
"m": 16,
"ef_construct": 100,
"full_scan_threshold": 10000,
"max_indexing_threads": 0
},
"optimizer_config": {
"deleted_threshold": 0.2,
"vacuum_min_vector_number": 1000,
"default_segment_number": 0,
"max_segment_size": null,
"memmap_threshold": null,
"indexing_threshold": 20000,
"flush_interval_sec": 5,
"max_optimization_threads": 1
},
"wal_config": {
"wal_capacity_mb": 32,
"wal_segments_ahead": 0
}
},
"payload_schema": {}
},
"status": "ok",
"time": 0.00010143
}
```
</details>
If you insert the vectors into the collection, the `status` field may become
`yellow` whilst it is optimizing. It will become `green` once all the points are
successfully processed.
The following color statuses are possible:
- 🟢 `green`: collection is ready
- 🟡 `yellow`: collection is optimizing
- ⚫ `grey`: collection is pending optimization ([help](#grey-collection-status))
- 🔴 `red`: an error occurred which the engine could not recover from
### Grey collection status
_Available as of v1.9.0_
A collection may have the grey ⚫ status or show "optimizations pending,
awaiting update operation" as optimization status. This state is normally caused
by restarting a Qdrant instance while optimizations were ongoing.
It means the collection has optimizations pending, but they are paused. You must
send any update operation to trigger and start the optimizations again.
For example:
```http
PATCH /collections/{collection_name}
{
"optimizers_config": {}
}
```
```bash
curl -X PATCH http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"optimizers_config": {}
}'
```
```python
client.update_collection(
collection_name="{collection_name}",
optimizer_config=models.OptimizersConfigDiff(),
)
```
```typescript
client.updateCollection("{collection_name}", {
optimizers_config: {},
});
```
```rust
use qdrant_client::qdrant::{OptimizersConfigDiffBuilder, UpdateCollectionBuilder};
client
.update_collection(
UpdateCollectionBuilder::new("{collection_name}")
.optimizers_config(OptimizersConfigDiffBuilder::default()),
)
.await?;
```
```java
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.UpdateCollection;
client.updateCollectionAsync(
UpdateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setOptimizersConfig(
OptimizersConfigDiff.getDefaultInstance())
.build());
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpdateCollectionAsync(
collectionName: "{collection_name}",
optimizersConfig: new OptimizersConfigDiff { }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{
CollectionName: "{collection_name}",
OptimizersConfig: &qdrant.OptimizersConfigDiff{},
})
```
### Approximate point and vector counts
You may be interested in the count attributes:
- `points_count` - total number of objects (vectors and their payloads) stored in the collection
- `vectors_count` - total number of vectors in a collection, useful if you have multiple vectors per point
- `indexed_vectors_count` - total number of vectors stored in the HNSW or sparse index. Qdrant does not store all the vectors in the index, only those in segments where an index could be built for the given configuration.
The above counts are not exact, but should be considered approximate. Depending
on how you use Qdrant, these may give very different numbers than you might
expect. It is therefore important **not** to rely on them.
More specifically, these numbers represent the count of points and vectors in
Qdrant's internal storage. Internally, Qdrant may temporarily duplicate points
as part of automatic optimizations. It may keep changed or deleted points for a
bit. And it may delay indexing of new points. All of that is for optimization
reasons.
Updates you do are therefore not directly reflected in these numbers. If you see
a wildly different count of points, it will likely resolve itself once a new
round of automatic optimizations has completed.
To clarify: these numbers don't represent the exact amount of points or vectors
you have inserted, nor do they represent the exact number of distinguishable
points or vectors you can query. If you want to know exact counts, refer to the
[count API](../points/#counting-points).
_Note: these numbers may be removed in a future version of Qdrant._
### Indexing vectors in HNSW
In some cases, you might be surprised that the value of `indexed_vectors_count` is lower than `vectors_count`. This is intended behaviour and
depends on the [optimizer configuration](../optimizer/). A new index segment is built only if the size of non-indexed vectors is higher than the
value of `indexing_threshold` (in kB). If your collection is very small or the dimensionality of the vectors is low, there might be no HNSW segment
created and `indexed_vectors_count` might be equal to `0`.
It is possible to reduce the `indexing_threshold` for an existing collection by [updating collection parameters](#update-collection-parameters).
## Collection aliases
In a production environment, it is sometimes necessary to switch different versions of vectors seamlessly.
For example, when upgrading to a new version of the neural network.
There is no way to stop the service and rebuild the collection with new vectors in these situations.
Aliases are additional names for existing collections.
All queries to the collection can also be done identically, using an alias instead of the collection name.
Thus, it is possible to build a second collection in the background and then switch the alias from the old collection to the new one.
Since all changes of aliases happen atomically, no concurrent requests will be affected during the switch.
### Create alias
```http
POST /collections/aliases
{
"actions": [
{
"create_alias": {
"collection_name": "example_collection",
"alias_name": "production_collection"
}
}
]
}
```
```bash
curl -X POST http://localhost:6333/collections/aliases \
-H 'Content-Type: application/json' \
--data-raw '{
"actions": [
{
"create_alias": {
"collection_name": "example_collection",
"alias_name": "production_collection"
}
}
]
}'
```
```python
client.update_collection_aliases(
change_aliases_operations=[
models.CreateAliasOperation(
create_alias=models.CreateAlias(
collection_name="example_collection", alias_name="production_collection"
)
)
]
)
```
```typescript
client.updateCollectionAliases({
actions: [
{
create_alias: {
collection_name: "example_collection",
alias_name: "production_collection",
},
},
],
});
```
```rust
use qdrant_client::qdrant::CreateAliasBuilder;
client
.create_alias(CreateAliasBuilder::new(
"example_collection",
"production_collection",
))
.await?;
```
```java
client.createAliasAsync("production_collection", "example_collection").get();
```
```csharp
await client.CreateAliasAsync(aliasName: "production_collection", collectionName: "example_collection");
```
```go
import "context"
client.CreateAlias(context.Background(), "production_collection", "example_collection")
```
### Remove alias
```bash
curl -X POST http://localhost:6333/collections/aliases \
-H 'Content-Type: application/json' \
--data-raw '{
"actions": [
{
"delete_alias": {
"alias_name": "production_collection"
}
}
]
}'
```
```http
POST /collections/aliases
{
"actions": [
{
"delete_alias": {
"alias_name": "production_collection"
}
}
]
}
```
```python
client.update_collection_aliases(
change_aliases_operations=[
models.DeleteAliasOperation(
delete_alias=models.DeleteAlias(alias_name="production_collection")
),
]
)
```
```typescript
client.updateCollectionAliases({
actions: [
{
delete_alias: {
alias_name: "production_collection",
},
},
],
});
```
```rust
client.delete_alias("production_collection").await?;
```
```java
client.deleteAliasAsync("production_collection").get();
```
```csharp
await client.DeleteAliasAsync("production_collection");
```
```go
import "context"
client.DeleteAlias(context.Background(), "production_collection")
```
### Switch collection
Multiple alias actions are performed atomically.
For example, you can switch underlying collection with the following command:
```http
POST /collections/aliases
{
"actions": [
{
"delete_alias": {
"alias_name": "production_collection"
}
},
{
"create_alias": {
"collection_name": "example_collection",
"alias_name": "production_collection"
}
}
]
}
```
```bash
curl -X POST http://localhost:6333/collections/aliases \
-H 'Content-Type: application/json' \
--data-raw '{
"actions": [
{
"delete_alias": {
"alias_name": "production_collection"
}
},
{
"create_alias": {
"collection_name": "example_collection",
"alias_name": "production_collection"
}
}
]
}'
```
```python
client.update_collection_aliases(
change_aliases_operations=[
models.DeleteAliasOperation(
delete_alias=models.DeleteAlias(alias_name="production_collection")
),
models.CreateAliasOperation(
create_alias=models.CreateAlias(
collection_name="example_collection", alias_name="production_collection"
)
),
]
)
```
```typescript
client.updateCollectionAliases({
actions: [
{
delete_alias: {
alias_name: "production_collection",
},
},
{
create_alias: {
collection_name: "example_collection",
alias_name: "production_collection",
},
},
],
});
```
```rust
use qdrant_client::qdrant::CreateAliasBuilder;
client.delete_alias("production_collection").await?;
client
.create_alias(CreateAliasBuilder::new(
"example_collection",
"production_collection",
))
.await?;
```
```java
client.deleteAliasAsync("production_collection").get();
client.createAliasAsync("production_collection", "example_collection").get();
```
```csharp
await client.DeleteAliasAsync("production_collection");
await client.CreateAliasAsync(aliasName: "production_collection", collectionName: "example_collection");
```
```go
import "context"
client.DeleteAlias(context.Background(), "production_collection")
client.CreateAlias(context.Background(), "production_collection", "example_collection")
```
### List collection aliases
```http
GET /collections/{collection_name}/aliases
```
```bash
curl -X GET http://localhost:6333/collections/{collection_name}/aliases
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.get_collection_aliases(collection_name="{collection_name}")
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.getCollectionAliases("{collection_name}");
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.list_collection_aliases("{collection_name}").await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.listCollectionAliasesAsync("{collection_name}").get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.ListCollectionAliasesAsync("{collection_name}");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.ListCollectionAliases(context.Background(), "{collection_name}")
```
### List all aliases
```http
GET /aliases
```
```bash
curl -X GET http://localhost:6333/aliases
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.get_aliases()
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.getAliases();
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.list_aliases().await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.listAliasesAsync().get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.ListAliasesAsync();
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.ListAliases(context.Background())
```
### List all collections
```http
GET /collections
```
```bash
curl -X GET http://localhost:6333/collections
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.get_collections()
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.getCollections();
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.list_collections().await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.listCollectionsAsync().get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.ListCollectionsAsync();
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.ListCollections(context.Background())
```
| documentation/concepts/collections.md |
---
title: Indexing
weight: 90
aliases:
- ../indexing
---
# Indexing
A key feature of Qdrant is the effective combination of vector and traditional indexes. It is essential because, for vector search to work effectively with filters, having only a vector index is not enough. In simpler terms, a vector index speeds up vector search, and payload indexes speed up filtering.
The indexes in the segments exist independently, but the parameters of the indexes themselves are configured for the whole collection.
Not all segments automatically have indexes.
Their necessity is determined by the [optimizer](../optimizer/) settings and depends, as a rule, on the number of stored points.
## Payload Index
Payload index in Qdrant is similar to the index in conventional document-oriented databases.
This index is built for a specific field and type, and is used to quickly retrieve points by the corresponding filtering condition.
The index is also used to accurately estimate the filter cardinality, which helps the [query planning](../search/#query-planning) choose a search strategy.
Creating an index requires additional computational resources and memory, so choosing fields to be indexed is essential. Qdrant does not make this choice but grants it to the user.
To mark a field as indexable, you can use the following:
```http
PUT /collections/{collection_name}/index
{
"field_name": "name_of_the_field_to_index",
"field_schema": "keyword"
}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.create_payload_index(
collection_name="{collection_name}",
field_name="name_of_the_field_to_index",
field_schema="keyword",
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createPayloadIndex("{collection_name}", {
field_name: "name_of_the_field_to_index",
field_schema: "keyword",
});
```
```rust
use qdrant_client::qdrant::{CreateFieldIndexCollectionBuilder, FieldType};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_field_index(CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"name_of_the_field_to_index",
FieldType::Keyword,
))
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"name_of_the_field_to_index",
PayloadSchemaType.Keyword,
null,
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(collectionName: "{collection_name}", fieldName: "name_of_the_field_to_index");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName: "name_of_the_field_to_index",
FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
})
```
You can use dot notation to specify a nested field for indexing. Similar to specifying [nested filters](../filtering/#nested-key).
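For example, indexing a hypothetical nested `country.name` field could look like this in Python (the field name is illustrative):
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Index a nested field using dot notation.
client.create_payload_index(
    collection_name="{collection_name}",
    field_name="country.name",
    field_schema="keyword",
)
```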
Available field types are:
* `keyword` - for [keyword](../payload/#keyword) payload, affects [Match](../filtering/#match) filtering conditions.
* `integer` - for [integer](../payload/#integer) payload, affects [Match](../filtering/#match) and [Range](../filtering/#range) filtering conditions.
* `float` - for [float](../payload/#float) payload, affects [Range](../filtering/#range) filtering conditions.
* `bool` - for [bool](../payload/#bool) payload, affects [Match](../filtering/#match) filtering conditions (available as of v1.4.0).
* `geo` - for [geo](../payload/#geo) payload, affects [Geo Bounding Box](../filtering/#geo-bounding-box) and [Geo Radius](../filtering/#geo-radius) filtering conditions.
* `datetime` - for [datetime](../payload/#datetime) payload, affects [Range](../filtering/#range) filtering conditions (available as of v1.8.0).
* `text` - a special kind of index, available for [keyword](../payload/#keyword) / string payloads, affects [Full Text search](../filtering/#full-text-match) filtering conditions.
* `uuid` - a special type of index, similar to `keyword`, but optimized for [UUID values](../payload/#uuid).
Affects [Match](../filtering/#match) filtering conditions. (available as of v1.11.0)
A payload index may occupy some additional memory, so it is recommended to only index fields that are used in filtering conditions.
If you need to filter by many fields and the memory limit does not allow indexing all of them, it is recommended to choose the field that narrows down the search result the most.
As a rule, the more distinct values a payload field has, the more efficiently the index will be used.
### Full-text index
*Available as of v0.10.0*
Qdrant supports full-text search for string payload.
Full-text index allows you to filter points by the presence of a word or a phrase in the payload field.
Full-text index configuration is a bit more complex than other indexes, as you can specify the tokenization parameters.
Tokenization is the process of splitting a string into tokens, which are then indexed in the inverted index.
To create a full-text index, you can use the following:
```http
PUT /collections/{collection_name}/index
{
"field_name": "name_of_the_field_to_index",
"field_schema": {
"type": "text",
"tokenizer": "word",
"min_token_len": 2,
"max_token_len": 20,
"lowercase": true
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_payload_index(
collection_name="{collection_name}",
field_name="name_of_the_field_to_index",
field_schema=models.TextIndexParams(
type="text",
tokenizer=models.TokenizerType.WORD,
min_token_len=2,
max_token_len=15,
lowercase=True,
),
)
```
```typescript
import { QdrantClient, Schemas } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createPayloadIndex("{collection_name}", {
field_name: "name_of_the_field_to_index",
field_schema: {
type: "text",
tokenizer: "word",
min_token_len: 2,
max_token_len: 15,
lowercase: true,
},
});
```
```rust
use qdrant_client::qdrant::{
payload_index_params::IndexParams, CreateFieldIndexCollectionBuilder, FieldType,
PayloadIndexParams, TextIndexParams, TokenizerType,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"name_of_the_field_to_index",
FieldType::Text,
)
.field_index_params(PayloadIndexParams {
index_params: Some(IndexParams::TextIndexParams(TextIndexParams {
tokenizer: TokenizerType::Word as i32,
min_token_len: Some(2),
max_token_len: Some(10),
lowercase: Some(true),
})),
}),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadIndexParams;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
import io.qdrant.client.grpc.Collections.TextIndexParams;
import io.qdrant.client.grpc.Collections.TokenizerType;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"name_of_the_field_to_index",
PayloadSchemaType.Text,
PayloadIndexParams.newBuilder()
.setTextIndexParams(
TextIndexParams.newBuilder()
.setTokenizer(TokenizerType.Word)
.setMinTokenLen(2)
.setMaxTokenLen(10)
.setLowercase(true)
.build())
.build(),
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "name_of_the_field_to_index",
schemaType: PayloadSchemaType.Text,
indexParams: new PayloadIndexParams
{
TextIndexParams = new TextIndexParams
{
Tokenizer = TokenizerType.Word,
MinTokenLen = 2,
MaxTokenLen = 10,
Lowercase = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName: "name_of_the_field_to_index",
FieldType: qdrant.FieldType_FieldTypeText.Enum(),
FieldIndexParams: qdrant.NewPayloadIndexParamsText(
&qdrant.TextIndexParams{
Tokenizer:   qdrant.TokenizerType_Word,
MinTokenLen: qdrant.PtrOf(uint64(2)),
MaxTokenLen: qdrant.PtrOf(uint64(10)),
Lowercase: qdrant.PtrOf(true),
}),
})
```
Available tokenizers are:
* `word` - splits the string into words, separated by spaces, punctuation marks, and special characters.
* `whitespace` - splits the string into words, separated by spaces.
* `prefix` - splits the string into words, separated by spaces, punctuation marks, and special characters, and then creates a prefix index for each word. For example: `hello` will be indexed as `h`, `he`, `hel`, `hell`, `hello`.
* `multilingual` - a special type of tokenizer based on the [charabia](https://github.com/meilisearch/charabia) package. It allows proper tokenization and lemmatization for multiple languages, including those with non-Latin alphabets and non-space delimiters. See the [charabia documentation](https://github.com/meilisearch/charabia) for the full list of supported languages and normalization options. In the default build configuration, Qdrant does not include support for all languages, to keep the size of the resulting binary down. Chinese, Japanese and Korean are not enabled by default, but can be enabled by building Qdrant from source with the `--features multiling-chinese,multiling-japanese,multiling-korean` flags.
See [Full Text match](../filtering/#full-text-match) for examples of querying with full-text index.
### Parameterized index
*Available as of v1.8.0*
We've added a parameterized variant to the `integer` index, which allows
you to fine-tune indexing and search performance.
Both the regular and parameterized `integer` indexes use the following flags:
- `lookup`: enables support for direct lookup using
[Match](/documentation/concepts/filtering/#match) filters.
- `range`: enables support for
[Range](/documentation/concepts/filtering/#range) filters.
The regular `integer` index assumes both `lookup` and `range` are `true`. In
contrast, to configure a parameterized index, you set only one of these
flags to `true`:
| `lookup` | `range` | Result |
|----------|---------|-----------------------------|
| `true` | `true` | Regular integer index |
| `true` | `false` | Parameterized integer index |
| `false` | `true` | Parameterized integer index |
| `false` | `false` | No integer index |
The parameterized index can enhance performance in collections with millions
of points. We encourage you to try it out. If it does not enhance performance
in your use case, you can always restore the regular `integer` index.
Note: If you set `"lookup": true` with a range filter, that may lead to
significant performance issues.
For example, the following code sets up a parameterized integer index which
supports only range filters:
```http
PUT /collections/{collection_name}/index
{
"field_name": "name_of_the_field_to_index",
"field_schema": {
"type": "integer",
"lookup": false,
"range": true
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_payload_index(
collection_name="{collection_name}",
field_name="name_of_the_field_to_index",
field_schema=models.IntegerIndexParams(
type=models.IntegerIndexType.INTEGER,
lookup=False,
range=True,
),
)
```
```typescript
import { QdrantClient, Schemas } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createPayloadIndex("{collection_name}", {
field_name: "name_of_the_field_to_index",
field_schema: {
type: "integer",
lookup: false,
range: true,
},
});
```
```rust
use qdrant_client::qdrant::{
payload_index_params::IndexParams, CreateFieldIndexCollectionBuilder, FieldType,
IntegerIndexParams, PayloadIndexParams,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"name_of_the_field_to_index",
FieldType::Integer,
)
.field_index_params(PayloadIndexParams {
index_params: Some(IndexParams::IntegerIndexParams(IntegerIndexParams {
lookup: false,
range: true,
})),
}),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.IntegerIndexParams;
import io.qdrant.client.grpc.Collections.PayloadIndexParams;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"name_of_the_field_to_index",
PayloadSchemaType.Integer,
PayloadIndexParams.newBuilder()
.setIntegerIndexParams(
IntegerIndexParams.newBuilder().setLookup(false).setRange(true).build())
.build(),
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "name_of_the_field_to_index",
schemaType: PayloadSchemaType.Integer,
indexParams: new PayloadIndexParams
{
IntegerIndexParams = new()
{
Lookup = false,
Range = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName: "name_of_the_field_to_index",
FieldType: qdrant.FieldType_FieldTypeInteger.Enum(),
FieldIndexParams: qdrant.NewPayloadIndexParamsInt(
&qdrant.IntegerIndexParams{
Lookup: false,
Range: true,
}),
})
```
### On-disk payload index
*Available as of v1.11.0*
By default, all payload-related structures are stored in memory. In this way, the vector index can quickly access payload values during search.
As latency is critical in this case, it is recommended to keep hot payload indexes in memory.
There are, however, cases when payload indexes are too large or rarely used. In those cases, it is possible to store payload indexes on disk.
<aside role="alert">
On-disk payload index might affect cold requests latency, as it requires additional disk I/O operations.
</aside>
To configure on-disk payload index, you can use the following index parameters:
```http
PUT /collections/{collection_name}/index
{
"field_name": "payload_field_name",
"field_schema": {
"type": "keyword",
"on_disk": true
}
}
```
```python
client.create_payload_index(
collection_name="{collection_name}",
field_name="payload_field_name",
field_schema=models.KeywordIndexParams(
type="keyword",
on_disk=True,
),
)
```
```typescript
client.createPayloadIndex("{collection_name}", {
field_name: "payload_field_name",
field_schema: {
type: "keyword",
on_disk: true
},
});
```
```rust
use qdrant_client::qdrant::{
CreateFieldIndexCollectionBuilder,
KeywordIndexParamsBuilder,
FieldType
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"payload_field_name",
FieldType::Keyword,
)
.field_index_params(
KeywordIndexParamsBuilder::default()
.on_disk(true),
),
);
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadIndexParams;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
import io.qdrant.client.grpc.Collections.KeywordIndexParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"payload_field_name",
PayloadSchemaType.Keyword,
PayloadIndexParams.newBuilder()
.setKeywordIndexParams(
KeywordIndexParams.newBuilder()
.setOnDisk(true)
.build())
.build(),
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "payload_field_name",
schemaType: PayloadSchemaType.Keyword,
indexParams: new PayloadIndexParams
{
KeywordIndexParams = new KeywordIndexParams
{
OnDisk = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName:      "payload_field_name",
FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
FieldIndexParams: qdrant.NewPayloadIndexParamsKeyword(
&qdrant.KeywordIndexParams{
OnDisk: qdrant.PtrOf(true),
}),
})
```
On-disk payload index is supported for the following types:
* `keyword`
* `integer`
* `float`
* `datetime`
* `uuid`
The list will be extended in future versions.
### Tenant Index
*Available as of v1.11.0*
Many vector search use cases require multitenancy. In a multi-tenant scenario, the collection is expected to contain multiple subsets of data, where each subset belongs to a different tenant.
Qdrant supports efficient multi-tenant search by enabling a [special configuration](../guides/multiple-partitions/) of the vector index, which disables global search and only builds sub-indexes for each tenant.
<aside role="note">
In Qdrant, tenants are not necessarily non-overlapping. It is possible to have subsets of data that belong to multiple tenants.
</aside>
However, knowing that the collection contains multiple tenants unlocks more opportunities for optimization.
To optimize storage in Qdrant further, you can enable tenant indexing for payload fields.
This option will tell Qdrant which fields are used for tenant identification and will allow Qdrant to structure storage for faster search of tenant-specific data.
One example of such optimization is localizing tenant-specific data closer on disk, which will reduce the number of disk reads during search.
To enable tenant index for a field, you can use the following index parameters:
```http
PUT /collections/{collection_name}/index
{
"field_name": "payload_field_name",
"field_schema": {
"type": "keyword",
"is_tenant": true
}
}
```
```python
client.create_payload_index(
collection_name="{collection_name}",
field_name="payload_field_name",
field_schema=models.KeywordIndexParams(
type="keyword",
is_tenant=True,
),
)
```
```typescript
client.createPayloadIndex("{collection_name}", {
field_name: "payload_field_name",
field_schema: {
type: "keyword",
is_tenant: true
},
});
```
```rust
use qdrant_client::qdrant::{
CreateFieldIndexCollectionBuilder,
KeywordIndexParamsBuilder,
FieldType
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"payload_field_name",
FieldType::Keyword,
)
.field_index_params(
KeywordIndexParamsBuilder::default()
.is_tenant(true),
),
);
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadIndexParams;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
import io.qdrant.client.grpc.Collections.KeywordIndexParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"payload_field_name",
PayloadSchemaType.Keyword,
PayloadIndexParams.newBuilder()
.setKeywordIndexParams(
KeywordIndexParams.newBuilder()
.setIsTenant(true)
.build())
.build(),
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "payload_field_name",
schemaType: PayloadSchemaType.Keyword,
indexParams: new PayloadIndexParams
{
KeywordIndexParams = new KeywordIndexParams
{
IsTenant = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName:      "payload_field_name",
FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
FieldIndexParams: qdrant.NewPayloadIndexParamsKeyword(
&qdrant.KeywordIndexParams{
IsTenant: qdrant.PtrOf(true),
}),
})
```
Tenant optimization is supported for the following datatypes:
* `keyword`
* `uuid`
### Principal Index
*Available as of v1.11.0*
Similar to the tenant index, the principal index is used to optimize storage for faster search, assuming that the search request is primarily filtered by the principal field.
A good example of a use case for the principal index is time-related data, where each point is associated with a timestamp. In this case, the principal index can be used to optimize storage for faster search with time-based filters.
```http
PUT /collections/{collection_name}/index
{
"field_name": "timestamp",
"field_schema": {
"type": "integer",
"is_principal": true
}
}
```
```python
client.create_payload_index(
collection_name="{collection_name}",
field_name="timestamp",
field_schema=models.IntegerIndexParams(
type="integer",
is_principal=True,
),
)
```
```typescript
client.createPayloadIndex("{collection_name}", {
field_name: "timestamp",
field_schema: {
type: "integer",
is_principal: true
},
});
```
```rust
use qdrant_client::qdrant::{
CreateFieldIndexCollectionBuilder,
IntegerIndexParamsBuilder,
FieldType
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.create_field_index(
CreateFieldIndexCollectionBuilder::new(
"{collection_name}",
"timestamp",
FieldType::Integer,
)
.field_index_params(
IntegerIndexParamsBuilder::default()
.is_principal(true),
),
);
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadIndexParams;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;
import io.qdrant.client.grpc.Collections.IntegerIndexParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createPayloadIndexAsync(
"{collection_name}",
"timestamp",
PayloadSchemaType.Integer,
PayloadIndexParams.newBuilder()
.setIntegerIndexParams(
IntegerIndexParams.newBuilder()
    .setIsPrincipal(true)
.build())
.build(),
null,
null,
null)
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreatePayloadIndexAsync(
collectionName: "{collection_name}",
fieldName: "timestamp",
schemaType: PayloadSchemaType.Integer,
indexParams: new PayloadIndexParams
{
IntegerIndexParams = new IntegerIndexParams
{
IsPrincipal = true
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
CollectionName: "{collection_name}",
FieldName:      "timestamp",
FieldType: qdrant.FieldType_FieldTypeInteger.Enum(),
FieldIndexParams: qdrant.NewPayloadIndexParamsInt(
&qdrant.IntegerIndexParams{
IsPrincipal: qdrant.PtrOf(true),
}),
})
```
Principal optimization is supported for the following types:
* `integer`
* `float`
* `datetime`
## Vector Index
A vector index is a data structure built on vectors through a specific mathematical model.
Through the vector index, we can efficiently query several vectors similar to the target vector.
Qdrant currently only uses HNSW as a dense vector index.
[HNSW](https://arxiv.org/abs/1603.09320) (Hierarchical Navigable Small World Graph) is a graph-based indexing algorithm. It builds a multi-layer navigation structure over the stored vectors according to certain rules. In this structure, the upper layers are more sparse and the distances between nodes are farther. The lower layers are denser and the distances between nodes are closer. The search starts from the uppermost layer, finds the node closest to the target in this layer, and then enters the next layer to begin another search. After multiple iterations, it can quickly approach the target position.
In order to improve performance, HNSW limits the maximum degree of nodes on each layer of the graph to `m`. In addition, you can use `ef_construct` (when building index) or `ef` (when searching targets) to specify a search range.
The corresponding parameters could be configured in the configuration file:
```yaml
storage:
# Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
hnsw_index:
# Number of edges per node in the index graph.
# Larger the value - more accurate the search, more space required.
m: 16
# Number of neighbours to consider during the index building.
# Larger the value - more accurate the search, more time required to build index.
ef_construct: 100
# Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
# If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
# in this case full-scan search should be preferred by query planner and additional indexing is not required.
# Note: 1Kb = 1 vector of size 256
full_scan_threshold: 10000
```
The same parameters can also be set during the creation of a [collection](../collections/). The `ef` parameter is configured during [the search](../search/) and by default is equal to `ef_construct`.
HNSW is chosen for several reasons.
First, HNSW is well-compatible with the modification that allows Qdrant to use filters during a search.
Second, it is one of the most accurate and fastest algorithms, according to [public benchmarks](https://github.com/erikbern/ann-benchmarks).
*Available as of v1.1.1*
The HNSW parameters can also be configured on a collection and named vector
level by setting [`hnsw_config`](../indexing/#vector-index) to fine-tune search
performance.
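As a minimal Python sketch of such a collection-level override (the vector size and distance here are placeholder assumptions):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    # Placeholder vector configuration for this sketch
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    # Override the default HNSW parameters for this collection only
    hnsw_config=models.HnswConfigDiff(
        m=32,              # more edges per node: better accuracy, more memory
        ef_construct=200,  # larger construction beam: better accuracy, slower indexing
    ),
)
```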
## Sparse Vector Index
*Available as of v1.7.0*
Sparse vectors in Qdrant are indexed with a special data structure, which is optimized for vectors that have a high proportion of zeroes. In some ways, this indexing method is similar to the inverted index, which is used in text search engines.
- A sparse vector index in Qdrant is exact, meaning it does not use any approximation algorithms.
- All sparse vectors added to the collection are immediately indexed in the mutable version of a sparse index.
With Qdrant, you can benefit from a more compact and efficient immutable sparse index, which is constructed during the same optimization process as the dense vector index.
This approach is particularly useful for collections storing both dense and sparse vectors.
To configure a sparse vector index, create a collection with the following parameters:
```http
PUT /collections/{collection_name}
{
"sparse_vectors": {
"text": {
"index": {
"on_disk": false
}
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
sparse_vectors_config={
    "text": models.SparseVectorParams(
        index=models.SparseIndexParams(
            on_disk=False,
        ),
    ),
},
)
```
```typescript
import { QdrantClient, Schemas } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
sparse_vectors: {
"splade-model-name": {
index: {
on_disk: false
}
}
}
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, SparseIndexConfigBuilder, SparseVectorParamsBuilder,
SparseVectorsConfigBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();
sparse_vectors_config.add_named_vector_params(
"splade-model-name",
SparseVectorParamsBuilder::default()
.index(SparseIndexConfigBuilder::default().on_disk(false)),
);
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.sparse_vectors_config(sparse_vectors_config),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections;
QdrantClient client = new QdrantClient(
QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.createCollectionAsync(
Collections.CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setSparseVectorsConfig(
Collections.SparseVectorConfig.newBuilder().putMap(
"splade-model-name",
Collections.SparseVectorParams.newBuilder()
.setIndex(
Collections.SparseIndexConfig
.newBuilder()
.setOnDisk(false)
.build()
).build()
).build()
).build()
).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
sparseVectorsConfig: ("splade-model-name", new SparseVectorParams{
Index = new SparseIndexConfig {
OnDisk = false,
}
})
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
map[string]*qdrant.SparseVectorParams{
"splade-model-name": {
Index: &qdrant.SparseIndexConfig{
OnDisk: qdrant.PtrOf(false),
}},
}),
})
```
The following parameters may affect performance:
- `on_disk: true` - The index is stored on disk, which lets you save memory. This may slow down search performance.
- `on_disk: false` - The index is still persisted on disk, but it is also loaded into memory for faster search.
Unlike a dense vector index, a sparse vector index does not require a pre-defined vector size. It automatically adjusts to the size of the vectors added to the collection.
**Note:** A sparse vector index only supports dot-product similarity searches. It does not support other distance metrics.
### IDF Modifier
*Available as of v1.10.0*
For many search algorithms, it is important to consider how often an item occurs in a collection.
Intuitively speaking, the less frequently an item appears in a collection, the more important it is in a search.
This is also known as the Inverse Document Frequency (IDF). It is used in text search engines to rank search results based on the rarity of a word in a collection.
IDF depends on the documents currently stored in the collection, and therefore it can't be pre-computed into the sparse vectors in streaming inference mode.
In order to support IDF in the sparse vector index, Qdrant provides an option to modify the sparse vector query with the IDF statistics automatically.
The only requirement is to enable the IDF modifier in the collection configuration:
```http
PUT /collections/{collection_name}
{
"sparse_vectors": {
"text": {
"modifier": "idf"
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
sparse_vectors_config={
"text": models.SparseVectorParams(
modifier=models.Modifier.IDF,
),
},
)
```
```typescript
import { QdrantClient, Schemas } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
sparse_vectors: {
"text": {
modifier: "idf"
}
}
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Modifier, SparseVectorParamsBuilder, SparseVectorsConfigBuilder,
};
use qdrant_client::{Qdrant, QdrantError};
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();
sparse_vectors_config.add_named_vector_params(
"text",
SparseVectorParamsBuilder::default().modifier(Modifier::Idf),
);
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.sparse_vectors_config(sparse_vectors_config),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Modifier;
import io.qdrant.client.grpc.Collections.SparseVectorConfig;
import io.qdrant.client.grpc.Collections.SparseVectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setSparseVectorsConfig(
SparseVectorConfig.newBuilder()
.putMap("text", SparseVectorParams.newBuilder().setModifier(Modifier.Idf).build()))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
sparseVectorsConfig: ("text", new SparseVectorParams {
Modifier = Modifier.Idf,
})
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
map[string]*qdrant.SparseVectorParams{
"text": {
Modifier: qdrant.Modifier_Idf.Enum(),
},
}),
})
```
Qdrant uses the following formula to calculate the IDF modifier:
$$
\text{IDF}(q_i) = \ln \left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\right)
$$
Where:
- `N` is the total number of documents in the collection.
- `n(q_i)` is the number of documents containing a non-zero value for the given vector element `q_i`.
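For intuition, here is a small Python sketch of the formula with made-up counts (a collection of 1000 documents):

```python
import math

def idf(total_docs: int, docs_with_token: int) -> float:
    # The BM25-style IDF formula shown above
    return math.log((total_docs - docs_with_token + 0.5) / (docs_with_token + 0.5) + 1)

print(idf(1000, 10))   # rare token   -> large weight (~4.56)
print(idf(1000, 900))  # common token -> small weight (~0.11)
```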
## Filtrable Index
On their own, neither a payload index nor a vector index can fully solve the problem of filtered search.
For weak filters, you can use the HNSW index as is. For stringent filters, you can use the payload index and a complete rescore.
However, for cases in the middle, neither approach works well.
On the one hand, we cannot apply a full scan to too many vectors. On the other hand, the HNSW graph starts to fall apart when the filters are too strict.
![HNSW fail](/docs/precision_by_m.png)
![hnsw graph](/docs/graph.gif)
You can find more information on why this happens in our [blog post](https://blog.vasnetsov.com/posts/categorical-hnsw/).
Qdrant solves this problem by extending the HNSW graph with additional edges based on the stored payload values.
Extra edges allow you to efficiently search for nearby vectors using the HNSW index and apply filters as you search in the graph.
This approach minimizes the overhead on condition checks since you only need to calculate the conditions for a small fraction of the points involved in the search.
---
title: Points
weight: 40
aliases:
- ../points
---
# Points
The points are the central entity that Qdrant operates with.
A point is a record consisting of a [vector](../vectors/) and an optional [payload](../payload/).
It looks like this:
```json
// This is a simple point
{
"id": 129,
"vector": [0.1, 0.2, 0.3, 0.4],
"payload": {"color": "red"},
}
```
You can search among the points grouped in one [collection](../collections/) based on vector similarity.
This procedure is described in more detail in the [search](../search/) and [filtering](../filtering/) sections.
This section explains how to create and manage vectors.
Any point modification operation is asynchronous and takes place in two steps.
At the first stage, the operation is written to the write-ahead log.
From that moment on, the service will not lose the data, even if the machine loses power.
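If you need to block until an operation is fully applied rather than only written to the write-ahead log, the write APIs accept a `wait` flag. A minimal Python sketch:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# wait=True makes the call return only after the operation is applied,
# not just after it has been written to the write-ahead log.
client.upsert(
    collection_name="{collection_name}",
    points=[models.PointStruct(id=1, vector=[0.9, 0.1, 0.1])],
    wait=True,
)
```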
## Point IDs
Qdrant supports using both `64-bit unsigned integers` and `UUID` as identifiers for points.
Examples of UUID string representations:
- simple: `936DA01F9ABD4d9d80C702AF85C822A8`
- hyphenated: `550e8400-e29b-41d4-a716-446655440000`
- urn: `urn:uuid:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4`
This means that in every request a UUID string can be used instead of a numerical id.
Example:
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": "5c56c793-69f3-4fbf-87e6-c4bf54c28c26",
"payload": {"color": "red"},
"vector": [0.9, 0.1, 0.1]
}
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id="5c56c793-69f3-4fbf-87e6-c4bf54c28c26",
payload={
"color": "red",
},
vector=[0.9, 0.1, 0.1],
),
],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.upsert("{collection_name}", {
points: [
{
id: "5c56c793-69f3-4fbf-87e6-c4bf54c28c26",
payload: {
color: "red",
},
vector: [0.9, 0.1, 0.1],
},
],
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![PointStruct::new(
"5c56c793-69f3-4fbf-87e6-c4bf54c28c26",
vec![0.9, 0.1, 0.1],
[("color", "Red".into())],
)],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import java.util.UUID;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(UUID.fromString("5c56c793-69f3-4fbf-87e6-c4bf54c28c26")))
.setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
.putAllPayload(Map.of("color", value("Red")))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = Guid.Parse("5c56c793-69f3-4fbf-87e6-c4bf54c28c26"),
Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
Payload = { ["color"] = "Red" }
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewID("5c56c793-69f3-4fbf-87e6-c4bf54c28c26"),
Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
Payload: qdrant.NewValueMap(map[string]any{"color": "Red"}),
},
},
})
```
and
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"payload": {"color": "red"},
"vector": [0.9, 0.1, 0.1]
}
]
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
payload={
"color": "red",
},
vector=[0.9, 0.1, 0.1],
),
],
)
```
```typescript
client.upsert("{collection_name}", {
points: [
{
id: 1,
payload: {
color: "red",
},
vector: [0.9, 0.1, 0.1],
},
],
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![PointStruct::new(
1,
vec![0.9, 0.1, 0.1],
[("color", "Red".into())],
)],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
.putAllPayload(Map.of("color", value("Red")))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
Payload = { ["color"] = "Red" }
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
Payload: qdrant.NewValueMap(map[string]any{"color": "Red"}),
},
},
})
```
are both possible.
## Vectors
Each point in Qdrant may have one or more vectors.
Vectors are the central component of the Qdrant architecture;
Qdrant relies on different types of vectors to provide different types of data exploration and search.
Here is a list of the supported vector types:
|||
|-|-|
| Dense Vectors | Regular vectors, generated by the majority of embedding models. |
| Sparse Vectors | Vectors with no fixed length, but only a few non-zero elements. <br> Useful for exact token matching and collaborative filtering recommendations. |
| Multivectors | Matrices of numbers with a fixed length but variable height. <br> Usually obtained from late interaction models like ColBERT. |
It is possible to attach more than one type of vector to a single point.
In Qdrant, these are called Named Vectors.
Read more about vector types, how they are stored and optimized in the [vectors](../vectors/) section.
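As a minimal Python sketch of a collection with two named dense vectors (the sizes and distance metrics are illustrative assumptions; see the [collections](../collections/) section for the full configuration options):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Each point in this collection can carry an "image" and a "text" vector.
client.create_collection(
    collection_name="{collection_name}",
    vectors_config={
        "image": models.VectorParams(size=4, distance=models.Distance.DOT),
        "text": models.VectorParams(size=8, distance=models.Distance.COSINE),
    },
)
```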
## Upload points
To optimize performance, Qdrant supports batch loading of points, i.e., you can load several points into the service in one API call.
Batching allows you to minimize the overhead of creating a network connection.
The Qdrant API supports two ways of creating batches - record-oriented and column-oriented.
Internally, these options do not differ and are provided only for convenience.
Create points with batch:
```http
PUT /collections/{collection_name}/points
{
"batch": {
"ids": [1, 2, 3],
"payloads": [
{"color": "red"},
{"color": "green"},
{"color": "blue"}
],
"vectors": [
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
[0.1, 0.1, 0.9]
]
}
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=models.Batch(
ids=[1, 2, 3],
payloads=[
{"color": "red"},
{"color": "green"},
{"color": "blue"},
],
vectors=[
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
[0.1, 0.1, 0.9],
],
),
)
```
```typescript
client.upsert("{collection_name}", {
batch: {
ids: [1, 2, 3],
payloads: [{ color: "red" }, { color: "green" }, { color: "blue" }],
vectors: [
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
[0.1, 0.1, 0.9],
],
},
});
```
or the record-oriented equivalent:
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"payload": {"color": "red"},
"vector": [0.9, 0.1, 0.1]
},
{
"id": 2,
"payload": {"color": "green"},
"vector": [0.1, 0.9, 0.1]
},
{
"id": 3,
"payload": {"color": "blue"},
"vector": [0.1, 0.1, 0.9]
}
]
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
payload={
"color": "red",
},
vector=[0.9, 0.1, 0.1],
),
models.PointStruct(
id=2,
payload={
"color": "green",
},
vector=[0.1, 0.9, 0.1],
),
models.PointStruct(
id=3,
payload={
"color": "blue",
},
vector=[0.1, 0.1, 0.9],
),
],
)
```
```typescript
client.upsert("{collection_name}", {
points: [
{
id: 1,
payload: { color: "red" },
vector: [0.9, 0.1, 0.1],
},
{
id: 2,
payload: { color: "green" },
vector: [0.1, 0.9, 0.1],
},
{
id: 3,
payload: { color: "blue" },
vector: [0.1, 0.1, 0.9],
},
],
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![
PointStruct::new(1, vec![0.9, 0.1, 0.1], [("color", "red".into())]),
PointStruct::new(2, vec![0.1, 0.9, 0.1], [("color", "green".into())]),
PointStruct::new(3, vec![0.1, 0.1, 0.9], [("color", "blue".into())]),
],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(0.9f, 0.1f, 0.1f))
.putAllPayload(Map.of("color", value("red")))
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(vectors(0.1f, 0.9f, 0.1f))
.putAllPayload(Map.of("color", value("green")))
.build(),
PointStruct.newBuilder()
.setId(id(3))
.setVectors(vectors(0.1f, 0.1f, 0.9f))
.putAllPayload(Map.of("color", value("blue")))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new[] { 0.9f, 0.1f, 0.1f },
Payload = { ["color"] = "red" }
},
new()
{
Id = 2,
Vectors = new[] { 0.1f, 0.9f, 0.1f },
Payload = { ["color"] = "green" }
},
new()
{
Id = 3,
Vectors = new[] { 0.1f, 0.1f, 0.9f },
Payload = { ["color"] = "blue" }
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectors(0.9, 0.1, 0.1),
Payload: qdrant.NewValueMap(map[string]any{"color": "red"}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectors(0.1, 0.9, 0.1),
Payload: qdrant.NewValueMap(map[string]any{"color": "green"}),
},
{
Id: qdrant.NewIDNum(3),
Vectors: qdrant.NewVectors(0.1, 0.1, 0.9),
Payload: qdrant.NewValueMap(map[string]any{"color": "blue"}),
},
},
})
```
The Python client has additional features for loading points, which include:
- Parallelization
- A retry mechanism
- Lazy batching support
For example, you can read your data directly from hard drives, to avoid storing all data in RAM. You can use these
features with the `upload_collection` and `upload_points` methods.
Similar to the basic upsert API, these methods support both record-oriented and column-oriented formats.
<aside role="status">
<code>upload_points</code> is available as of v1.7.1. It has replaced <code>upload_records</code> which is now deprecated.
</aside>
Column-oriented format:
```python
client.upload_collection(
collection_name="{collection_name}",
ids=[1, 2],
payload=[
{"color": "red"},
{"color": "green"},
],
vectors=[
[0.9, 0.1, 0.1],
[0.1, 0.9, 0.1],
],
parallel=4,
max_retries=3,
)
```
<aside role="status">
If <code>ids</code> are not provided, Qdrant Client will generate them automatically as random UUIDs.
</aside>
Record-oriented format:
```python
client.upload_points(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
payload={
"color": "red",
},
vector=[0.9, 0.1, 0.1],
),
models.PointStruct(
id=2,
payload={
"color": "green",
},
vector=[0.1, 0.9, 0.1],
),
],
parallel=4,
max_retries=3,
)
```
All APIs in Qdrant, including point loading, are idempotent.
This means that executing the same method several times in a row is equivalent to a single execution.
In this case, it means that points with the same id will be overwritten when re-uploaded.
The idempotence property is useful if you use, for example, a message queue that doesn't provide an exactly-once guarantee.
Even with such a system, Qdrant ensures data consistency.
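As an illustration, a Python sketch (assuming a collection with 3-dimensional vectors): re-uploading the same point id simply overwrites it, so replaying a message is harmless:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

point = models.PointStruct(id=42, vector=[0.9, 0.1, 0.1], payload={"color": "red"})

# Running the same upsert twice still leaves exactly one point with id 42
client.upsert(collection_name="{collection_name}", points=[point])
client.upsert(collection_name="{collection_name}", points=[point])

print(client.retrieve(collection_name="{collection_name}", ids=[42]))
```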
[_Available as of v0.10.0_](#create-vector-name)
If the collection was created with multiple vectors, each vector data can be provided using the vector's name:
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"vector": {
"image": [0.9, 0.1, 0.1, 0.2],
"text": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2]
}
},
{
"id": 2,
"vector": {
"image": [0.2, 0.1, 0.3, 0.9],
"text": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9]
}
}
]
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
vector={
"image": [0.9, 0.1, 0.1, 0.2],
"text": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
},
),
models.PointStruct(
id=2,
vector={
"image": [0.2, 0.1, 0.3, 0.9],
"text": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
},
),
],
)
```
```typescript
client.upsert("{collection_name}", {
points: [
{
id: 1,
vector: {
image: [0.9, 0.1, 0.1, 0.2],
text: [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
},
},
{
id: 2,
vector: {
image: [0.2, 0.1, 0.3, 0.9],
text: [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
},
},
],
});
```
```rust
use std::collections::HashMap;
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
use qdrant_client::Payload;
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![
PointStruct::new(
1,
HashMap::from([
("image".to_string(), vec![0.9, 0.1, 0.1, 0.2]),
(
"text".to_string(),
vec![0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
),
]),
Payload::default(),
),
PointStruct::new(
2,
HashMap::from([
("image".to_string(), vec![0.2, 0.1, 0.3, 0.9]),
(
"text".to_string(),
vec![0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
),
]),
Payload::default(),
),
],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.VectorFactory.vector;
import static io.qdrant.client.VectorsFactory.namedVectors;
import io.qdrant.client.grpc.Points.PointStruct;
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(
namedVectors(
Map.of(
"image",
vector(List.of(0.9f, 0.1f, 0.1f, 0.2f)),
"text",
vector(List.of(0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f)))))
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(
namedVectors(
Map.of(
"image",
List.of(0.2f, 0.1f, 0.3f, 0.9f),
"text",
List.of(0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f))))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new Dictionary<string, float[]>
{
["image"] = [0.9f, 0.1f, 0.1f, 0.2f],
["text"] = [0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f]
}
},
new()
{
Id = 2,
Vectors = new Dictionary<string, float[]>
{
["image"] = [0.2f, 0.1f, 0.3f, 0.9f],
["text"] = [0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f]
}
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
"image": qdrant.NewVector(0.9, 0.1, 0.1, 0.2),
"text": qdrant.NewVector(0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2),
}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
"image": qdrant.NewVector(0.2, 0.1, 0.3, 0.9),
"text": qdrant.NewVector(0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9),
}),
},
},
})
```
_Available as of v1.2.0_
Named vectors are optional. When uploading points, some vectors may be omitted.
For example, you can upload one point with only the `image` vector and a second
one with only the `text` vector.
When uploading a point with an existing ID, the existing point is deleted first,
then it is inserted with just the specified vectors. In other words, the entire
point is replaced, and any unspecified vectors are set to null. To keep existing
vectors unchanged and only update specified vectors, see [update vectors](#update-vectors).
_Available as of v1.7.0_
Points can contain dense and sparse vectors.
A sparse vector is an array in which most of the elements have a value of zero.
It is possible to take advantage of this property to use an optimized representation; for this reason, sparse vectors have a different shape than dense vectors.
They are represented as a list of `(index, value)` pairs, where `index` is an integer and `value` is a floating point number. The `index` is the position of the non-zero element in the vector, and the `value` is its value.
For example, the following vector:
```
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]
```
can be represented as a sparse vector:
```
[(6, 1.0), (7, 2.0)]
```
Qdrant uses the following JSON representation throughout its APIs.
```json
{
"indices": [6, 7],
"values": [1.0, 2.0]
}
```
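As a small Python sketch (not part of the Qdrant API) of how the dense array above maps to this representation:

```python
dense = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]

# Keep only the non-zero positions and their values
indices = [i for i, v in enumerate(dense) if v != 0.0]
values = [v for v in dense if v != 0.0]

print({"indices": indices, "values": values})
# {'indices': [6, 7], 'values': [1.0, 2.0]}
```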
The `indices` and `values` arrays must have the same length, and the `indices` must be unique.
If the `indices` are not sorted, Qdrant will sort them internally, so you may not rely on the order of the elements.
Sparse vectors must be named and can be uploaded in the same way as dense vectors.
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"vector": {
"text": {
"indices": [6, 7],
"values": [1.0, 2.0]
}
}
},
{
"id": 2,
"vector": {
"text": {
"indices": [1, 1, 2, 3, 4, 5],
"values": [0.1, 0.2, 0.3, 0.4, 0.5]
}
}
}
]
}
```
```python
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
vector={
"text": models.SparseVector(
indices=[6, 7],
values=[1.0, 2.0],
)
},
),
models.PointStruct(
id=2,
vector={
"text": models.SparseVector(
indices=[1, 2, 3, 4, 5],
values=[0.1, 0.2, 0.3, 0.4, 0.5],
)
},
),
],
)
```
```typescript
client.upsert("{collection_name}", {
points: [
{
id: 1,
vector: {
text: {
indices: [6, 7],
values: [1.0, 2.0],
},
},
},
{
id: 2,
vector: {
text: {
indices: [1, 2, 3, 4, 5],
values: [0.1, 0.2, 0.3, 0.4, 0.5],
},
},
},
],
});
```
```rust
use std::collections::HashMap;
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder, Vector};
use qdrant_client::Payload;
client
.upsert_points(
UpsertPointsBuilder::new(
"{collection_name}",
vec![
PointStruct::new(
1,
HashMap::from([("text".to_string(), vec![(6, 1.0), (7, 2.0)])]),
Payload::default(),
),
PointStruct::new(
2,
HashMap::from([(
"text".to_string(),
vec![(1, 0.1), (2, 0.2), (3, 0.3), (4, 0.4), (5, 0.5)],
)]),
Payload::default(),
),
],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.VectorFactory.vector;
import io.qdrant.client.grpc.Points.NamedVectors;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.Vectors;
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(
Vectors.newBuilder()
.setVectors(
NamedVectors.newBuilder()
.putAllVectors(
Map.of(
"text", vector(List.of(1.0f, 2.0f), List.of(6, 7))))
.build())
.build())
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(
Vectors.newBuilder()
.setVectors(
NamedVectors.newBuilder()
.putAllVectors(
Map.of(
"text",
vector(
List.of(0.1f, 0.2f, 0.3f, 0.4f, 0.5f),
List.of(1, 2, 3, 4, 5))))
.build())
.build())
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new Dictionary<string, Vector> { ["text"] = ([1.0f, 2.0f], [6, 7]) }
},
new()
{
Id = 2,
Vectors = new Dictionary<string, Vector>
{
["text"] = ([0.1f, 0.2f, 0.3f, 0.4f, 0.5f], [1, 2, 3, 4, 5])
}
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
"text": qdrant.NewVectorSparse(
[]uint32{6, 7},
[]float32{1.0, 2.0}),
}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
"text": qdrant.NewVectorSparse(
[]uint32{1, 2, 3, 4, 5},
[]float32{0.1, 0.2, 0.3, 0.4, 0.5}),
}),
},
},
})
```
## Modify points
To change a point, you can modify its vectors or its payload. There are several
ways to do this.
### Update vectors
_Available as of v1.2.0_
This method updates the specified vectors on the given points. Unspecified
vectors are kept unchanged. All given points must exist.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/update-vectors)):
```http
PUT /collections/{collection_name}/points/vectors
{
"points": [
{
"id": 1,
"vector": {
"image": [0.1, 0.2, 0.3, 0.4]
}
},
{
"id": 2,
"vector": {
"text": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
}
}
]
}
```
```python
client.update_vectors(
collection_name="{collection_name}",
points=[
models.PointVectors(
id=1,
vector={
"image": [0.1, 0.2, 0.3, 0.4],
},
),
models.PointVectors(
id=2,
vector={
"text": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
},
),
],
)
```
```typescript
client.updateVectors("{collection_name}", {
points: [
{
id: 1,
vector: {
image: [0.1, 0.2, 0.3, 0.4],
},
},
{
id: 2,
vector: {
text: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
},
},
],
});
```
```rust
use std::collections::HashMap;
use qdrant_client::qdrant::{
PointVectors, UpdatePointVectorsBuilder,
};
client
.update_vectors(
UpdatePointVectorsBuilder::new(
"{collection_name}",
vec![
PointVectors {
id: Some(1.into()),
vectors: Some(
HashMap::from([("image".to_string(), vec![0.1, 0.2, 0.3, 0.4])]).into(),
),
},
PointVectors {
id: Some(2.into()),
vectors: Some(
HashMap::from([(
"text".to_string(),
vec![0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
)])
.into(),
),
},
],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.VectorFactory.vector;
import static io.qdrant.client.VectorsFactory.namedVectors;
client
.updateVectorsAsync(
"{collection_name}",
List.of(
PointVectors.newBuilder()
.setId(id(1))
.setVectors(namedVectors(Map.of("image", vector(List.of(0.1f, 0.2f, 0.3f, 0.4f)))))
.build(),
PointVectors.newBuilder()
.setId(id(2))
.setVectors(
namedVectors(
Map.of(
"text", vector(List.of(0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f)))))
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpdateVectorsAsync(
collectionName: "{collection_name}",
points: new List<PointVectors>
{
new() { Id = 1, Vectors = ("image", new float[] { 0.1f, 0.2f, 0.3f, 0.4f }) },
new()
{
Id = 2,
Vectors = ("text", new float[] { 0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f })
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.UpdateVectors(context.Background(), &qdrant.UpdatePointVectors{
CollectionName: "{collection_name}",
Points: []*qdrant.PointVectors{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
"image": qdrant.NewVector(0.1, 0.2, 0.3, 0.4),
}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
"text": qdrant.NewVector(0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2),
}),
},
},
})
```
To update points and replace all of its vectors, see [uploading
points](#upload-points).
### Delete vectors
_Available as of v1.2.0_
This method deletes just the specified vectors from the given points. Other
vectors are kept unchanged. Points are never deleted.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-vectors)):
```http
POST /collections/{collection_name}/points/vectors/delete
{
"points": [0, 3, 100],
"vectors": ["text", "image"]
}
```
```python
client.delete_vectors(
collection_name="{collection_name}",
points=[0, 3, 100],
vectors=["text", "image"],
)
```
```typescript
client.deleteVectors("{collection_name}", {
points: [0, 3, 100],
vectors: ["text", "image"],
});
```
```rust
use qdrant_client::qdrant::{
DeletePointVectorsBuilder, PointsIdsList,
};
client
.delete_vectors(
DeletePointVectorsBuilder::new("{collection_name}")
.points_selector(PointsIdsList {
ids: vec![0.into(), 3.into(), 100.into()],
})
.vectors(vec!["text".into(), "image".into()])
.wait(true),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
client
.deleteVectorsAsync(
"{collection_name}", List.of("text", "image"), List.of(id(0), id(3), id(10)))
.get();
```
To delete entire points, see [deleting points](#delete-points).
### Update payload
Learn how to modify the payload of a point in the [Payload](../payload/#update-payload) section.
## Delete points
REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-points)):
```http
POST /collections/{collection_name}/points/delete
{
"points": [0, 3, 100]
}
```
```python
client.delete(
collection_name="{collection_name}",
points_selector=models.PointIdsList(
points=[0, 3, 100],
),
)
```
```typescript
client.delete("{collection_name}", {
points: [0, 3, 100],
});
```
```rust
use qdrant_client::qdrant::{DeletePointsBuilder, PointsIdsList};
client
.delete_points(
DeletePointsBuilder::new("{collection_name}")
.points(PointsIdsList {
ids: vec![0.into(), 3.into(), 100.into()],
})
.wait(true),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
client.deleteAsync("{collection_name}", List.of(id(0), id(3), id(100)));
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.DeleteAsync(collectionName: "{collection_name}", ids: [0, 3, 100]);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Delete(context.Background(), &qdrant.DeletePoints{
CollectionName: "{collection_name}",
Points: qdrant.NewPointsSelector(
qdrant.NewIDNum(0), qdrant.NewIDNum(3), qdrant.NewIDNum(100),
),
})
```
An alternative way to specify which points to remove is to use a filter.
```http
POST /collections/{collection_name}/points/delete
{
"filter": {
"must": [
{
"key": "color",
"match": {
"value": "red"
}
}
]
}
}
```
```python
client.delete(
collection_name="{collection_name}",
points_selector=models.FilterSelector(
filter=models.Filter(
must=[
models.FieldCondition(
key="color",
match=models.MatchValue(value="red"),
),
],
)
),
)
```
```typescript
client.delete("{collection_name}", {
filter: {
must: [
{
key: "color",
match: {
value: "red",
},
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, DeletePointsBuilder, Filter};
client
.delete_points(
DeletePointsBuilder::new("{collection_name}")
.points(Filter::must([Condition::matches(
"color",
"red".to_string(),
)]))
.wait(true),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
client
.deleteAsync(
"{collection_name}",
Filter.newBuilder().addMust(matchKeyword("color", "red")).build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.DeleteAsync(collectionName: "{collection_name}", filter: MatchKeyword("color", "red"));
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Delete(context.Background(), &qdrant.DeletePoints{
CollectionName: "{collection_name}",
Points: qdrant.NewPointsSelectorFilter(
&qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("color", "red"),
},
},
),
})
```
This example removes all points with `{ "color": "red" }` from the collection.
## Retrieve points
There is a method for retrieving points by their ids.
REST API ([Schema](https://api.qdrant.tech/api-reference/points/get-points)):
```http
POST /collections/{collection_name}/points
{
"ids": [0, 3, 100]
}
```
```python
client.retrieve(
collection_name="{collection_name}",
ids=[0, 3, 100],
)
```
```typescript
client.retrieve("{collection_name}", {
ids: [0, 3, 100],
});
```
```rust
use qdrant_client::qdrant::GetPointsBuilder;
client
.get_points(GetPointsBuilder::new(
"{collection_name}",
vec![0.into(), 3.into(), 100.into()],
))
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
client
.retrieveAsync("{collection_name}", List.of(id(0), id(30), id(100)), false, false, null)
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.RetrieveAsync(
collectionName: "{collection_name}",
ids: [0, 3, 100],
withPayload: false,
withVectors: false
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Get(context.Background(), &qdrant.GetPoints{
CollectionName: "{collection_name}",
Ids: []*qdrant.PointId{
qdrant.NewIDNum(0), qdrant.NewIDNum(3), qdrant.NewIDNum(100),
},
})
```
This method has the additional parameters `with_vectors` and `with_payload`.
Using these parameters, you can select which parts of the point you want in the result.
Excluding unneeded data helps you avoid wasting traffic on transmitting it.
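For example, with the Python client you could fetch payloads while skipping vectors (a minimal sketch; it assumes the same client setup as in the examples above):

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

client.retrieve(
    collection_name="{collection_name}",
    ids=[0, 3, 100],
    with_payload=True,   # include the payload of each point
    with_vectors=False,  # skip the vectors to save bandwidth
)
```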
A single point can also be retrieved via the API:
REST API ([Schema](https://api.qdrant.tech/api-reference/points/get-point)):
```http
GET /collections/{collection_name}/points/{point_id}
```
## Scroll points
Sometimes it might be necessary to get all stored points without knowing their IDs, or to iterate over points that match a filter.
REST API ([Schema](https://api.qdrant.tech/master/api-reference/search/scroll-points)):
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must": [
{
"key": "color",
"match": {
"value": "red"
}
}
]
},
"limit": 1,
"with_payload": true,
"with_vector": false
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.FieldCondition(key="color", match=models.MatchValue(value="red")),
]
),
limit=1,
with_payload=True,
with_vectors=False,
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must: [
{
key: "color",
match: {
value: "red",
},
},
],
},
limit: 1,
with_payload: true,
with_vector: false,
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}")
.filter(Filter::must([Condition::matches(
"color",
"red".to_string(),
)]))
.limit(1)
.with_payload(true)
.with_vectors(false),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.WithPayloadSelectorFactory.enable;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(Filter.newBuilder().addMust(matchKeyword("color", "red")).build())
.setLimit(1)
.setWithPayload(enable(true))
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("color", "red"),
limit: 1,
payloadSelector: true
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("color", "red"),
},
},
Limit: qdrant.PtrOf(uint32(1)),
WithPayload: qdrant.NewWithPayload(true),
})
```
Returns all points with `color` = `red`.
```json
{
"result": {
"next_page_offset": 1,
"points": [
{
"id": 0,
"payload": {
"color": "red"
}
}
]
},
"status": "ok",
"time": 0.0001
}
```
The Scroll API will return all points that match the filter in a page-by-page manner.
All resulting points are sorted by ID. To query the next page it is necessary to specify the largest seen ID in the `offset` field.
For convenience, this ID is also returned in the field `next_page_offset`.
If the value of the `next_page_offset` field is `null`, the last page has been reached.
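As an illustration, a minimal Python sketch of paging through all matching points could look like this (it relies on the `offset` parameter and `next_page_offset` behavior described above):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

offset = None
while True:
    points, offset = client.scroll(
        collection_name="{collection_name}",
        scroll_filter=models.Filter(
            must=[models.FieldCondition(key="color", match=models.MatchValue(value="red"))]
        ),
        limit=100,
        offset=offset,  # the largest seen ID, returned as next_page_offset
        with_payload=True,
        with_vectors=False,
    )
    # process `points` here
    if offset is None:  # a null next_page_offset means the last page was reached
        break
```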
### Order points by payload key
_Available as of v1.8.0_
When using the [`scroll`](#scroll-points) API, you can sort the results by payload key. For example, you can retrieve points in chronological order if your payloads have a `"timestamp"` field, as shown in the example below:
<aside role="status">Without an appropriate index, payload-based ordering would create too much load on the system for each request. Qdrant therefore requires a payload index which supports <a href=/documentation/concepts/indexing/#payload-index target="_blank">Range filtering conditions</a> on the field used for <code>order_by</code></aside>
```http
POST /collections/{collection_name}/points/scroll
{
"limit": 15,
"order_by": "timestamp", // <-- this!
}
```
```python
client.scroll(
collection_name="{collection_name}",
limit=15,
order_by="timestamp", # <-- this!
)
```
```typescript
client.scroll("{collection_name}", {
limit: 15,
order_by: "timestamp", // <-- this!
});
```
```rust
use qdrant_client::qdrant::{OrderByBuilder, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}")
.limit(15)
.order_by(OrderByBuilder::new("timestamp")),
)
.await?;
```
```java
import io.qdrant.client.grpc.Points.OrderBy;
import io.qdrant.client.grpc.Points.ScrollPoints;
client.scrollAsync(ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setLimit(15)
.setOrderBy(OrderBy.newBuilder().setKey("timestamp").build())
.build()).get();
```
```csharp
await client.ScrollAsync("{collection_name}", limit: 15, orderBy: "timestamp");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Limit: qdrant.PtrOf(uint32(15)),
OrderBy: &qdrant.OrderBy{
Key: "timestamp",
},
})
```
You need to use the `order_by` `key` parameter to specify the payload key. Then you can add other fields to control the ordering, such as `direction` and `start_from`:
```http
"order_by": {
"key": "timestamp",
"direction": "desc" // default is "asc"
"start_from": 123, // start from this value
}
```
```python
order_by=models.OrderBy(
key="timestamp",
direction="desc", # default is "asc"
start_from=123, # start from this value
)
```
```typescript
order_by: {
key: "timestamp",
direction: "desc", // default is "asc"
start_from: 123, // start from this value
}
```
```rust
use qdrant_client::qdrant::{start_from::Value, Direction, OrderByBuilder};
OrderByBuilder::new("timestamp")
.direction(Direction::Desc.into())
.start_from(Value::Integer(123))
.build();
```
```java
import io.qdrant.client.grpc.Points.Direction;
import io.qdrant.client.grpc.Points.OrderBy;
import io.qdrant.client.grpc.Points.StartFrom;
OrderBy.newBuilder()
.setKey("timestamp")
.setDirection(Direction.Desc)
.setStartFrom(StartFrom.newBuilder()
.setInteger(123)
.build())
.build();
```
```csharp
using Qdrant.Client.Grpc;
new OrderBy
{
Key = "timestamp",
Direction = Direction.Desc,
StartFrom = 123
};
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.OrderBy{
Key: "timestamp",
Direction: qdrant.Direction_Desc.Enum(),
StartFrom: qdrant.NewStartFromInt(123),
}
```
<aside role="alert">When you use the <code>order_by</code> parameter, pagination is disabled.</aside>
When sorting is based on a non-unique value, it is not possible to rely on an ID offset. Thus, `next_page_offset` is not returned with the response. However, you can still paginate by combining `"order_by": { "start_from": ... }` with a `{ "must_not": [{ "has_id": [...] }] }` filter.
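A minimal Python sketch of requesting the next page in this way (the concrete `timestamp` value and IDs are placeholders collected from the previous page):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

last_value = 123      # last "timestamp" value seen on the previous page
seen_ids = [7, 8, 9]  # IDs already returned for that timestamp value

points, _ = client.scroll(
    collection_name="{collection_name}",
    limit=15,
    order_by=models.OrderBy(key="timestamp", start_from=last_value),
    scroll_filter=models.Filter(
        must_not=[models.HasIdCondition(has_id=seen_ids)],
    ),
)
```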
## Counting points
_Available as of v0.8.4_
Sometimes it can be useful to know how many points match the filter conditions without running a full search.
Typical scenarios include:
- Evaluation of results size for faceted search
- Determining the number of pages for pagination
- Debugging the query execution speed
REST API ([Schema](https://api.qdrant.tech/master/api-reference/points/count-points)):
```http
POST /collections/{collection_name}/points/count
{
"filter": {
"must": [
{
"key": "color",
"match": {
"value": "red"
}
}
]
},
"exact": true
}
```
```python
client.count(
collection_name="{collection_name}",
count_filter=models.Filter(
must=[
models.FieldCondition(key="color", match=models.MatchValue(value="red")),
]
),
exact=True,
)
```
```typescript
client.count("{collection_name}", {
filter: {
must: [
{
key: "color",
match: {
value: "red",
},
},
],
},
exact: true,
});
```
```rust
use qdrant_client::qdrant::{Condition, CountPointsBuilder, Filter};
client
.count(
CountPointsBuilder::new("{collection_name}")
.filter(Filter::must([Condition::matches(
"color",
"red".to_string(),
)]))
.exact(true),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
client
.countAsync(
"{collection_name}",
Filter.newBuilder().addMust(matchKeyword("color", "red")).build(),
true)
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.CountAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("color", "red"),
exact: true
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Count(context.Background(), &qdrant.CountPoints{
CollectionName: "midlib",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("color", "red"),
},
},
})
```
Returns the number of points matching the given filtering conditions:
```json
{
"count": 3811
}
```
## Batch update
_Available as of v1.5.0_
You can batch multiple point update operations. This includes inserting,
updating and deleting points, vectors and payload.
A batch update request consists of a list of operations, which are executed in order.
The following operations can be batched:
- [Upsert points](#upload-points): `upsert` or `UpsertOperation`
- [Delete points](#delete-points): `delete_points` or `DeleteOperation`
- [Update vectors](#update-vectors): `update_vectors` or `UpdateVectorsOperation`
- [Delete vectors](#delete-vectors): `delete_vectors` or `DeleteVectorsOperation`
- [Set payload](/documentation/concepts/payload/#set-payload): `set_payload` or `SetPayloadOperation`
- [Overwrite payload](/documentation/concepts/payload/#overwrite-payload): `overwrite_payload` or `OverwritePayload`
- [Delete payload](/documentation/concepts/payload/#delete-payload-keys): `delete_payload` or `DeletePayloadOperation`
- [Clear payload](/documentation/concepts/payload/#clear-payload): `clear_payload` or `ClearPayloadOperation`
The following example snippet makes use of all operations.
REST API ([Schema](https://api.qdrant.tech/master/api-reference/points/batch-update)):
```http
POST /collections/{collection_name}/points/batch
{
"operations": [
{
"upsert": {
"points": [
{
"id": 1,
"vector": [1.0, 2.0, 3.0, 4.0],
"payload": {}
}
]
}
},
{
"update_vectors": {
"points": [
{
"id": 1,
"vector": [1.0, 2.0, 3.0, 4.0]
}
]
}
},
{
"delete_vectors": {
"points": [1],
"vector": [""]
}
},
{
"overwrite_payload": {
"payload": {
"test_payload": "1"
},
"points": [1]
}
},
{
"set_payload": {
"payload": {
"test_payload_2": "2",
"test_payload_3": "3"
},
"points": [1]
}
},
{
"delete_payload": {
"keys": ["test_payload_2"],
"points": [1]
}
},
{
"clear_payload": {
"points": [1]
}
},
{"delete": {"points": [1]}}
]
}
```
```python
client.batch_update_points(
collection_name="{collection_name}",
update_operations=[
models.UpsertOperation(
upsert=models.PointsList(
points=[
models.PointStruct(
id=1,
vector=[1.0, 2.0, 3.0, 4.0],
payload={},
),
]
)
),
models.UpdateVectorsOperation(
update_vectors=models.UpdateVectors(
points=[
models.PointVectors(
id=1,
vector=[1.0, 2.0, 3.0, 4.0],
)
]
)
),
models.DeleteVectorsOperation(
delete_vectors=models.DeleteVectors(points=[1], vector=[""])
),
models.OverwritePayloadOperation(
overwrite_payload=models.SetPayload(
payload={"test_payload": 1},
points=[1],
)
),
models.SetPayloadOperation(
set_payload=models.SetPayload(
payload={
"test_payload_2": 2,
"test_payload_3": 3,
},
points=[1],
)
),
models.DeletePayloadOperation(
delete_payload=models.DeletePayload(keys=["test_payload_2"], points=[1])
),
models.ClearPayloadOperation(clear_payload=models.PointIdsList(points=[1])),
models.DeleteOperation(delete=models.PointIdsList(points=[1])),
],
)
```
```typescript
client.batchUpdate("{collection_name}", {
operations: [
{
upsert: {
points: [
{
id: 1,
vector: [1.0, 2.0, 3.0, 4.0],
payload: {},
},
],
},
},
{
update_vectors: {
points: [
{
id: 1,
vector: [1.0, 2.0, 3.0, 4.0],
},
],
},
},
{
delete_vectors: {
points: [1],
vector: [""],
},
},
{
overwrite_payload: {
payload: {
test_payload: 1,
},
points: [1],
},
},
{
set_payload: {
payload: {
test_payload_2: 2,
test_payload_3: 3,
},
points: [1],
},
},
{
delete_payload: {
keys: ["test_payload_2"],
points: [1],
},
},
{
clear_payload: {
points: [1],
},
},
{
delete: {
points: [1],
},
},
],
});
```
```rust
use std::collections::HashMap;
use qdrant_client::qdrant::{
points_update_operation::{
ClearPayload, DeletePayload, DeletePoints, DeleteVectors, Operation, OverwritePayload,
PointStructList, SetPayload, UpdateVectors,
},
PointStruct, PointVectors, PointsUpdateOperation, UpdateBatchPointsBuilder, VectorsSelector,
};
use qdrant_client::Payload;
client
.update_points_batch(
UpdateBatchPointsBuilder::new(
"{collection_name}",
vec![
PointsUpdateOperation {
operation: Some(Operation::Upsert(PointStructList {
points: vec![PointStruct::new(
1,
vec![1.0, 2.0, 3.0, 4.0],
Payload::default(),
)],
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::UpdateVectors(UpdateVectors {
points: vec![PointVectors {
id: Some(1.into()),
vectors: Some(vec![1.0, 2.0, 3.0, 4.0].into()),
}],
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::DeleteVectors(DeleteVectors {
points_selector: Some(vec![1.into()].into()),
vectors: Some(VectorsSelector {
names: vec!["".into()],
}),
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::OverwritePayload(OverwritePayload {
points_selector: Some(vec![1.into()].into()),
payload: HashMap::from([("test_payload".to_string(), 1.into())]),
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::SetPayload(SetPayload {
points_selector: Some(vec![1.into()].into()),
payload: HashMap::from([
("test_payload_2".to_string(), 2.into()),
("test_payload_3".to_string(), 3.into()),
]),
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::DeletePayload(DeletePayload {
points_selector: Some(vec![1.into()].into()),
keys: vec!["test_payload_2".to_string()],
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::ClearPayload(ClearPayload {
points: Some(vec![1.into()].into()),
..Default::default()
})),
},
PointsUpdateOperation {
operation: Some(Operation::DeletePoints(DeletePoints {
points: Some(vec![1.into()].into()),
..Default::default()
})),
},
],
)
.wait(true),
)
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.PointVectors;
import io.qdrant.client.grpc.Points.PointsIdsList;
import io.qdrant.client.grpc.Points.PointsSelector;
import io.qdrant.client.grpc.Points.PointsUpdateOperation;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.ClearPayload;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePayload;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePoints;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeleteVectors;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.PointStructList;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.SetPayload;
import io.qdrant.client.grpc.Points.PointsUpdateOperation.UpdateVectors;
import io.qdrant.client.grpc.Points.VectorsSelector;
client
.batchUpdateAsync(
"{collection_name}",
List.of(
PointsUpdateOperation.newBuilder()
.setUpsert(
PointStructList.newBuilder()
.addPoints(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f))
.build())
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setUpdateVectors(
UpdateVectors.newBuilder()
.addPoints(
PointVectors.newBuilder()
.setId(id(1))
.setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f))
.build())
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setDeleteVectors(
DeleteVectors.newBuilder()
.setPointsSelector(
PointsSelector.newBuilder()
.setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
.build())
.setVectors(VectorsSelector.newBuilder().addNames("").build())
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setOverwritePayload(
SetPayload.newBuilder()
.setPointsSelector(
PointsSelector.newBuilder()
.setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
.build())
.putAllPayload(Map.of("test_payload", value(1)))
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setSetPayload(
SetPayload.newBuilder()
.setPointsSelector(
PointsSelector.newBuilder()
.setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
.build())
.putAllPayload(
Map.of("test_payload_2", value(2), "test_payload_3", value(3)))
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setDeletePayload(
DeletePayload.newBuilder()
.setPointsSelector(
PointsSelector.newBuilder()
.setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
.build())
.addKeys("test_payload_2")
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setClearPayload(
ClearPayload.newBuilder()
.setPoints(
PointsSelector.newBuilder()
.setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
.build())
.build())
.build(),
PointsUpdateOperation.newBuilder()
.setDeletePoints(
DeletePoints.newBuilder()
.setPoints(
PointsSelector.newBuilder()
.setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
.build())
.build())
.build()))
.get();
```
To batch many points with a single operation type, use the batching functionality of that operation directly.
## Awaiting result
If the API is called with the `&wait=false` parameter, or if it is not explicitly specified, the client will receive an acknowledgment that the data was received:
```json
{
"result": {
"operation_id": 123,
"status": "acknowledged"
},
"status": "ok",
"time": 0.000206061
}
```
This response does not mean that the data is available for retrieval yet. This
uses a form of eventual consistency. It may take a short amount of time before it
is actually processed as updating the collection happens in the background. In
fact, it is possible that such a request eventually fails.
If inserting a lot of vectors, we also recommend using asynchronous requests to take advantage of pipelining.
If the logic of your application requires a guarantee that the vector will be available for searching immediately after the API responds, then use the flag `?wait=true`.
In this case, the API will return the result only after the operation is finished:
```json
{
"result": {
"operation_id": 0,
"status": "completed"
},
"status": "ok",
"time": 0.000206061
}
``` | documentation/concepts/points.md |
---
title: Vectors
weight: 41
aliases:
- /vectors
---
# Vectors
Vectors (or embeddings) are the core concept of the Qdrant Vector Search engine.
Vectors define the similarity between objects in the vector space.
If a pair of vectors are similar in vector space, it means that the objects they represent are similar in some way.
For example, if you have a collection of images, you can represent each image as a vector.
If two images are similar, their vectors will be close to each other in the vector space.
In order to obtain a vector representation of an object, you need to apply a vectorization algorithm to the object.
Usually, this algorithm is a neural network that converts the object into a fixed-size vector.
The neural network is usually [trained](/articles/metric-learning-tips/) on pairs or [triplets](/articles/triplet-loss/) of similar and dissimilar objects, so it learns to recognize a specific type of similarity.
By using this property of vectors, you can explore your data in a number of ways; e.g. by searching for similar objects, clustering objects, and more.
## Vector Types
Modern neural networks can output vectors in different shapes and sizes, and Qdrant supports most of them.
Let's take a look at the most common types of vectors supported by Qdrant.
### Dense Vectors
This is the most common type of vector. It is a simple list of numbers with a fixed length, where each element is a floating-point number.
It looks like this:
```json
// A piece of a real-world dense vector
[
-0.013052909,
0.020387933,
-0.007869,
-0.11111383,
-0.030188112,
-0.0053388323,
0.0010654867,
0.072027855,
-0.04167721,
0.014839341,
-0.032948174,
-0.062975034,
-0.024837125,
....
]
```
The majority of neural networks create dense vectors, so you can use them with Qdrant without any additional processing.
Although compatible with most embedding models out there, Qdrant has been tested with the following [verified embedding providers](/documentation/embeddings/).
### Sparse Vectors
Sparse vectors are a special type of vectors.
Mathematically, they are the same as dense vectors, but they contain many zeros so they are stored in a special format.
Sparse vectors in Qdrant don't have a fixed length, as it is dynamically allocated during vector insertion.
In order to define a sparse vector, you need to provide a list of non-zero elements and their indices.
```json
// A sparse vector with 4 non-zero elements
{
"indexes": [1, 3, 5, 7],
"values": [0.1, 0.2, 0.3, 0.4]
}
```
Sparse vectors in Qdrant are kept in special storage and indexed in a separate index, so their configuration is different from dense vectors.
To create a collection with sparse vectors:
```http
PUT /collections/{collection_name}
{
"sparse_vectors": {
"text": { },
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"sparse_vectors": {
"text": { }
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
sparse_vectors_config={
"text": models.SparseVectorParams(),
},
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
sparse_vectors: {
text: { },
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();
sparse_vectors_config.add_named_vector_params("text", SparseVectorParamsBuilder::default());
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.sparse_vectors_config(sparse_vectors_config),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.SparseVectorConfig;
import io.qdrant.client.grpc.Collections.SparseVectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setSparseVectorsConfig(
SparseVectorConfig.newBuilder()
.putMap("text", SparseVectorParams.getDefaultInstance()))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
sparseVectorsConfig: ("text", new SparseVectorParams())
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
map[string]*qdrant.SparseVectorParams{
"text": {},
}),
})
```
Insert a point with a sparse vector into the created collection:
```http
PUT /collections/{collection_name}/points
{
"points": [
{
"id": 1,
"vector": {
"text": {
"indices": [1, 3, 5, 7],
"values": [0.1, 0.2, 0.3, 0.4]
}
}
}
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
payload={}, # Add any additional payload if necessary
vector={
"text": models.SparseVector(
indices=[1, 3, 5, 7],
values=[0.1, 0.2, 0.3, 0.4]
)
},
)
],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.upsert("{collection_name}", {
points: [
{
id: 1,
vector: {
text: {
indices: [1, 3, 5, 7],
values: [0.1, 0.2, 0.3, 0.4]
},
},
},
],
});
```
```rust
use qdrant_client::qdrant::{NamedVectors, PointStruct, UpsertPointsBuilder, Vector};
use qdrant_client::{Payload, Qdrant};
let client = Qdrant::from_url("http://localhost:6334").build()?;
let points = vec![PointStruct::new(
1,
NamedVectors::default().add_vector(
"text",
Vector::new_sparse(vec![1, 3, 5, 7], vec![0.1, 0.2, 0.3, 0.4]),
),
Payload::new(),
)];
client
.upsert_points(UpsertPointsBuilder::new("{collection_name}", points))
.await?;
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.VectorFactory.vector;
import static io.qdrant.client.VectorsFactory.namedVectors;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(
namedVectors(Map.of(
"text", vector(List.of(1.0f, 2.0f), List.of(6, 7))))
)
.build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List < PointStruct > {
new() {
Id = 1,
Vectors = new Dictionary < string, Vector > {
["text"] = ([0.1 f, 0.2 f, 0.3 f, 0.4 f], [1, 3, 5, 7])
}
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectorsMap(
map[string]*qdrant.Vector{
"text": qdrant.NewVectorSparse(
[]uint32{1, 3, 5, 7},
[]float32{0.1, 0.2, 0.3, 0.4}),
}),
},
},
})
```
Now you can run a search with sparse vectors:
```http
POST /collections/{collection_name}/points/query
{
"query": {
"indices": [1, 3, 5, 7],
"values": [0.1, 0.2, 0.3, 0.4]
},
"using": "text"
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
result = client.query_points(
collection_name="{collection_name}",
query=models.SparseVector(indices=[1, 3, 5, 7], values=[0.1, 0.2, 0.3, 0.4]),
using="text",
).points
```
```rust
use qdrant_client::qdrant::QueryPointsBuilder;
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(vec![(1, 0.1), (3, 0.2), (5, 0.3), (7, 0.4)])
.limit(10)
.using("text"),
)
.await?;
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: {
indices: [1, 3, 5, 7],
values: [0.1, 0.2, 0.3, 0.4]
},
using: "text",
limit: 3,
});
```
```java
import java.util.List;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setUsing("text")
.setQuery(nearest(List.of(0.1f, 0.2f, 0.3f, 0.4f), List.of(1, 3, 5, 7)))
.setLimit(3)
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new (float, uint)[] {(0.1f, 1), (0.2f, 3), (0.3f, 5), (0.4f, 7)},
usingVector: "text",
limit: 3
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQuerySparse(
[]uint32{1, 3, 5, 7},
[]float32{0.1, 0.2, 0.3, 0.4}),
Using: qdrant.PtrOf("text"),
})
```
### Multivectors
**Available as of v1.10.0**
Qdrant supports storing a variable number of same-shaped dense vectors in a single point.
This means that instead of a single dense vector, you can upload a matrix of dense vectors.
The length of each vector in the matrix is fixed, but the number of vectors can differ from point to point.
Multivectors look like this:
```json
// A multivector of size 4
"vector": [
[-0.013, 0.020, -0.007, -0.111],
[-0.030, -0.055, 0.001, 0.072],
[-0.041, 0.014, -0.032, -0.062],
....
]
```
There are two scenarios where multivectors are useful:
* **Multiple representations of the same object** - For example, you can store multiple embeddings for pictures of the same object, taken from different angles. This approach assumes that the payload is the same for all vectors.
* **Late interaction embeddings** - Some text embedding models can output multiple vectors for a single text.
For example, a family of models such as ColBERT output a relatively small vector for each token in the text.
In order to use multivectors, we need to specify a function that will be used to compare matrices of vectors.
Currently, Qdrant supports the `max_sim` function, which is defined as the sum, over the vectors of the first matrix, of their maximum similarity to the vectors of the second matrix.
$$
score = \sum_{i=1}^{N} \max_{j=1}^{M} \text{Sim}(\text{vectorA}_i, \text{vectorB}_j)
$$
Where $N$ is the number of vectors in the first matrix, $M$ is the number of vectors in the second matrix, and $\text{Sim}$ is a similarity function, for example, cosine similarity.
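To make the formula concrete, here is a small illustrative sketch (plain NumPy, not part of the Qdrant API) that computes the `max_sim` score of two matrices with cosine similarity:

```python
import numpy as np

def max_sim(matrix_a: np.ndarray, matrix_b: np.ndarray) -> float:
    # Normalize rows so that the dot product equals cosine similarity
    a = matrix_a / np.linalg.norm(matrix_a, axis=1, keepdims=True)
    b = matrix_b / np.linalg.norm(matrix_b, axis=1, keepdims=True)
    similarities = a @ b.T  # N x M matrix of pairwise cosine similarities
    # For each vector of A, take its best match in B, then sum over A
    return float(similarities.max(axis=1).sum())
```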
To use multivectors, create a collection with the following configuration:
```http
PUT collections/{collection_name}
{
"vectors": {
"size": 128,
"distance": "Cosine",
"multivector_config": {
"comparator": "max_sim"
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(
size=128,
distance=models.Distance.Cosine,
multivector_config=models.MultiVectorConfig(
comparator=models.MultiVectorComparator.MAX_SIM
),
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 128,
distance: "Cosine",
multivector_config: {
comparator: "max_sim"
}
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, VectorParamsBuilder,
MultiVectorComparator, MultiVectorConfigBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}")
.vectors_config(
VectorParamsBuilder::new(128, Distance::Cosine)
.multivector_config(
MultiVectorConfigBuilder::new(MultiVectorComparator::MaxSim)
),
),
)
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.MultiVectorComparator;
import io.qdrant.client.grpc.Collections.MultiVectorConfig;
import io.qdrant.client.grpc.Collections.VectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.createCollectionAsync("{collection_name}",
VectorParams.newBuilder().setSize(128)
.setDistance(Distance.Cosine)
.setMultivectorConfig(MultiVectorConfig.newBuilder()
.setComparator(MultiVectorComparator.MaxSim)
.build())
.build()).get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams {
Size = 128,
Distance = Distance.Cosine,
MultivectorConfig = new() {
Comparator = MultiVectorComparator.MaxSim
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 128,
Distance: qdrant.Distance_Cosine,
MultivectorConfig: &qdrant.MultiVectorConfig{
Comparator: qdrant.MultiVectorComparator_MaxSim,
},
}),
})
```
To insert a point with multivector:
```http
PUT collections/{collection_name}/points
{
"points": [
{
"id": 1,
"vector": [
[-0.013, 0.020, -0.007, -0.111, ...],
[-0.030, -0.055, 0.001, 0.072, ...],
[-0.041, 0.014, -0.032, -0.062, ...]
]
}
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.upsert(
collection_name="{collection_name}",
points=[
models.PointStruct(
id=1,
vector=[
[-0.013, 0.020, -0.007, -0.111, ...],
[-0.030, -0.055, 0.001, 0.072, ...],
[-0.041, 0.014, -0.032, -0.062, ...]
],
)
],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.upsert("{collection_name}", {
points: [
{
id: 1,
vector: [
[-0.013, 0.020, -0.007, -0.111, ...],
[-0.030, -0.055, 0.001, 0.072, ...],
[-0.041, 0.014, -0.032, -0.062, ...]
],
}
]
});
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder, Vector};
use qdrant_client::{Payload, Qdrant};
let client = Qdrant::from_url("http://localhost:6334").build()?;
let points = vec![
PointStruct::new(
1,
Vector::new_multi(vec![
vec![-0.013, 0.020, -0.007, -0.111],
vec![-0.030, -0.055, 0.001, 0.072],
vec![-0.041, 0.014, -0.032, -0.062],
]),
Payload::new()
)
];
client
.upsert_points(
UpsertPointsBuilder::new("{collection_name}", points)
).await?;
```
```java
import java.util.List;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.VectorsFactory.vectors;
import static io.qdrant.client.VectorFactory.multiVector;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.upsertAsync(
"{collection_name}",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(multiVector(new float[][] {
{-0.013f, 0.020f, -0.007f, -0.111f},
{-0.030f, -0.055f, 0.001f, 0.072f},
{-0.041f, 0.014f, -0.032f, -0.062f}
})))
.build()
))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.UpsertAsync(
collectionName: "{collection_name}",
points: new List <PointStruct> {
new() {
Id = 1,
Vectors = new float[][] {
[-0.013f, 0.020f, -0.007f, -0.111f],
[-0.030f, -0.055f, 0.001f, 0.072f],
[-0.041f, 0.014f, -0.032f, -0.062f ],
},
},
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "{collection_name}",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectorsMulti(
[][]float32{
{-0.013, 0.020, -0.007, -0.111},
{-0.030, -0.055, 0.001, 0.072},
{-0.041, 0.014, -0.032, -0.062}}),
},
},
})
```
To search with a multivector (available in the `query` API):
```http
POST collections/{collection_name}/points/query
{
"query": [
[-0.013, 0.020, -0.007, -0.111, ...],
[-0.030, -0.055, 0.001, 0.072, ...],
[-0.041, 0.014, -0.032, -0.062, ...]
]
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query=[
[-0.013, 0.020, -0.007, -0.111, ...],
[-0.030, -0.055, 0.001, 0.072, ...],
[-0.041, 0.014, -0.032, -0.062, ...]
],
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
"query": [
[-0.013, 0.020, -0.007, -0.111, ...],
[-0.030, -0.055, 0.001, 0.072, ...],
[-0.041, 0.014, -0.032, -0.062, ...]
]
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{ QueryPointsBuilder, VectorInput };
let client = Qdrant::from_url("http://localhost:6334").build()?;
let res = client.query(
QueryPointsBuilder::new("{collection_name}")
.query(VectorInput::new_multi(
vec![
vec![-0.013, 0.020, -0.007, -0.111, ...],
vec![-0.030, -0.055, 0.001, 0.072, ...],
vec![-0.041, 0.014, -0.032, -0.062, ...],
]
))
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(new float[][] {
{-0.013f, 0.020f, -0.007f, -0.111f},
{-0.030f, -0.055f, 0.001f, 0.072f},
{-0.041f, 0.014f, -0.032f, -0.062f}
}))
.build()).get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: new float[][] {
[-0.013f, 0.020f, -0.007f, -0.111f],
[-0.030f, -0.055f, 0.001f, 0.072f],
[-0.041f, 0.014f, -0.032f, -0.062f],
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryMulti(
[][]float32{
{-0.013, 0.020, -0.007, -0.111},
{-0.030, -0.055, 0.001, 0.072},
{-0.041, 0.014, -0.032, -0.062},
}),
})
```
## Named Vectors
Aside from storing multiple vectors of the same shape in a single point, Qdrant supports storing multiple different vectors in a single point.
Each of these vectors should have a unique configuration and should be addressed by a unique name.
Also, each vector can be of a different type and be generated by a different embedding model.
To create a collection with named vectors, you need to specify a configuration for each vector:
```http
PUT /collections/{collection_name}
{
"vectors": {
"image": {
"size": 4,
"distance": "Dot"
},
"text": {
"size": 8,
"distance": "Cosine"
}
}
}
```
```bash
curl -X PUT http://localhost:6333/collections/{collection_name} \
-H 'Content-Type: application/json' \
--data-raw '{
"vectors": {
"image": {
"size": 4,
"distance": "Dot"
},
"text": {
"size": 8,
"distance": "Cosine"
}
}
}'
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config={
"image": models.VectorParams(size=4, distance=models.Distance.DOT),
"text": models.VectorParams(size=8, distance=models.Distance.COSINE),
},
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
image: { size: 4, distance: "Dot" },
text: { size: 8, distance: "Cosine" },
},
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Distance, VectorParamsBuilder, VectorsConfigBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut vector_config = VectorsConfigBuilder::default();
vector_config.add_named_vector_params("text", VectorParamsBuilder::new(4, Distance::Dot));
vector_config.add_named_vector_params("image", VectorParamsBuilder::new(8, Distance::Cosine));
client
.create_collection(
CreateCollectionBuilder::new("{collection_name}").vectors_config(vector_config),
)
.await?;
```
```java
import java.util.Map;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
"{collection_name}",
Map.of(
"image", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(),
"text",
VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build()))
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParamsMap {
Map = {
["image"] = new VectorParams {
Size = 4, Distance = Distance.Dot
},
["text"] = new VectorParams {
Size = 8, Distance = Distance.Cosine
},
}
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfigMap(
map[string]*qdrant.VectorParams{
"image": {
Size: 4,
Distance: qdrant.Distance_Dot,
},
"text": {
Size: 8,
Distance: qdrant.Distance_Cosine,
},
}),
})
```
<!-- ToDo: Examples of insert and search -->
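As a sketch of how such a collection could be used (the `image` and `text` vector values below are only examples), you can upsert both named vectors in one point and then query against a specific one:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "image": [0.9, 0.1, 0.1, 0.2],
                "text": [0.4, 0.7, 0.1, 0.8, 0.1, 0.6, 0.1, 0.9],
            },
        )
    ],
)

client.query_points(
    collection_name="{collection_name}",
    query=[0.2, 0.1, 0.9, 0.7],
    using="image",  # name of the vector to search with
)
```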
## Datatypes
The newest embedding models generate vectors with very high dimensionality.
With OpenAI's `text-embedding-3-large` embedding model, the dimensionality can go up to 3072.
The amount of memory required to store such vectors grows linearly with the dimensionality,
so it is important to choose the right datatype for the vectors.
The choice between datatypes is a trade-off between memory consumption and precision of vectors.
Qdrant supports a number of datatypes for both dense and sparse vectors:
**Float32**
This is the default datatype for vectors in Qdrant. It is a 32-bit (4 bytes) floating-point number.
A standard OpenAI embedding with 1536 dimensions requires 6KB of memory to store in Float32 (1536 × 4 bytes = 6144 bytes).
You don't need to specify the datatype for vectors in Qdrant, as it is set to Float32 by default.
**Float16**
This is a 16-bit (2 bytes) floating-point number. It is also known as half-precision float.
Intuitively, it looks like this:
```text
float32 -> float16 delta (float32 - float16).abs
0.79701585 -> 0.796875 delta 0.00014084578
0.7850789 -> 0.78515625 delta 0.00007736683
0.7775044 -> 0.77734375 delta 0.00016063452
0.85776305 -> 0.85791016 delta 0.00014710426
0.6616839 -> 0.6616211 delta 0.000062823296
```
The main advantage of Float16 is that it requires half the memory of Float32, while having virtually no impact on the quality of vector search.
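The deltas shown above can be reproduced with a short sketch like this (plain NumPy, purely for illustration):

```python
import numpy as np

values = np.random.default_rng().random(5, dtype=np.float32)
halved = values.astype(np.float16)  # round to half precision

for f32, f16 in zip(values, halved):
    print(f"{f32} -> {f16} delta {abs(f32 - np.float32(f16))}")
```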
To use Float16, you need to specify the datatype for vectors in the collection configuration:
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 128,
"distance": "Cosine",
"datatype": "float16" // <-- For dense vectors
},
"sparse_vectors": {
"text": {
"index": {
"datatype": "float16" // <-- And for sparse vectors
}
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(
size=128,
distance=models.Distance.COSINE,
datatype=models.Datatype.FLOAT16
),
sparse_vectors_config={
"text": models.SparseVectorParams(
index=models.SparseIndexConfig(datatype=models.Datatype.FLOAT16)
),
},
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 128,
distance: "Cosine",
datatype: "float16"
},
sparse_vectors: {
text: {
index: {
datatype: "float16"
}
}
}
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Datatype, Distance, SparseIndexConfigBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, VectorParamsBuilder
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut sparse_vector_config = SparseVectorsConfigBuilder::default();
sparse_vector_config.add_named_vector_params(
"text",
SparseVectorParamsBuilder::default()
.index(SparseIndexConfigBuilder::default().datatype(Datatype::Float16)),
);
let create_collection = CreateCollectionBuilder::new("{collection_name}")
.sparse_vectors_config(sparse_vector_config)
.vectors_config(
VectorParamsBuilder::new(128, Distance::Cosine).datatype(Datatype::Float16),
);
client.create_collection(create_collection).await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Datatype;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.SparseIndexConfig;
import io.qdrant.client.grpc.Collections.SparseVectorConfig;
import io.qdrant.client.grpc.Collections.SparseVectorParams;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(VectorsConfig.newBuilder()
.setParams(VectorParams.newBuilder()
.setSize(128)
.setDistance(Distance.Cosine)
.setDatatype(Datatype.Float16)
.build())
.build())
.setSparseVectorsConfig(
SparseVectorConfig.newBuilder()
.putMap("text", SparseVectorParams.newBuilder()
.setIndex(SparseIndexConfig.newBuilder()
.setDatatype(Datatype.Float16)
.build())
.build()))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams {
Size = 128,
Distance = Distance.Cosine,
Datatype = Datatype.Float16
},
sparseVectorsConfig: (
"text",
new SparseVectorParams {
Index = new SparseIndexConfig {
Datatype = Datatype.Float16
}
}
)
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 128,
Distance: qdrant.Distance_Cosine,
Datatype: qdrant.Datatype_Float16.Enum(),
}),
SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
map[string]*qdrant.SparseVectorParams{
"text": {
Index: &qdrant.SparseIndexConfig{
Datatype: qdrant.Datatype_Float16.Enum(),
},
},
}),
})
```
**Uint8**
Another step towards memory optimization is to use the Uint8 datatype for vectors.
Unlike Float16, Uint8 is not a floating-point number, but an integer number in the range from 0 to 255.
Not all embedding models generate vectors in the range from 0 to 255, so you need to be careful when using the Uint8 datatype.
In order to convert a number from the float range to the Uint8 range, you need to apply a process called quantization.
Some embedding providers may provide embeddings in a pre-quantized format.
One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings).
For other embeddings, you will need to apply quantization yourself.
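For example, a simple min-max quantization of a float vector into the 0-255 range could look like the sketch below (an illustration only, not an official utility; the appropriate bounds depend on your embedding model):

```python
import numpy as np

def quantize_to_uint8(vector: np.ndarray, lower: float = -1.0, upper: float = 1.0) -> np.ndarray:
    # Clip to the expected value range, then rescale linearly into [0, 255]
    clipped = np.clip(vector, lower, upper)
    scaled = (clipped - lower) / (upper - lower) * 255.0
    return np.round(scaled).astype(np.uint8)
```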
<aside role="alert">
There is a difference in how Uint8 vectors are handled for dense and sparse vectors.
Dense vectors are required to be in the range from 0 to 255, while sparse vectors can be quantized in-flight.
</aside>
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 128,
"distance": "Cosine",
"datatype": "uint8" // <-- For dense vectors
},
"sparse_vectors": {
"text": {
"index": {
"datatype": "uint8" // <-- For sparse vectors
}
}
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(
size=128,
distance=models.Distance.COSINE,
datatype=models.Datatype.UINT8
),
sparse_vectors_config={
"text": models.SparseVectorParams(
index=models.SparseIndexConfig(datatype=models.Datatype.UINT8)
),
},
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 128,
distance: "Cosine",
datatype: "uint8"
},
sparse_vectors: {
text: {
index: {
datatype: "uint8"
}
}
}
});
```
```rust
use qdrant_client::qdrant::{
CreateCollectionBuilder, Datatype, Distance, SparseIndexConfigBuilder,
SparseVectorParamsBuilder, SparseVectorsConfigBuilder, VectorParamsBuilder,
};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
let mut sparse_vector_config = SparseVectorsConfigBuilder::default();
sparse_vector_config.add_named_vector_params(
"text",
SparseVectorParamsBuilder::default()
.index(SparseIndexConfigBuilder::default().datatype(Datatype::Uint8)),
);
let create_collection = CreateCollectionBuilder::new("{collection_name}")
.sparse_vectors_config(sparse_vector_config)
.vectors_config(
VectorParamsBuilder::new(128, Distance::Cosine)
.datatype(Datatype::Uint8)
);
client.create_collection(create_collection).await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Datatype;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.SparseIndexConfig;
import io.qdrant.client.grpc.Collections.SparseVectorConfig;
import io.qdrant.client.grpc.Collections.SparseVectorParams;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;
QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.createCollectionAsync(
CreateCollection.newBuilder()
.setCollectionName("{collection_name}")
.setVectorsConfig(VectorsConfig.newBuilder()
.setParams(VectorParams.newBuilder()
.setSize(128)
.setDistance(Distance.Cosine)
.setDatatype(Datatype.Uint8)
.build())
.build())
.setSparseVectorsConfig(
SparseVectorConfig.newBuilder()
.putMap("text", SparseVectorParams.newBuilder()
.setIndex(SparseIndexConfig.newBuilder()
.setDatatype(Datatype.Uint8)
.build())
.build()))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.CreateCollectionAsync(
collectionName: "{collection_name}",
vectorsConfig: new VectorParams {
Size = 128,
Distance = Distance.Cosine,
Datatype = Datatype.Uint8
},
sparseVectorsConfig: (
"text",
new SparseVectorParams {
Index = new SparseIndexConfig {
Datatype = Datatype.Uint8
}
}
)
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "{collection_name}",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 128,
Distance: qdrant.Distance_Cosine,
Datatype: qdrant.Datatype_Uint8.Enum(),
}),
SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
map[string]*qdrant.SparseVectorParams{
"text": {
Index: &qdrant.SparseIndexConfig{
Datatype: qdrant.Datatype_Uint8.Enum(),
},
},
}),
})
```
## Quantization
Apart from changing the datatype of the original vectors, Qdrant can create quantized representations of vectors alongside the original ones.
This quantized representation can be used to quickly select candidates for rescoring with the original vectors or even used directly for search.
Quantization is applied in the background, during the optimization process.
More information about the quantization process can be found in the [Quantization](/documentation/guides/quantization/) section.
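As a brief, hedged illustration with the Python client (the collection name and vector size are placeholders; see the Quantization guide for the full set of options), scalar quantization can be enabled at collection creation time:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Keep the original float32 vectors and additionally build int8 quantized
# representations, which are used for fast candidate selection and rescoring.
client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=128, distance=models.Distance.COSINE),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,  # keep the quantized vectors in RAM for speed
        )
    ),
)
```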
## Vector Storage
Depending on the requirements of the application, Qdrant can use one of several vector storage options.
Keep in mind that you will have to trade off between search speed and the amount of RAM used.
More information about the storage options can be found in the [Storage](/documentation/concepts/storage/#vector-storage) section.
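For example, a minimal sketch with the Python client that keeps the original vectors in memory-mapped files on disk instead of in RAM, trading some search speed for a smaller memory footprint (collection name and parameters are placeholders):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Store the original vectors on disk instead of holding them in RAM.
client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(
        size=128,
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
)
```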
| documentation/concepts/vectors.md |
---
title: Snapshots
weight: 110
aliases:
- ../snapshots
---
# Snapshots
*Available as of v0.8.4*
Snapshots are `tar` archive files that contain data and configuration of a specific collection on a specific node at a specific time. In a distributed setup, when you have multiple nodes in your cluster, you must create snapshots for each node separately, even when dealing with a single collection.
This feature can be used to archive data or easily replicate an existing deployment. For disaster recovery, Qdrant Cloud users may prefer to use [Backups](/documentation/cloud/backups/) instead, which are physical disk-level copies of your data.
For a step-by-step guide on how to use snapshots, see our [tutorial](/documentation/tutorials/create-snapshot/).
## Create snapshot
<aside role="status">If you work with a distributed deployment, you have to create snapshots for each node separately. A single snapshot will contain only the data stored on the node on which the snapshot was created.</aside>
To create a new snapshot for an existing collection:
```http
POST /collections/{collection_name}/snapshots
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.create_snapshot(collection_name="{collection_name}")
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createSnapshot("{collection_name}");
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.create_snapshot("{collection_name}").await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.createSnapshotAsync("{collection_name}").get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.CreateSnapshotAsync("{collection_name}");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateSnapshot(context.Background(), "{collection_name}")
```
This is a synchronous operation; a `tar` archive file will be generated in the `snapshot_path`.
### Delete snapshot
*Available as of v1.0.0*
```http
DELETE /collections/{collection_name}/snapshots/{snapshot_name}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.delete_snapshot(
collection_name="{collection_name}", snapshot_name="{snapshot_name}"
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.deleteSnapshot("{collection_name}", "{snapshot_name}");
```
```rust
use qdrant_client::qdrant::DeleteSnapshotRequestBuilder;
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.delete_snapshot(DeleteSnapshotRequestBuilder::new(
"{collection_name}",
"{snapshot_name}",
))
.await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.deleteSnapshotAsync("{collection_name}", "{snapshot_name}").get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.DeleteSnapshotAsync(collectionName: "{collection_name}", snapshotName: "{snapshot_name}");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.DeleteSnapshot(context.Background(), "{collection_name}", "{snapshot_name}")
```
## List snapshots
To list all snapshots for a collection:
```http
GET /collections/{collection_name}/snapshots
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.list_snapshots(collection_name="{collection_name}")
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.listSnapshots("{collection_name}");
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.list_snapshots("{collection_name}").await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.listSnapshotAsync("{collection_name}").get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.ListSnapshotsAsync("{collection_name}");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.ListSnapshots(context.Background(), "{collection_name}")
```
## Retrieve snapshot
<aside role="status">Only available through the REST API for the time being.</aside>
To download a specified snapshot from a collection as a file:
```http
GET /collections/{collection_name}/snapshots/{snapshot_name}
```
```shell
curl 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot' \
-H 'api-key: ********' \
--output 'filename.snapshot'
```
## Restore snapshot
<aside role="status">Snapshots generated in one Qdrant cluster can only be restored to other Qdrant clusters that share the same minor version. For instance, a snapshot captured from a v1.4.1 cluster can only be restored to clusters running version v1.4.x, where x is equal to or greater than 1.</aside>
Snapshots can be restored in three possible ways:
1. [Recovering from a URL or local file](#recover-from-a-url-or-local-file) (useful for restoring a snapshot file that is on a remote server or already stored on the node)
2. [Recovering from an uploaded file](#recover-from-an-uploaded-file) (useful for migrating data to a new cluster)
3. [Recovering during start-up](#recover-during-start-up) (useful when running a self-hosted single-node Qdrant instance)
Regardless of the method used, Qdrant will extract the shard data from the snapshot and properly register shards in the cluster.
If there are other active replicas of the recovered shards in the cluster, Qdrant will replicate them to the newly recovered node by default to maintain data consistency.
### Recover from a URL or local file
*Available as of v0.11.3*
This method of recovery requires the snapshot file to be downloadable from a URL or exist as a local file on the node (like if you [created the snapshot](#create-snapshot) on this node previously). If instead you need to upload a snapshot file, see the next section.
To recover from a URL or local file use the [snapshot recovery endpoint](https://api.qdrant.tech/master/api-reference/snapshots/recover-from-snapshot). This endpoint accepts either a URL like `https://example.com` or a [file URI](https://en.wikipedia.org/wiki/File_URI_scheme) like `file:///tmp/snapshot-2022-10-10.snapshot`. If the target collection does not exist, it will be created.
```http
PUT /collections/{collection_name}/snapshots/recover
{
"location": "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"
}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://qdrant-node-2:6333")
client.recover_snapshot(
"{collection_name}",
"http://qdrant-node-1:6333/collections/collection_name/snapshots/snapshot-2022-10-10.shapshot",
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.recoverSnapshot("{collection_name}", {
location: "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot",
});
```
<aside role="status">When recovering from a URL, the URL must be reachable by the Qdrant node that you are restoring. In Qdrant Cloud, restoring via URL is not supported since all outbound traffic is blocked for security purposes. You may still restore via file URI or via an uploaded file.</aside>
### Recover from an uploaded file
The snapshot file can also be uploaded as a file and restored using the [recover from uploaded snapshot](https://api.qdrant.tech/master/api-reference/snapshots/recover-from-uploaded-snapshot) endpoint. This endpoint accepts the raw snapshot data in the request body. If the target collection does not exist, it will be created.
```bash
curl -X POST 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
-H 'api-key: ********' \
-H 'Content-Type:multipart/form-data' \
-F 'snapshot=@/path/to/snapshot-2022-10-10.snapshot'
```
This method is typically used to migrate data from one cluster to another, so we recommend setting the [priority](#snapshot-priority) to "snapshot" for that use-case.
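The same upload can be scripted. Below is a minimal sketch that mirrors the curl call above using Python and the `requests` library (the library choice, URL, and file path are assumptions; any HTTP client works):
```python
import requests

# Upload a local snapshot file and restore it into the target collection,
# preferring the snapshot data over existing data ("priority=snapshot").
with open("/path/to/snapshot-2022-10-10.snapshot", "rb") as snapshot_file:
    response = requests.post(
        "http://localhost:6333/collections/{collection_name}/snapshots/upload",
        params={"priority": "snapshot"},
        headers={"api-key": "********"},
        files={"snapshot": snapshot_file},
    )
    response.raise_for_status()
```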
### Recover during start-up
<aside role="alert">This method cannot be used in a multi-node deployment and cannot be used in Qdrant Cloud.</aside>
If you have a single-node deployment, you can recover any collection at start-up and it will be immediately available.
Restoring snapshots is done through the Qdrant CLI at start-up time via the `--snapshot` argument which accepts a list of pairs such as `<snapshot_file_path>:<target_collection_name>`
For example:
```bash
./qdrant --snapshot /snapshots/test-collection-archive.snapshot:test-collection --snapshot /snapshots/test-collection-archive.snapshot:test-copy-collection
```
The target collection **must** be absent; otherwise, the program will exit with an error.
If you wish instead to overwrite an existing collection, use the `--force_snapshot` flag with caution.
### Snapshot priority
When recovering a snapshot to a non-empty node, there may be conflicts between the snapshot data and the existing data. The "priority" setting controls how Qdrant handles these conflicts. It matters because different priorities can produce very different end results, and the default priority may not be the best choice for every situation.
The available snapshot recovery priorities are:
- `replica`: _(default)_ prefer existing data over the snapshot.
- `snapshot`: prefer snapshot data over existing data.
- `no_sync`: restore snapshot without any additional synchronization.
To recover a new collection from a snapshot, you need to set
the priority to `snapshot`. With `snapshot` priority, all data from the snapshot
will be recovered onto the cluster. With `replica` priority _(default)_, you'd
end up with an empty collection because the collection on the cluster did not
contain any points and that source was preferred.
`no_sync` is for specialized use cases and is not commonly used. It allows
managing shards and transferring shards between clusters manually without any
additional synchronization. Using it incorrectly will leave your cluster in a
broken state.
To recover from a URL, you specify an additional parameter in the request body:
```http
PUT /collections/{collection_name}/snapshots/recover
{
"location": "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot",
"priority": "snapshot"
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://qdrant-node-2:6333")
client.recover_snapshot(
"{collection_name}",
"http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot",
priority=models.SnapshotPriority.SNAPSHOT,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.recoverSnapshot("{collection_name}", {
location: "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot",
priority: "snapshot"
});
```
```bash
curl -X POST 'http://qdrant-node-1:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
-H 'api-key: ********' \
-H 'Content-Type:multipart/form-data' \
-F 'snapshot=@/path/to/snapshot-2022-10-10.snapshot'
```
## Snapshots for the whole storage
*Available as of v0.8.5*
Sometimes it might be handy to create a snapshot not just of a single collection, but of the whole storage, including collection aliases.
Qdrant provides a dedicated API for that as well. It is similar to collection-level snapshots, but does not require `collection_name`.
<aside role="alert">Full storage snapshots are only suitable for single-node deployments. <a href="/documentation/guides/distributed_deployment/">Distributed</a> mode is not supported as it doesn't contain the necessary files for that.</aside>
<aside role="status">Full storage snapshots can be created and downloaded from Qdrant Cloud, but you cannot restore a Qdrant Cloud cluster from a whole storage snapshot since that requires use of the Qdrant CLI. You can use <a href="/documentation/cloud/backups/">Backups</a> instead.</aside>
### Create full storage snapshot
```http
POST /snapshots
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.create_full_snapshot()
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createFullSnapshot();
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.create_full_snapshot().await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.createFullSnapshotAsync().get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.CreateFullSnapshotAsync();
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.CreateFullSnapshot(context.Background())
```
### Delete full storage snapshot
*Available as of v1.0.0*
```http
DELETE /snapshots/{snapshot_name}
```
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
client.delete_full_snapshot(snapshot_name="{snapshot_name}")
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.deleteFullSnapshot("{snapshot_name}");
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.delete_full_snapshot("{snapshot_name}").await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.deleteFullSnapshotAsync("{snapshot_name}").get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.DeleteFullSnapshotAsync("{snapshot_name}");
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.DeleteFullSnapshot(context.Background(), "{snapshot_name}")
```
### List full storage snapshots
```http
GET /snapshots
```
```python
from qdrant_client import QdrantClient
client = QdrantClient("localhost", port=6333)
client.list_full_snapshots()
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.listFullSnapshots();
```
```rust
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.list_full_snapshots().await?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.listFullSnapshotAsync().get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.ListFullSnapshotsAsync();
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.ListFullSnapshots(context.Background())
```
### Download full storage snapshot
<aside role="status">Only available through the REST API for the time being.</aside>
```http
GET /snapshots/{snapshot_name}
```
## Restore full storage snapshot
Restoring snapshots can only be done through the Qdrant CLI at startup time.
For example:
```bash
./qdrant --storage-snapshot /snapshots/full-snapshot-2022-07-18-11-20-51.snapshot
```
## Storage
Created, uploaded and recovered snapshots are stored as `.snapshot` files. By
default, they're stored on the [local file system](#local-file-system). You can
also configure Qdrant to use an [S3-compatible storage](#s3) service for them.
### Local file system
By default, snapshots are stored at `./snapshots` or at `/qdrant/snapshots` when
using our Docker image.
The target directory can be controlled through the [configuration](../../guides/configuration/):
```yaml
storage:
# Specify where you want to store snapshots.
snapshots_path: ./snapshots
```
Alternatively you may use the environment variable `QDRANT__STORAGE__SNAPSHOTS_PATH=./snapshots`.
*Available as of v1.3.0*
While a snapshot is being created, temporary files are placed in the configured
storage directory by default. In case of limited capacity or a slow
network-attached disk, you can specify a separate location for temporary files:
```yaml
storage:
# Where to store temporary files
temp_path: /tmp
```
### S3
*Available as of v1.10.0*
Rather than storing snapshots on the local file system, you can also configure
Qdrant to store snapshots in an S3-compatible storage service. To enable this,
you must set it up in the [configuration](../../guides/configuration/) file.
For example, to configure for AWS S3:
```yaml
storage:
snapshots_config:
# Use 's3' to store snapshots on S3
snapshots_storage: s3
s3_config:
# Bucket name
bucket: your_bucket_here
# Bucket region (e.g. eu-central-1)
region: your_bucket_region_here
# Storage access key
# Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__ACCESS_KEY` environment variable.
access_key: your_access_key_here
# Storage secret key
# Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__SECRET_KEY` environment variable.
secret_key: your_secret_key_here
# S3-Compatible Storage URL
# Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__ENDPOINT_URL` environment variable.
endpoint_url: your_url_here
```
| documentation/concepts/snapshots.md |
---
title: Hybrid Queries
weight: 57
aliases:
- ../hybrid-queries
hideInSidebar: false
---
# Hybrid and Multi-Stage Queries
*Available as of v1.10.0*
With the introduction of [many named vectors per point](../vectors/#named-vectors), there are use-cases when the best search is obtained by combining multiple queries,
or by performing the search in more than one stage.
Qdrant has a flexible and universal interface to make this possible, called `Query API` ([API reference](https://api.qdrant.tech/api-reference/search/query-points)).
The main component for making the combinations of queries possible is the `prefetch` parameter, which enables making sub-requests.
Specifically, whenever a query has at least one prefetch, Qdrant will:
1. Perform the prefetch query (or queries),
2. Apply the main query over the results of its prefetch(es).
Additionally, prefetches can have prefetches themselves, so you can have nested prefetches.
## Hybrid Search
One of the most common problems when you have different representations of the same data is how to combine the points retrieved for each representation into a single result set.
{{< figure src="/docs/fusion-idea.png" caption="Fusing results from multiple queries" width="80%" >}}
For example, in text search, it is often useful to combine dense and sparse vectors to get the best of semantics,
plus the best of matching specific words.
Qdrant currently has two ways of combining the results from different queries:
- `rrf` -
<a href=https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf target="_blank">
Reciprocal Rank Fusion
</a>
Considers the positions of results within each query, and boosts results that appear closer to the top in multiple queries.
- `dbsf` -
<a href=https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18 target="_blank">
Distribution-Based Score Fusion
</a> *(available as of v1.11.0)*
Normalizes the scores of the points in each query, using the mean +/- the 3rd standard deviation as limits, and then sums the scores of the same point across different queries.
<aside role="status"><code>dbsf</code> is stateless and calculates the normalization limits only based on the results of each query, not on all the scores that it has seen.</aside>
Here is an example of Reciprocal Rank Fusion for a query containing two prefetches against different named vectors, configured to hold sparse and dense vectors respectively.
```http
POST /collections/{collection_name}/points/query
{
"prefetch": [
{
"query": {
"indices": [1, 42], // <┐
"values": [0.22, 0.8] // <┴─sparse vector
},
"using": "sparse",
"limit": 20
},
{
"query": [0.01, 0.45, 0.67, ...], // <-- dense vector
"using": "dense",
"limit": 20
}
],
"query": { "fusion": "rrf" }, // <--- reciprocal rank fusion
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
prefetch=[
models.Prefetch(
query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]),
using="sparse",
limit=20,
),
models.Prefetch(
query=[0.01, 0.45, 0.67, ...], # <-- dense vector
using="dense",
limit=20,
),
],
query=models.FusionQuery(fusion=models.Fusion.RRF),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
prefetch: [
{
query: {
values: [0.22, 0.8],
indices: [1, 42],
},
using: 'sparse',
limit: 20,
},
{
query: [0.01, 0.45, 0.67],
using: 'dense',
limit: 20,
},
],
query: {
fusion: 'rrf',
},
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query(
QueryPointsBuilder::new("{collection_name}")
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice()))
.using("sparse")
.limit(20u64)
)
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
.using("dense")
.limit(20u64)
)
.query(Query::new_fusion(Fusion::Rrf))
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import java.util.List;
import static io.qdrant.client.QueryFactory.fusion;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Fusion;
import io.qdrant.client.grpc.Points.PrefetchQuery;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.addPrefetch(PrefetchQuery.newBuilder()
.setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42)))
.setUsing("sparse")
.setLimit(20)
.build())
.addPrefetch(PrefetchQuery.newBuilder()
.setQuery(nearest(List.of(0.01f, 0.45f, 0.67f)))
.setUsing("dense")
.setLimit(20)
.build())
.setQuery(fusion(Fusion.RRF))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
prefetch: new List < PrefetchQuery > {
new() {
Query = new(float, uint)[] {
(0.22f, 1), (0.8f, 42),
},
Using = "sparse",
Limit = 20
},
new() {
Query = new float[] {
0.01f, 0.45f, 0.67f
},
Using = "dense",
Limit = 20
}
},
query: Fusion.Rrf
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Prefetch: []*qdrant.PrefetchQuery{
{
Query: qdrant.NewQuerySparse([]uint32{1, 42}, []float32{0.22, 0.8}),
Using: qdrant.PtrOf("sparse"),
},
{
Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}),
Using: qdrant.PtrOf("dense"),
},
},
Query: qdrant.NewQueryFusion(qdrant.Fusion_RRF),
})
```
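To use Distribution-Based Score Fusion instead, only the fusion value changes. Below is a minimal Python sketch with the same prefetches as above (assuming a client version that supports `dbsf`, i.e. v1.11.0 or later):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.query_points(
    collection_name="{collection_name}",
    prefetch=[
        models.Prefetch(
            query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]),
            using="sparse",
            limit=20,
        ),
        models.Prefetch(
            query=[0.01, 0.45, 0.67],  # <-- dense vector
            using="dense",
            limit=20,
        ),
    ],
    # Normalize scores within each prefetch, then sum them across prefetches.
    query=models.FusionQuery(fusion=models.Fusion.DBSF),
)
```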
## Multi-stage queries
In many cases, using a larger vector representation gives more accurate search results, but it is also more expensive to compute.
Splitting the search into two stages is a known technique:
* First, use a smaller and cheaper representation to get a large list of candidates.
* Then, re-score the candidates using the larger and more accurate representation.
There are a few ways to build search architectures around this idea:
* Use quantized vectors as the first stage, and the full-precision vectors as the second stage.
* Leverage Matryoshka Representation Learning (<a href=https://arxiv.org/abs/2205.13147 target="_blank">MRL</a>) to generate candidate vectors with a shorter vector, and then refine them with a longer one.
* Use regular dense vectors to pre-fetch the candidates, and then re-score them with a multi-vector model like <a href=https://arxiv.org/abs/2112.01488 target="_blank">ColBERT</a>.
To get the best of all worlds, Qdrant has a convenient interface to perform the queries in stages,
such that the coarse results are fetched first, and then they are refined later with larger vectors.
### Re-scoring examples
Fetch 1000 results using a shorter MRL byte vector, then re-score them using the full vector and get the top 10.
```http
POST /collections/{collection_name}/points/query
{
"prefetch": {
"query": [1, 23, 45, 67], // <------------- small byte vector
"using": "mrl_byte"
"limit": 1000
},
"query": [0.01, 0.299, 0.45, 0.67, ...], // <-- full vector
"using": "full",
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
prefetch=models.Prefetch(
query=[1, 23, 45, 67], # <------------- small byte vector
using="mrl_byte",
limit=1000,
),
query=[0.01, 0.299, 0.45, 0.67, ...], # <-- full vector
using="full",
limit=10,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
prefetch: {
query: [1, 23, 45, 67], // <------------- small byte vector
using: 'mrl_byte',
limit: 1000,
},
query: [0.01, 0.299, 0.45, 0.67, ...], // <-- full vector,
using: 'full',
limit: 10,
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query(
QueryPointsBuilder::new("{collection_name}")
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0]))
.using("mlr_byte")
.limit(1000u64)
)
.query(Query::new_nearest(vec![0.01, 0.299, 0.45, 0.67]))
.using("full")
.limit(10u64)
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PrefetchQuery;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.addPrefetch(
PrefetchQuery.newBuilder()
.setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector
.setLimit(1000)
.setUsing("mrl_byte")
.build())
.setQuery(nearest(0.01f, 0.299f, 0.45f, 0.67f)) // <-- full vector
.setUsing("full")
.setLimit(10)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
prefetch: new List<PrefetchQuery> {
new() {
Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector
Using = "mrl_byte",
Limit = 1000
}
},
query: new float[] { 0.01f, 0.299f, 0.45f, 0.67f }, // <-- full vector
usingVector: "full",
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Prefetch: []*qdrant.PrefetchQuery{
{
Query: qdrant.NewQueryDense([]float32{1, 23, 45, 67}),
Using: qdrant.PtrOf("mrl_byte"),
Limit: qdrant.PtrOf(uint64(1000)),
},
},
Query: qdrant.NewQueryDense([]float32{0.01, 0.299, 0.45, 0.67}),
Using: qdrant.PtrOf("full"),
})
```
Fetch 100 results using the default vector, then re-score them using a multi-vector to get the top 10.
```http
POST /collections/{collection_name}/points/query
{
"prefetch": {
"query": [0.01, 0.45, 0.67, ...], // <-- dense vector
"limit": 100
},
"query": [ // <─┐
[0.1, 0.2, ...], // < │
[0.2, 0.1, ...], // < ├─ multi-vector
[0.8, 0.9, ...] // < │
], // <─┘
"using": "colbert",
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
prefetch=models.Prefetch(
query=[0.01, 0.45, 0.67, ...], # <-- dense vector
limit=100,
),
query=[
[0.1, 0.2, ...], # <─┐
[0.2, 0.1, ...], # < ├─ multi-vector
[0.8, 0.9, ...], # < ┘
],
using="colbert",
limit=10,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
prefetch: {
query: [0.01, 0.45, 0.67], // <-- dense vector
limit: 100,
},
query: [
[0.1, 0.2], // <─┐
[0.2, 0.1], // < ├─ multi-vector
[0.8, 0.9], // < ┘
],
using: 'colbert',
limit: 10,
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query(
QueryPointsBuilder::new("{collection_name}")
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
.limit(100u64)
)
.query(Query::new_nearest(vec![
vec![0.1, 0.2],
vec![0.2, 0.1],
vec![0.8, 0.9],
]))
.using("colbert")
.limit(10u64)
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PrefetchQuery;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.addPrefetch(
PrefetchQuery.newBuilder()
.setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector
.setLimit(100)
.build())
.setQuery(
nearest(
new float[][] {
{0.1f, 0.2f}, // <─┐
{0.2f, 0.1f}, // < ├─ multi-vector
{0.8f, 0.9f} // < ┘
}))
.setUsing("colbert")
.setLimit(10)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
prefetch: new List <PrefetchQuery> {
new() {
Query = new float[] { 0.01f, 0.45f, 0.67f }, // <-- dense vector
Limit = 100
}
},
query: new float[][] {
[0.1f, 0.2f], // <─┐
[0.2f, 0.1f], // < ├─ multi-vector
[0.8f, 0.9f] // < ┘
},
usingVector: "colbert",
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Prefetch: []*qdrant.PrefetchQuery{
{
Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}),
Limit: qdrant.PtrOf(uint64(100)),
},
},
Query: qdrant.NewQueryMulti([][]float32{
{0.1, 0.2},
{0.2, 0.1},
{0.8, 0.9},
}),
Using: qdrant.PtrOf("colbert"),
})
```
It is possible to combine all the above techniques in a single query:
```http
POST /collections/{collection_name}/points/query
{
"prefetch": {
"prefetch": {
"query": [1, 23, 45, 67], // <------ small byte vector
"using": "mrl_byte"
"limit": 1000
},
"query": [0.01, 0.45, 0.67, ...], // <-- full dense vector
"using": "full"
"limit": 100
},
"query": [ // <─┐
[0.1, 0.2, ...], // < │
[0.2, 0.1, ...], // < ├─ multi-vector
[0.8, 0.9, ...] // < │
], // <─┘
"using": "colbert",
"limit": 10
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
prefetch=models.Prefetch(
prefetch=models.Prefetch(
query=[1, 23, 45, 67], # <------ small byte vector
using="mrl_byte",
limit=1000,
),
query=[0.01, 0.45, 0.67, ...], # <-- full dense vector
using="full",
limit=100,
),
query=[
[0.1, 0.2, ...], # <─┐
[0.2, 0.1, ...], # < ├─ multi-vector
[0.8, 0.9, ...], # < ┘
],
using="colbert",
limit=10,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
prefetch: {
prefetch: {
query: [1, 23, 45, 67, ...], // <------------- small byte vector
using: 'mrl_byte',
limit: 1000,
},
query: [0.01, 0.45, 0.67, ...], // <-- full dense vector
using: 'full',
limit: 100,
},
query: [
[0.1, 0.2], // <─┐
[0.2, 0.1], // < ├─ multi-vector
[0.8, 0.9], // < ┘
],
using: 'colbert',
limit: 10,
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query(
QueryPointsBuilder::new("{collection_name}")
.add_prefetch(PrefetchQueryBuilder::default()
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0]))
.using("mlr_byte")
.limit(1000u64)
)
.query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
.using("full")
.limit(100u64)
)
.query(Query::new_nearest(vec![
vec![0.1, 0.2],
vec![0.2, 0.1],
vec![0.8, 0.9],
]))
.using("colbert")
.limit(10u64)
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PrefetchQuery;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.addPrefetch(
PrefetchQuery.newBuilder()
.addPrefetch(
PrefetchQuery.newBuilder()
.setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector
.setUsing("mrl_byte")
.setLimit(1000)
.build())
.setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector
.setUsing("full")
.setLimit(100)
.build())
.setQuery(
nearest(
new float[][] {
{0.1f, 0.2f}, // <─┐
{0.2f, 0.1f}, // < ├─ multi-vector
{0.8f, 0.9f} // < ┘
}))
.setUsing("colbert")
.setLimit(10)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
prefetch: new List <PrefetchQuery> {
new() {
Prefetch = {
new List <PrefetchQuery> {
new() {
Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector
Using = "mrl_byte",
Limit = 1000
},
}
},
Query = new float[] {0.01f, 0.45f, 0.67f}, // <-- dense vector
Using = "full",
Limit = 100
}
},
query: new float[][] {
[0.1f, 0.2f], // <─┐
[0.2f, 0.1f], // < ├─ multi-vector
[0.8f, 0.9f] // < ┘
},
usingVector: "colbert",
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Prefetch: []*qdrant.PrefetchQuery{
{
Prefetch: []*qdrant.PrefetchQuery{
{
Query: qdrant.NewQueryDense([]float32{1, 23, 45, 67}),
Using: qdrant.PtrOf("mrl_byte"),
Limit: qdrant.PtrOf(uint64(1000)),
},
},
Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}),
Limit: qdrant.PtrOf(uint64(100)),
Using: qdrant.PtrOf("full"),
},
},
Query: qdrant.NewQueryMulti([][]float32{
{0.1, 0.2},
{0.2, 0.1},
{0.8, 0.9},
}),
Using: qdrant.PtrOf("colbert"),
})
```
## Flexible interface
Other than the introduction of `prefetch`, the `Query API` has been designed to make querying simpler. Let's look at a few bonus features:
### Query by ID
Whenever you need to use a vector as an input, you can always use a [point ID](../points/#point-ids) instead.
```http
POST /collections/{collection_name}/points/query
{
"query": "43cf51e2-8777-4f52-bc74-c2cbde0c8b04" // <--- point id
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query="43cf51e2-8777-4f52-bc74-c2cbde0c8b04", # <--- point id
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Condition, Filter, PointId, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.query(
QueryPointsBuilder::new("{collection_name}")
.query(Query::new_nearest(PointId::new("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")))
)
.await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPoints;
import java.util.UUID;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(UUID.fromString("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")))
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: Guid.Parse("43cf51e2-8777-4f52-bc74-c2cbde0c8b04") // <--- point id
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryID(qdrant.NewID("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")),
})
```
The above example will fetch the default vector from the point with this id, and use it as the query vector.
If the `using` parameter is also specified, Qdrant will use the vector with that name.
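For instance, a short Python sketch that queries with the `"dense"` named vector (a hypothetical vector name) stored on that point:
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Use the "dense" named vector of the given point as the query vector.
client.query_points(
    collection_name="{collection_name}",
    query="43cf51e2-8777-4f52-bc74-c2cbde0c8b04",  # <--- point id
    using="dense",  # hypothetical named vector
)
```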
It is also possible to reference an ID from a different collection, by setting the `lookup_from` parameter.
```http
POST /collections/{collection_name}/points/query
{
"query": "43cf51e2-8777-4f52-bc74-c2cbde0c8b04", // <--- point id
"using": "512d-vector"
"lookup_from": {
"collection": "another_collection", // <--- other collection name
"vector": "image-512" // <--- vector name in the other collection
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
query="43cf51e2-8777-4f52-bc74-c2cbde0c8b04", # <--- point id
using="512d-vector",
lookup_from=models.LookupFrom(
collection="another_collection", # <--- other collection name
vector="image-512", # <--- vector name in the other collection
)
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id
using: '512d-vector',
lookup_from: {
collection: 'another_collection', // <--- other collection name
vector: 'image-512', // <--- vector name in the other collection
}
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{LookupLocationBuilder, PointId, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query(
QueryPointsBuilder::new("{collection_name}")
.query(Query::new_nearest(PointId::new("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")))
.using("512d-vector")
.lookup_from(
LookupLocationBuilder::new("another_collection")
.vector_name("image-512")
)
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.LookupLocation;
import io.qdrant.client.grpc.Points.QueryPoints;
import java.util.UUID;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.setQuery(nearest(UUID.fromString("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")))
.setUsing("512d-vector")
.setLookupFrom(
LookupLocation.newBuilder()
.setCollectionName("another_collection")
.setVectorName("image-512")
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
query: Guid.Parse("43cf51e2-8777-4f52-bc74-c2cbde0c8b04"), // <--- point id
usingVector: "512d-vector",
lookupFrom: new() {
CollectionName = "another_collection", // <--- other collection name
VectorName = "image-512" // <--- vector name in the other collection
}
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Query: qdrant.NewQueryID(qdrant.NewID("43cf51e2-8777-4f52-bc74-c2cbde0c8b04")),
Using: qdrant.PtrOf("512d-vector"),
LookupFrom: &qdrant.LookupLocation{
CollectionName: "another_collection",
VectorName: qdrant.PtrOf("image-512"),
},
})
```
In the case above, Qdrant will fetch the `"image-512"` vector from the specified point id in the
collection `another_collection`.
<aside role="status">
The fetched vector(s) must match the characteristics of the <code>using</code> vector, otherwise, an error will be returned.
</aside>
## Re-ranking with payload values
The Query API can retrieve points not only by vector similarity but also by the content of the payload.
There are two ways to make use of the payload in the query:
* Apply filters to the payload fields, to only get the points that match the filter.
* Order the results by the payload field.
Let's see an example of when this might be useful:
```http
POST /collections/{collection_name}/points/query
{
"prefetch": [
{
"query": [0.01, 0.45, 0.67, ...], // <-- dense vector
"filter": {
"must": {
"key": "color",
"match": {
"value": "red"
}
}
},
"limit": 10
},
{
"query": [0.01, 0.45, 0.67, ...], // <-- dense vector
"filter": {
"must": {
"key": "color",
"match": {
"value": "green"
}
}
},
"limit": 10
}
],
"query": { "order_by": "price" }
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points(
collection_name="{collection_name}",
prefetch=[
models.Prefetch(
query=[0.01, 0.45, 0.67, ...], # <-- dense vector
filter=models.Filter(
must=models.FieldCondition(
key="color",
match=models.Match(value="red"),
),
),
limit=10,
),
models.Prefetch(
query=[0.01, 0.45, 0.67, ...], # <-- dense vector
filter=models.Filter(
must=models.FieldCondition(
key="color",
match=models.Match(value="green"),
),
),
limit=10,
),
],
query=models.OrderByQuery(order_by="price"),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.query("{collection_name}", {
prefetch: [
{
query: [0.01, 0.45, 0.67], // <-- dense vector
filter: {
must: {
key: 'color',
match: {
value: 'red',
},
}
},
limit: 10,
},
{
query: [0.01, 0.45, 0.67], // <-- dense vector
filter: {
must: {
key: 'color',
match: {
value: 'green',
},
}
},
limit: 10,
},
],
query: {
order_by: 'price',
},
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Condition, Filter, PrefetchQueryBuilder, Query, QueryPointsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query(
QueryPointsBuilder::new("{collection_name}")
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
.filter(Filter::must([Condition::matches(
"color",
"red".to_string(),
)]))
.limit(10u64)
)
.add_prefetch(PrefetchQueryBuilder::default()
.query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
.filter(Filter::must([Condition::matches(
"color",
"green".to_string(),
)]))
.limit(10u64)
)
.query(Query::new_order_by("price"))
).await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.QueryFactory.nearest;
import static io.qdrant.client.QueryFactory.orderBy;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.PrefetchQuery;
import io.qdrant.client.grpc.Points.QueryPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryAsync(
QueryPoints.newBuilder()
.setCollectionName("{collection_name}")
.addPrefetch(
PrefetchQuery.newBuilder()
.setQuery(nearest(0.01f, 0.45f, 0.67f))
.setFilter(
Filter.newBuilder().addMust(matchKeyword("color", "red")).build())
.setLimit(10)
.build())
.addPrefetch(
PrefetchQuery.newBuilder()
.setQuery(nearest(0.01f, 0.45f, 0.67f))
.setFilter(
Filter.newBuilder().addMust(matchKeyword("color", "green")).build())
.setLimit(10)
.build())
.setQuery(orderBy("price"))
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.QueryAsync(
collectionName: "{collection_name}",
prefetch: new List <PrefetchQuery> {
new() {
Query = new float[] {
0.01f, 0.45f, 0.67f
},
Filter = MatchKeyword("color", "red"),
Limit = 10
},
new() {
Query = new float[] {
0.01f, 0.45f, 0.67f
},
Filter = MatchKeyword("color", "green"),
Limit = 10
}
},
query: (OrderBy) "price",
limit: 10
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "{collection_name}",
Prefetch: []*qdrant.PrefetchQuery{
{
Query: qdrant.NewQuery(0.01, 0.45, 0.67),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("color", "red"),
},
},
},
{
Query: qdrant.NewQuery(0.01, 0.45, 0.67),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("color", "green"),
},
},
},
},
Query: qdrant.NewQueryOrderBy(&qdrant.OrderBy{
Key: "price",
}),
})
```
In this example, we first fetch 10 points with the color `"red"` and then 10 points with the color `"green"`.
Then, we order the results by the price field.
This is how we can guarantee even sampling of both colors in the results and also get the cheapest ones first.
## Grouping
*Available as of v1.11.0*
It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results.
REST API ([Schema](https://api.qdrant.tech/master/api-reference/search/query-points-groups)):
```http
POST /collections/{collection_name}/points/query/groups
{
"query": [0.01, 0.45, 0.67],
group_by="document_id", # Path of the field to group by
limit=4, # Max amount of groups
group_size=2, # Max amount of points per group
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.query_points_groups(
collection_name="{collection_name}",
query=[0.01, 0.45, 0.67],
group_by="document_id",
limit=4,
group_size=2,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.queryGroups("{collection_name}", {
query: [0.01, 0.45, 0.67],
group_by: "document_id",
limit: 4,
group_size: 2,
});
```
```rust
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{Query, QueryPointGroupsBuilder};
let client = Qdrant::from_url("http://localhost:6334").build()?;
client.query_groups(
QueryPointGroupsBuilder::new("{collection_name}", "document_id")
.query(Query::from(vec![0.01, 0.45, 0.67]))
.limit(4u64)
.group_size(2u64)
).await?;
```
```java
import static io.qdrant.client.QueryFactory.nearest;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.QueryPointGroups;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.queryGroupsAsync(
QueryPointGroups.newBuilder()
.setCollectionName("{collection_name}")
.setGroupBy("document_id")
.setQuery(nearest(0.01f, 0.45f, 0.67f))
.setLimit(4)
.setGroupSize(2)
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
var client = new QdrantClient("localhost", 6334);
await client.QueryGroupsAsync(
collectionName: "{collection_name}",
groupBy: "document_id",
query: new float[] {
0.01f, 0.45f, 0.67f
},
limit: 4,
groupSize: 2
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{
CollectionName: "{collection_name}",
Query: qdrant.NewQuery(0.01, 0.45, 0.67),
GroupBy: "document_id",
GroupSize: qdrant.PtrOf(uint64(2)),
Limit: qdrant.PtrOf(uint64(4)),
})
```
For more information on the `grouping` capabilities refer to the reference documentation for search with [grouping](./search/#search-groups) and [lookup](./search/#lookup-in-groups).
| documentation/concepts/hybrid-queries.md |
---
title: Filtering
weight: 60
aliases:
- ../filtering
---
# Filtering
With Qdrant, you can set conditions when searching or retrieving points.
For example, you can impose conditions on both the [payload](../payload/) and the `id` of the point.
Setting additional conditions is important when it is impossible to express all the features of the object in the embedding.
Examples include a variety of business requirements: stock availability, user location, or desired price range.
## Filtering clauses
Qdrant allows you to combine conditions in clauses.
Clauses are different logical operations, such as `OR`, `AND`, and `NOT`.
Clauses can be recursively nested into each other so that you can reproduce an arbitrary boolean expression.
Let's take a look at the clauses implemented in Qdrant.
Suppose we have a set of points with the following payload:
```json
[
{ "id": 1, "city": "London", "color": "green" },
{ "id": 2, "city": "London", "color": "red" },
{ "id": 3, "city": "London", "color": "blue" },
{ "id": 4, "city": "Berlin", "color": "red" },
{ "id": 5, "city": "Moscow", "color": "green" },
{ "id": 6, "city": "Moscow", "color": "blue" }
]
```
### Must
Example:
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must": [
{ "key": "city", "match": { "value": "London" } },
{ "key": "color", "match": { "value": "red" } }
]
}
...
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(value="London"),
),
models.FieldCondition(
key="color",
match=models.MatchValue(value="red"),
),
]
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.scroll("{collection_name}", {
filter: {
must: [
{
key: "city",
match: { value: "London" },
},
{
key: "color",
match: { value: "red" },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::must([
Condition::matches("city", "london".to_string()),
Condition::matches("color", "red".to_string()),
])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addAllMust(
List.of(matchKeyword("city", "London"), matchKeyword("color", "red")))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
// The & operator combines two conditions in an AND conjunction (must)
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("city", "London") & MatchKeyword("color", "red")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
qdrant.NewMatch("color", "red"),
},
},
})
```
Filtered points would be:
```json
[{ "id": 2, "city": "London", "color": "red" }]
```
When using `must`, the clause becomes `true` only if every condition listed inside `must` is satisfied.
In this sense, `must` is equivalent to the operator `AND`.
### Should
Example:
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"should": [
{ "key": "city", "match": { "value": "London" } },
{ "key": "color", "match": { "value": "red" } }
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
should=[
models.FieldCondition(
key="city",
match=models.MatchValue(value="London"),
),
models.FieldCondition(
key="color",
match=models.MatchValue(value="red"),
),
]
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
should: [
{
key: "city",
match: { value: "London" },
},
{
key: "color",
match: { value: "red" },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::should([
Condition::matches("city", "london".to_string()),
Condition::matches("color", "red".to_string()),
])),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
import java.util.List;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addAllShould(
List.of(matchKeyword("city", "London"), matchKeyword("color", "red")))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
// | operator combines two conditions in an OR disjunction(should)
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("city", "London") | MatchKeyword("color", "red")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Should: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
qdrant.NewMatch("color", "red"),
},
},
})
```
Filtered points would be:
```json
[
{ "id": 1, "city": "London", "color": "green" },
{ "id": 2, "city": "London", "color": "red" },
{ "id": 3, "city": "London", "color": "blue" },
{ "id": 4, "city": "Berlin", "color": "red" }
]
```
When using `should`, the clause becomes `true` if at least one condition listed inside `should` is satisfied.
In this sense, `should` is equivalent to the operator `OR`.
### Must Not
Example:
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must_not": [
{ "key": "city", "match": { "value": "London" } },
{ "key": "color", "match": { "value": "red" } }
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must_not=[
models.FieldCondition(key="city", match=models.MatchValue(value="London")),
models.FieldCondition(key="color", match=models.MatchValue(value="red")),
]
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must_not: [
{
key: "city",
match: { value: "London" },
},
{
key: "color",
match: { value: "red" },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::must_not([
Condition::matches("city", "london".to_string()),
Condition::matches("color", "red".to_string()),
])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addAllMustNot(
List.of(matchKeyword("city", "London"), matchKeyword("color", "red")))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
// The ! operator negates the condition(must not)
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: !(MatchKeyword("city", "London") & MatchKeyword("color", "red"))
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
MustNot: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
qdrant.NewMatch("color", "red"),
},
},
})
```
Filtered points would be:
```json
[
{ "id": 5, "city": "Moscow", "color": "green" },
{ "id": 6, "city": "Moscow", "color": "blue" }
]
```
When using `must_not`, the clause becomes `true` only if none of the conditions listed inside `must_not` is satisfied.
In this sense, `must_not` is equivalent to the expression `(NOT A) AND (NOT B) AND (NOT C)`.
### Clauses combination
It is also possible to use several clauses simultaneously:
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must": [
{ "key": "city", "match": { "value": "London" } }
],
"must_not": [
{ "key": "color", "match": { "value": "red" } }
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.FieldCondition(key="city", match=models.MatchValue(value="London")),
],
must_not=[
models.FieldCondition(key="color", match=models.MatchValue(value="red")),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must: [
{
key: "city",
match: { value: "London" },
},
],
must_not: [
{
key: "color",
match: { value: "red" },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter {
must: vec![Condition::matches("city", "London".to_string())],
must_not: vec![Condition::matches("color", "red".to_string())],
..Default::default()
}),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addMust(matchKeyword("city", "London"))
.addMustNot(matchKeyword("color", "red"))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("city", "London") & !MatchKeyword("color", "red")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
MustNot: []*qdrant.Condition{
qdrant.NewMatch("color", "red"),
},
},
})
```
Filtered points would be:
```json
[
{ "id": 1, "city": "London", "color": "green" },
{ "id": 3, "city": "London", "color": "blue" }
]
```
In this case, the conditions are combined by `AND`.
Also, the conditions could be recursively nested. Example:
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must_not": [
{
"must": [
{ "key": "city", "match": { "value": "London" } },
{ "key": "color", "match": { "value": "red" } }
]
}
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must_not=[
models.Filter(
must=[
models.FieldCondition(
key="city", match=models.MatchValue(value="London")
),
models.FieldCondition(
key="color", match=models.MatchValue(value="red")
),
],
),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must_not: [
{
must: [
{
key: "city",
match: { value: "London" },
},
{
key: "color",
match: { value: "red" },
},
],
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::must_not([Filter::must(
[
Condition::matches("city", "London".to_string()),
Condition::matches("color", "red".to_string()),
],
)
.into()])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.filter;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addMustNot(
filter(
Filter.newBuilder()
.addAllMust(
List.of(
matchKeyword("city", "London"),
matchKeyword("color", "red")))
.build()))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: new Filter { MustNot = { MatchKeyword("city", "London") & MatchKeyword("color", "red") } }
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
MustNot: []*qdrant.Condition{
qdrant.NewFilterAsCondition(&qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
qdrant.NewMatch("color", "red"),
},
}),
},
},
})
```
Filtered points would be:
```json
[
{ "id": 1, "city": "London", "color": "green" },
{ "id": 3, "city": "London", "color": "blue" },
{ "id": 4, "city": "Berlin", "color": "red" },
{ "id": 5, "city": "Moscow", "color": "green" },
{ "id": 6, "city": "Moscow", "color": "blue" }
]
```
## Filtering conditions
Different types of values in payload correspond to different kinds of queries that we can apply to them.
Let's look at the existing condition variants and what types of data they apply to.
### Match
```json
{
"key": "color",
"match": {
"value": "red"
}
}
```
```python
models.FieldCondition(
key="color",
match=models.MatchValue(value="red"),
)
```
```typescript
{
key: 'color',
match: {value: 'red'}
}
```
```rust
Condition::matches("color", "red".to_string())
```
```java
matchKeyword("color", "red");
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
MatchKeyword("color", "red");
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewMatch("color", "red")
```
For the other types, the match condition will look exactly the same, except for the type used:
```json
{
"key": "count",
"match": {
"value": 0
}
}
```
```python
models.FieldCondition(
key="count",
match=models.MatchValue(value=0),
)
```
```typescript
{
key: 'count',
match: {value: 0}
}
```
```rust
Condition::matches("count", 0)
```
```java
import static io.qdrant.client.ConditionFactory.match;
match("count", 0);
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
Match("count", 0);
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewMatchInt("count", 0)
```
The simplest kind of condition is one that checks if the stored value equals the given one.
If several values are stored, at least one of them should match the condition.
You can apply it to [keyword](../payload/#keyword), [integer](../payload/#integer) and [bool](../payload/#bool) payloads.
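For example, matching on a boolean payload looks the same as matching on a keyword (a minimal Python sketch; the `in_stock` field name is only an illustration):
```python
models.FieldCondition(
    key="in_stock",  # hypothetical bool payload field
    match=models.MatchValue(value=True),
)
```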
### Match Any
*Available as of v1.1.0*
In case you want to check if the stored value is one of multiple values, you can use the Match Any condition.
Match Any works as a logical OR for the given values. It can also be described as an `IN` operator.
You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
Example:
```json
{
"key": "color",
"match": {
"any": ["black", "yellow"]
}
}
```
```python
models.FieldCondition(
key="color",
match=models.MatchAny(any=["black", "yellow"]),
)
```
```typescript
{
key: 'color',
match: {any: ['black', 'yellow']}
}
```
```rust
Condition::matches("color", vec!["black".to_string(), "yellow".to_string()])
```
```java
import static io.qdrant.client.ConditionFactory.matchKeywords;
matchKeywords("color", List.of("black", "yellow"));
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
Match("color", ["black", "yellow"]);
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewMatchKeywords("color", "black", "yellow")
```
In this example, the condition will be satisfied if the stored value is either `black` or `yellow`.
If the stored value is an array, it should have at least one value matching any of the given values. E.g. if the stored value is `["black", "green"]`, the condition will be satisfied, because `"black"` is in `["black", "yellow"]`.
### Match Except
*Available as of v1.2.0*
In case you want to check if the stored value is not one of multiple values, you can use the Match Except condition.
Match Except works as a logical NOR for the given values.
It can also be described as a `NOT IN` operator.
You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
Example:
```json
{
"key": "color",
"match": {
"except": ["black", "yellow"]
}
}
```
```python
models.FieldCondition(
key="color",
match=models.MatchExcept(**{"except": ["black", "yellow"]}),
)
```
```typescript
{
key: 'color',
match: {except: ['black', 'yellow']}
}
```
```rust
use qdrant_client::qdrant::r#match::MatchValue;
Condition::matches(
"color",
!MatchValue::from(vec!["black".to_string(), "yellow".to_string()]),
)
```
```java
import static io.qdrant.client.ConditionFactory.matchExceptKeywords;
matchExceptKeywords("color", List.of("black", "yellow"));
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
Match("color", ["black", "yellow"]);
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewMatchExcept("color", "black", "yellow")
```
In this example, the condition will be satisfied if the stored value is neither `black` nor `yellow`.
If the stored value is an array, it should have at least one value not matching any of the given values. E.g. if the stored value is `["black", "green"]`, the condition will be satisfied, because `"green"` does not match `"black"` nor `"yellow"`.
### Nested key
*Available as of v1.1.0*
Since payloads are arbitrary JSON objects, you will likely need to filter on nested fields.
For convenience, we use a syntax similar to what can be found in the [Jq](https://stedolan.github.io/jq/manual/#Basicfilters) project.
Suppose we have a set of points with the following payload:
```json
[
{
"id": 1,
"country": {
"name": "Germany",
"cities": [
{
"name": "Berlin",
"population": 3.7,
"sightseeing": ["Brandenburg Gate", "Reichstag"]
},
{
"name": "Munich",
"population": 1.5,
"sightseeing": ["Marienplatz", "Olympiapark"]
}
]
}
},
{
"id": 2,
"country": {
"name": "Japan",
"cities": [
{
"name": "Tokyo",
"population": 9.3,
"sightseeing": ["Tokyo Tower", "Tokyo Skytree"]
},
{
"name": "Osaka",
"population": 2.7,
"sightseeing": ["Osaka Castle", "Universal Studios Japan"]
}
]
}
}
]
```
You can search on a nested field using a dot notation.
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"should": [
{
"key": "country.name",
"match": {
"value": "Germany"
}
}
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
should=[
models.FieldCondition(
key="country.name", match=models.MatchValue(value="Germany")
),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
should: [
{
key: "country.name",
match: { value: "Germany" },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::should([
Condition::matches("country.name", "Germany".to_string()),
])),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addShould(matchKeyword("country.name", "Germany"))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(collectionName: "{collection_name}", filter: MatchKeyword("country.name", "Germany"));
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Should: []*qdrant.Condition{
qdrant.NewMatch("country.name", "Germany"),
},
},
})
```
You can also search through arrays by projecting inner values using the `[]` syntax.
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"should": [
{
"key": "country.cities[].population",
"range": {
"gte": 9.0,
}
}
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
should=[
models.FieldCondition(
key="country.cities[].population",
range=models.Range(
gt=None,
gte=9.0,
lt=None,
lte=None,
),
),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
should: [
{
key: "country.cities[].population",
range: {
gt: null,
gte: 9.0,
lt: null,
lte: null,
},
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, Range, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::should([
Condition::range(
"country.cities[].population",
Range {
gte: Some(9.0),
..Default::default()
},
),
])),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.range;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.Range;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addShould(
range(
"country.cities[].population",
Range.newBuilder().setGte(9.0).build()))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: Range("country.cities[].population", new Qdrant.Client.Grpc.Range { Gte = 9.0 })
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Should: []*qdrant.Condition{
qdrant.NewRange("country.cities[].population", &qdrant.Range{
Gte: qdrant.PtrOf(9.0),
}),
},
},
})
```
This query would only output the point with id 2, as only Japan has a city with a population of 9.0 or greater.
And the leaf nested field can also be an array.
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"should": [
{
"key": "country.cities[].sightseeing",
"match": {
"value": "Osaka Castle"
}
}
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
should=[
models.FieldCondition(
key="country.cities[].sightseeing",
match=models.MatchValue(value="Osaka Castle"),
),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
should: [
{
key: "country.cities[].sightseeing",
match: { value: "Osaka Castle" },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::should([
Condition::matches("country.cities[].sightseeing", "Osaka Castle".to_string()),
])),
)
.await?;
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addShould(matchKeyword("country.cities[].sightseeing", "Germany"))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("country.cities[].sightseeing", "Germany")
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Should: []*qdrant.Condition{
qdrant.NewMatch("country.cities[].sightseeing", "Germany"),
},
},
})
```
This query would only output the point with id 2, as only Japan has a city with "Osaka Castle" listed in its sightseeing.
### Nested object filter
*Available as of v1.2.0*
By default, conditions take into account the entire payload of a point.
For instance, given two points with the following payload:
```json
[
{
"id": 1,
"dinosaur": "t-rex",
"diet": [
{ "food": "leaves", "likes": false},
{ "food": "meat", "likes": true}
]
},
{
"id": 2,
"dinosaur": "diplodocus",
"diet": [
{ "food": "leaves", "likes": true},
{ "food": "meat", "likes": false}
]
}
]
```
The following query would match both points:
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must": [
{
"key": "diet[].food",
"match": {
"value": "meat"
}
},
{
"key": "diet[].likes",
"match": {
"value": true
}
}
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.FieldCondition(
key="diet[].food", match=models.MatchValue(value="meat")
),
models.FieldCondition(
key="diet[].likes", match=models.MatchValue(value=True)
),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must: [
{
key: "diet[].food",
match: { value: "meat" },
},
{
key: "diet[].likes",
match: { value: true },
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::must([
Condition::matches("diet[].food", "meat".to_string()),
Condition::matches("diet[].likes", true),
])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.match;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
QdrantClient client =
new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addAllMust(
List.of(matchKeyword("diet[].food", "meat"), match("diet[].likes", true)))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: MatchKeyword("diet[].food", "meat") & Match("diet[].likes", true)
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("diet[].food", "meat"),
qdrant.NewMatchBool("diet[].likes", true),
},
},
})
```
This happens because both points match the two conditions:
- the "t-rex" matches food=meat on `diet[1].food` and likes=true on `diet[1].likes`
- the "diplodocus" matches food=meat on `diet[1].food` and likes=true on `diet[0].likes`
To retrieve only the points that match the conditions on a per-element basis (only the point with id 1 in this example), you need to use a nested object filter.
Nested object filters allow arrays of objects to be queried independently of each other.
It is achieved by using the `nested` condition type formed by a payload key to focus on and a filter to apply.
The key should point to an array of objects and can be used with or without the bracket notation ("data" or "data[]").
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must": [{
"nested": {
"key": "diet",
"filter":{
"must": [
{
"key": "food",
"match": {
"value": "meat"
}
},
{
"key": "likes",
"match": {
"value": true
}
}
]
}
}
}]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.NestedCondition(
nested=models.Nested(
key="diet",
filter=models.Filter(
must=[
models.FieldCondition(
key="food", match=models.MatchValue(value="meat")
),
models.FieldCondition(
key="likes", match=models.MatchValue(value=True)
),
]
),
)
)
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must: [
{
nested: {
key: "diet",
filter: {
must: [
{
key: "food",
match: { value: "meat" },
},
{
key: "likes",
match: { value: true },
},
],
},
},
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::must([NestedCondition {
key: "diet".to_string(),
filter: Some(Filter::must([
Condition::matches("food", "meat".to_string()),
Condition::matches("likes", true),
])),
}
.into()])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.match;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.ConditionFactory.nested;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addMust(
nested(
"diet",
Filter.newBuilder()
.addAllMust(
List.of(
matchKeyword("food", "meat"), match("likes", true)))
.build()))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: Nested("diet", MatchKeyword("food", "meat") & Match("likes", true))
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewNestedFilter("diet", &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("food", "meat"),
qdrant.NewMatchBool("likes", true),
},
}),
},
},
})
```
The matching logic is modified to be applied at the level of an array element within the payload.
Nested filters work in the same way as if the nested filter was applied to a single element of the array at a time.
The parent point is considered to match the condition if at least one element of the array matches the nested filter.
**Limitations**
The `has_id` condition is not supported within the nested object filter. If you need it, place it in an adjacent `must` clause.
```http
POST /collections/{collection_name}/points/scroll
{
"filter":{
"must":[
{
"nested":{
"key":"diet",
"filter":{
"must":[
{
"key":"food",
"match":{
"value":"meat"
}
},
{
"key":"likes",
"match":{
"value":true
}
}
]
}
}
},
{
"has_id":[
1
]
}
]
}
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.NestedCondition(
nested=models.Nested(
key="diet",
filter=models.Filter(
must=[
models.FieldCondition(
key="food", match=models.MatchValue(value="meat")
),
models.FieldCondition(
key="likes", match=models.MatchValue(value=True)
),
]
),
)
),
models.HasIdCondition(has_id=[1]),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must: [
{
nested: {
key: "diet",
filter: {
must: [
{
key: "food",
match: { value: "meat" },
},
{
key: "likes",
match: { value: true },
},
],
},
},
},
{
has_id: [1],
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPointsBuilder};
client
.scroll(
ScrollPointsBuilder::new("{collection_name}").filter(Filter::must([
NestedCondition {
key: "diet".to_string(),
filter: Some(Filter::must([
Condition::matches("food", "meat".to_string()),
Condition::matches("likes", true),
])),
}
.into(),
Condition::has_id([1]),
])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.hasId;
import static io.qdrant.client.ConditionFactory.match;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.ConditionFactory.nested;
import static io.qdrant.client.PointIdFactory.id;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addMust(
nested(
"diet",
Filter.newBuilder()
.addAllMust(
List.of(
matchKeyword("food", "meat"), match("likes", true)))
.build()))
.addMust(hasId(id(1)))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(
collectionName: "{collection_name}",
filter: Nested("diet", MatchKeyword("food", "meat") & Match("likes", true)) & HasId(1)
);
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewNestedFilter("diet", &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("food", "meat"),
qdrant.NewMatchBool("likes", true),
},
}),
qdrant.NewHasID(qdrant.NewIDNum(1)),
},
},
})
```
### Full Text Match
*Available as of v0.10.0*
A special case of the `match` condition is the `text` match condition.
It allows you to search for a specific substring, token or phrase within the text field.
Exact texts that will match the condition depend on full-text index configuration.
The configuration is defined during index creation and described in the [full-text index](../indexing/#full-text-index) section.
If there is no full-text index for the field, the condition works as an exact substring match.
```json
{
"key": "description",
"match": {
"text": "good cheap"
}
}
```
```python
models.FieldCondition(
key="description",
match=models.MatchText(text="good cheap"),
)
```
```typescript
{
key: 'description',
match: {text: 'good cheap'}
}
```
```rust
use qdrant_client::qdrant::Condition;
Condition::matches_text("description", "good cheap")
```
```java
import static io.qdrant.client.ConditionFactory.matchText;
matchText("description", "good cheap");
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
MatchText("description", "good cheap");
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewMatchText("description", "good cheap")
```
If the query has several words, then the condition will be satisfied only if all of them are present in the text.
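As with any other condition, the text match can be placed inside a filter, for example in a scroll request (a minimal Python sketch, assuming the client from the earlier examples):
```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="description",
                match=models.MatchText(text="good cheap"),
            )
        ]
    ),
)
```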
### Range
```json
{
"key": "price",
"range": {
"gt": null,
"gte": 100.0,
"lt": null,
"lte": 450.0
}
}
```
```python
models.FieldCondition(
key="price",
range=models.Range(
gt=None,
gte=100.0,
lt=None,
lte=450.0,
),
)
```
```typescript
{
key: 'price',
range: {
gt: null,
gte: 100.0,
lt: null,
lte: 450.0
}
}
```
```rust
use qdrant_client::qdrant::{Condition, Range};
Condition::range(
"price",
Range {
gt: None,
gte: Some(100.0),
lt: None,
lte: Some(450.0),
},
)
```
```java
import static io.qdrant.client.ConditionFactory.range;
import io.qdrant.client.grpc.Points.Range;
range("price", Range.newBuilder().setGte(100.0).setLte(450).build());
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
Range("price", new Qdrant.Client.Grpc.Range { Gte = 100.0, Lte = 450 });
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewRange("price", &qdrant.Range{
Gte: qdrant.PtrOf(100.0),
Lte: qdrant.PtrOf(450.0),
})
```
The `range` condition sets the range of possible values for stored payload values.
If several values are stored, at least one of them should match the condition.
Comparisons that can be used:
- `gt` - greater than
- `gte` - greater than or equal
- `lt` - less than
- `lte` - less than or equal
Can be applied to [float](../payload/#float) and [integer](../payload/#integer) payloads.
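For instance, a scroll request returning only points with a `price` between 100.0 and 450.0 could look like this (a minimal Python sketch, assuming the client from the earlier examples):
```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="price",
                range=models.Range(gte=100.0, lte=450.0),
            )
        ]
    ),
)
```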
### Datetime Range
The datetime range is a unique range condition, used for [datetime](../payload/#datetime) payloads, which supports RFC 3339 formats.
You do not need to convert dates to UNIX timestamps. During comparison, timestamps are parsed and converted to UTC.
_Available as of v1.8.0_
```json
{
"key": "date",
"range": {
"gt": "2023-02-08T10:49:00Z",
"gte": null,
"lt": null,
"lte": "2024-01-31 10:14:31Z"
}
}
```
```python
models.FieldCondition(
key="date",
range=models.DatetimeRange(
gt="2023-02-08T10:49:00Z",
gte=None,
lt=None,
lte="2024-01-31T10:14:31Z",
),
)
```
```typescript
{
key: 'date',
range: {
gt: '2023-02-08T10:49:00Z',
gte: null,
lt: null,
lte: '2024-01-31T10:14:31Z'
}
}
```
```rust
use qdrant_client::qdrant::{Condition, DatetimeRange, Timestamp};
Condition::datetime_range(
"date",
DatetimeRange {
gt: Some(Timestamp::date_time(2023, 2, 8, 10, 49, 0).unwrap()),
gte: None,
lt: None,
lte: Some(Timestamp::date_time(2024, 1, 31, 10, 14, 31).unwrap()),
},
)
```
```java
import static io.qdrant.client.ConditionFactory.datetimeRange;
import com.google.protobuf.Timestamp;
import io.qdrant.client.grpc.Points.DatetimeRange;
import java.time.Instant;
long gt = Instant.parse("2023-02-08T10:49:00Z").getEpochSecond();
long lte = Instant.parse("2024-01-31T10:14:31Z").getEpochSecond();
datetimeRange("date",
DatetimeRange.newBuilder()
.setGt(Timestamp.newBuilder().setSeconds(gt))
.setLte(Timestamp.newBuilder().setSeconds(lte))
.build());
```
```csharp
using Qdrant.Client.Grpc;
Conditions.DatetimeRange(
field: "date",
gt: new DateTime(2023, 2, 8, 10, 49, 0, DateTimeKind.Utc),
lte: new DateTime(2024, 1, 31, 10, 14, 31, DateTimeKind.Utc)
);
```
```go
import (
"time"
"github.com/qdrant/go-client/qdrant"
"google.golang.org/protobuf/types/known/timestamppb"
)
qdrant.NewDatetimeRange("date", &qdrant.DatetimeRange{
Gt: timestamppb.New(time.Date(2023, 2, 8, 10, 49, 0, 0, time.UTC)),
Lte: timestamppb.New(time.Date(2024, 1, 31, 10, 14, 31, 0, time.UTC)),
})
```
### UUID Match
_Available as of v1.11.0_
Matching of UUID values works similarly to the regular `match` condition for strings.
Functionally, it works exactly the same with `keyword` and `uuid` indexes, but the `uuid` index is more memory-efficient.
```json
{
"key": "uuid",
"match": {
"uuid": "f47ac10b-58cc-4372-a567-0e02b2c3d479"
}
}
```
```python
models.FieldCondition(
key="uuid",
    match=models.MatchValue(value="f47ac10b-58cc-4372-a567-0e02b2c3d479"),
)
```
```typescript
{
key: 'uuid',
  match: {value: 'f47ac10b-58cc-4372-a567-0e02b2c3d479'}
}
```
```rust
Condition::matches("uuid", "f47ac10b-58cc-4372-a567-0e02b2c3d479".to_string())
```
```java
matchKeyword("uuid", "f47ac10b-58cc-4372-a567-0e02b2c3d479");
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
MatchKeyword("uuid", "f47ac10b-58cc-4372-a567-0e02b2c3d479");
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewMatch("uuid", "f47ac10b-58cc-4372-a567-0e02b2c3d479")
```
### Geo
#### Geo Bounding Box
```json
{
"key": "location",
"geo_bounding_box": {
"bottom_right": {
"lon": 13.455868,
"lat": 52.495862
},
"top_left": {
"lon": 13.403683,
"lat": 52.520711
}
}
}
```
```python
models.FieldCondition(
key="location",
geo_bounding_box=models.GeoBoundingBox(
bottom_right=models.GeoPoint(
lon=13.455868,
lat=52.495862,
),
top_left=models.GeoPoint(
lon=13.403683,
lat=52.520711,
),
),
)
```
```typescript
{
key: 'location',
geo_bounding_box: {
bottom_right: {
lon: 13.455868,
lat: 52.495862
},
top_left: {
lon: 13.403683,
lat: 52.520711
}
}
}
```
```rust
use qdrant_client::qdrant::{Condition, GeoBoundingBox, GeoPoint};
Condition::geo_bounding_box(
"location",
GeoBoundingBox {
bottom_right: Some(GeoPoint {
lon: 13.455868,
lat: 52.495862,
}),
top_left: Some(GeoPoint {
lon: 13.403683,
lat: 52.520711,
}),
},
)
```
```java
import static io.qdrant.client.ConditionFactory.geoBoundingBox;
geoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868);
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
GeoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868);
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewGeoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868)
```
It matches with `location`s inside a rectangle defined by the coordinates of the upper left corner in `top_left` and the coordinates of the lower right corner in `bottom_right`.
#### Geo Radius
```json
{
"key": "location",
"geo_radius": {
"center": {
"lon": 13.403683,
"lat": 52.520711
},
"radius": 1000.0
}
}
```
```python
models.FieldCondition(
key="location",
geo_radius=models.GeoRadius(
center=models.GeoPoint(
lon=13.403683,
lat=52.520711,
),
radius=1000.0,
),
)
```
```typescript
{
key: 'location',
geo_radius: {
center: {
lon: 13.403683,
lat: 52.520711
},
radius: 1000.0
}
}
```
```rust
use qdrant_client::qdrant::{Condition, GeoPoint, GeoRadius};
Condition::geo_radius(
"location",
GeoRadius {
center: Some(GeoPoint {
lon: 13.403683,
lat: 52.520711,
}),
radius: 1000.0,
},
)
```
```java
import static io.qdrant.client.ConditionFactory.geoRadius;
geoRadius("location", 52.520711, 13.403683, 1000.0f);
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
GeoRadius("location", 52.520711, 13.403683, 1000.0f);
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewGeoRadius("location", 52.520711, 13.403683, 1000.0)
```
It matches with `location`s inside a circle with the `center` at the center and a radius of `radius` meters.
If several values are stored, at least one of them should match the condition.
These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).
#### Geo Polygon
Geo Polygon search is useful when you want to find points inside an irregularly shaped area, for example a country boundary or a forest boundary. A polygon always has an exterior ring and may optionally include interior rings. A lake with an island would be an example of an interior ring. If you wanted to find points in the water but not on the island, you would make an interior ring for the island.
When defining a ring, you must pick either a clockwise or counterclockwise ordering for your points. The first and last point of the polygon must be the same.
Currently, we only support unprojected global coordinates (decimal degrees longitude and latitude) and we are datum agnostic.
```json
{
"key": "location",
"geo_polygon": {
"exterior": {
"points": [
{ "lon": -70.0, "lat": -70.0 },
{ "lon": 60.0, "lat": -70.0 },
{ "lon": 60.0, "lat": 60.0 },
{ "lon": -70.0, "lat": 60.0 },
{ "lon": -70.0, "lat": -70.0 }
]
},
"interiors": [
{
"points": [
{ "lon": -65.0, "lat": -65.0 },
{ "lon": 0.0, "lat": -65.0 },
{ "lon": 0.0, "lat": 0.0 },
{ "lon": -65.0, "lat": 0.0 },
{ "lon": -65.0, "lat": -65.0 }
]
}
]
}
}
```
```python
models.FieldCondition(
key="location",
geo_polygon=models.GeoPolygon(
exterior=models.GeoLineString(
points=[
models.GeoPoint(
lon=-70.0,
lat=-70.0,
),
models.GeoPoint(
lon=60.0,
lat=-70.0,
),
models.GeoPoint(
lon=60.0,
lat=60.0,
),
models.GeoPoint(
lon=-70.0,
lat=60.0,
),
models.GeoPoint(
lon=-70.0,
lat=-70.0,
),
]
),
interiors=[
models.GeoLineString(
points=[
models.GeoPoint(
lon=-65.0,
lat=-65.0,
),
models.GeoPoint(
lon=0.0,
lat=-65.0,
),
models.GeoPoint(
lon=0.0,
lat=0.0,
),
models.GeoPoint(
lon=-65.0,
lat=0.0,
),
models.GeoPoint(
lon=-65.0,
lat=-65.0,
),
]
)
],
),
)
```
```typescript
{
key: 'location',
geo_polygon: {
exterior: {
points: [
{
lon: -70.0,
lat: -70.0
},
{
lon: 60.0,
lat: -70.0
},
{
lon: 60.0,
lat: 60.0
},
{
lon: -70.0,
lat: 60.0
},
{
lon: -70.0,
lat: -70.0
}
]
},
interiors: {
points: [
{
lon: -65.0,
lat: -65.0
},
{
lon: 0.0,
lat: -65.0
},
{
lon: 0.0,
lat: 0.0
},
{
lon: -65.0,
lat: 0.0
},
{
lon: -65.0,
lat: -65.0
}
]
}
}
}
```
```rust
use qdrant_client::qdrant::{Condition, GeoLineString, GeoPoint, GeoPolygon};
Condition::geo_polygon(
"location",
GeoPolygon {
exterior: Some(GeoLineString {
points: vec![
GeoPoint {
lon: -70.0,
lat: -70.0,
},
GeoPoint {
lon: 60.0,
lat: -70.0,
},
GeoPoint {
lon: 60.0,
lat: 60.0,
},
GeoPoint {
lon: -70.0,
lat: 60.0,
},
GeoPoint {
lon: -70.0,
lat: -70.0,
},
],
}),
interiors: vec![GeoLineString {
points: vec![
GeoPoint {
lon: -65.0,
lat: -65.0,
},
GeoPoint {
lon: 0.0,
lat: -65.0,
},
GeoPoint { lon: 0.0, lat: 0.0 },
GeoPoint {
lon: -65.0,
lat: 0.0,
},
GeoPoint {
lon: -65.0,
lat: -65.0,
},
],
}],
},
)
```
```java
import static io.qdrant.client.ConditionFactory.geoPolygon;
import io.qdrant.client.grpc.Points.GeoLineString;
import io.qdrant.client.grpc.Points.GeoPoint;
geoPolygon(
"location",
GeoLineString.newBuilder()
.addAllPoints(
List.of(
GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build(),
GeoPoint.newBuilder().setLon(60.0).setLat(-70.0).build(),
GeoPoint.newBuilder().setLon(60.0).setLat(60.0).build(),
GeoPoint.newBuilder().setLon(-70.0).setLat(60.0).build(),
GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build()))
.build(),
List.of(
GeoLineString.newBuilder()
.addAllPoints(
List.of(
GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build(),
GeoPoint.newBuilder().setLon(0.0).setLat(-65.0).build(),
GeoPoint.newBuilder().setLon(0.0).setLat(0.0).build(),
GeoPoint.newBuilder().setLon(-65.0).setLat(0.0).build(),
GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build()))
.build()));
```
```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
GeoPolygon(
field: "location",
exterior: new GeoLineString
{
Points =
{
new GeoPoint { Lat = -70.0, Lon = -70.0 },
new GeoPoint { Lat = 60.0, Lon = -70.0 },
new GeoPoint { Lat = 60.0, Lon = 60.0 },
new GeoPoint { Lat = -70.0, Lon = 60.0 },
new GeoPoint { Lat = -70.0, Lon = -70.0 }
}
},
interiors: [
new()
{
Points =
{
new GeoPoint { Lat = -65.0, Lon = -65.0 },
new GeoPoint { Lat = 0.0, Lon = -65.0 },
new GeoPoint { Lat = 0.0, Lon = 0.0 },
new GeoPoint { Lat = -65.0, Lon = 0.0 },
new GeoPoint { Lat = -65.0, Lon = -65.0 }
}
}
]
);
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewGeoPolygon("location",
&qdrant.GeoLineString{
Points: []*qdrant.GeoPoint{
{Lat: -70, Lon: -70},
{Lat: 60, Lon: -70},
{Lat: 60, Lon: 60},
{Lat: -70, Lon: 60},
{Lat: -70, Lon: -70},
},
}, &qdrant.GeoLineString{
Points: []*qdrant.GeoPoint{
{Lat: -65, Lon: -65},
{Lat: 0, Lon: -65},
{Lat: 0, Lon: 0},
{Lat: -65, Lon: 0},
{Lat: -65, Lon: -65},
},
})
```
A location matches if it lies inside or on the boundary of the polygon's exterior ring and is not inside any of the interior rings.
If several location values are stored for a point, the point becomes a candidate for the result set if any of them matches.
These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).
### Values count
In addition to the direct value comparison, it is also possible to filter by the amount of values.
For example, given the data:
```json
[
{ "id": 1, "name": "product A", "comments": ["Very good!", "Excellent"] },
{ "id": 2, "name": "product B", "comments": ["meh", "expected more", "ok"] }
]
```
We can perform the search only among the items with more than two comments:
```json
{
"key": "comments",
"values_count": {
"gt": 2
}
}
```
```python
models.FieldCondition(
key="comments",
values_count=models.ValuesCount(gt=2),
)
```
```typescript
{
key: 'comments',
values_count: {gt: 2}
}
```
```rust
use qdrant_client::qdrant::{Condition, ValuesCount};
Condition::values_count(
"comments",
ValuesCount {
gt: Some(2),
..Default::default()
},
)
```
```java
import static io.qdrant.client.ConditionFactory.valuesCount;
import io.qdrant.client.grpc.Points.ValuesCount;
valuesCount("comments", ValuesCount.newBuilder().setGt(2).build());
```
```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
ValuesCount("comments", new ValuesCount { Gt = 2 });
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewValuesCount("comments", &qdrant.ValuesCount{
Gt: qdrant.PtrOf(uint64(2)),
})
```
The result would be:
```json
[{ "id": 2, "name": "product B", "comments": ["meh", "expected more", "ok"] }]
```
If the stored value is not an array, it is assumed that the number of values equals 1.
### Is Empty
Sometimes it is also useful to filter out records that are missing some value.
The `IsEmpty` condition may help you with that:
```json
{
"is_empty": {
"key": "reports"
}
}
```
```python
models.IsEmptyCondition(
is_empty=models.PayloadField(key="reports"),
)
```
```typescript
{
is_empty: {
key: "reports";
}
}
```
```rust
use qdrant_client::qdrant::Condition;
Condition::is_empty("reports")
```
```java
import static io.qdrant.client.ConditionFactory.isEmpty;
isEmpty("reports");
```
```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
IsEmpty("reports");
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewIsEmpty("reports")
```
This condition will match all records where the field `reports` either does not exist, or has `null` or `[]` value.
<aside role="status">The <b>IsEmpty</b> is often useful together with the logical negation <b>must_not</b>. In this case all non-empty values will be selected.</aside>
### Is Null
It is not possible to test for `NULL` values with the `match` condition.
We have to use `IsNull` condition instead:
```json
{
"is_null": {
"key": "reports"
}
}
```
```python
models.IsNullCondition(
is_null=models.PayloadField(key="reports"),
)
```
```typescript
{
is_null: {
key: "reports";
}
}
```
```rust
use qdrant_client::qdrant::Condition;
Condition::is_null("reports")
```
```java
import static io.qdrant.client.ConditionFactory.isNull;
isNull("reports");
```
```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;
IsNull("reports");
```
```go
import "github.com/qdrant/go-client/qdrant"
qdrant.NewIsNull("reports")
```
This condition will match all records where the field `reports` exists and has `NULL` value.
### Has id
This type of query is not related to payload, but can be very useful in some situations.
For example, the user could mark some specific search results as irrelevant, or you may want to search only among a specified set of points.
```http
POST /collections/{collection_name}/points/scroll
{
"filter": {
"must": [
{ "has_id": [1,3,5,7,9,11] }
]
}
...
}
```
```python
client.scroll(
collection_name="{collection_name}",
scroll_filter=models.Filter(
must=[
models.HasIdCondition(has_id=[1, 3, 5, 7, 9, 11]),
],
),
)
```
```typescript
client.scroll("{collection_name}", {
filter: {
must: [
{
has_id: [1, 3, 5, 7, 9, 11],
},
],
},
});
```
```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
use qdrant_client::Qdrant;
let client = Qdrant::from_url("http://localhost:6334").build()?;
client
.scroll(
ScrollPointsBuilder::new("{collection_name}")
.filter(Filter::must([Condition::has_id([1, 3, 5, 7, 9, 11])])),
)
.await?;
```
```java
import java.util.List;
import static io.qdrant.client.ConditionFactory.hasId;
import static io.qdrant.client.PointIdFactory.id;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
client
.scrollAsync(
ScrollPoints.newBuilder()
.setCollectionName("{collection_name}")
.setFilter(
Filter.newBuilder()
.addMust(hasId(List.of(id(1), id(3), id(5), id(7), id(9), id(11))))
.build())
.build())
.get();
```
```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;
var client = new QdrantClient("localhost", 6334);
await client.ScrollAsync(collectionName: "{collection_name}", filter: HasId([1, 3, 5, 7, 9, 11]));
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
client.Scroll(context.Background(), &qdrant.ScrollPoints{
CollectionName: "{collection_name}",
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewHasID(
qdrant.NewIDNum(1),
qdrant.NewIDNum(3),
qdrant.NewIDNum(5),
qdrant.NewIDNum(7),
qdrant.NewIDNum(9),
qdrant.NewIDNum(11),
),
},
},
})
```
Filtered points would be:
```json
[
{ "id": 1, "city": "London", "color": "green" },
{ "id": 3, "city": "London", "color": "blue" },
{ "id": 5, "city": "Moscow", "color": "green" }
]
```
| documentation/concepts/filtering.md |
---
title: Concepts
weight: 11
# If the index.md file is empty, the link to the section will be hidden from the sidebar
---
# Concepts
Think of these concepts as a glossary. Each of these concepts includes a link to
detailed information, usually with examples. If you're new to AI, these concepts
can help you learn more about AI and the Qdrant approach.
## Collections
[Collections](/documentation/concepts/collections/) define a named set of points that you can use for your search.
## Payload
A [Payload](/documentation/concepts/payload/) describes information that you can store with vectors.
## Points
[Points](/documentation/concepts/points/) are records that consist of a vector and an optional payload.
## Search
[Search](/documentation/concepts/search/) describes _similarity search_, which retrieves objects whose vectors are close to each other in vector space.
## Explore
[Explore](/documentation/concepts/explore/) includes several APIs for exploring data in your collections.
## Hybrid Queries
[Hybrid Queries](/documentation/concepts/hybrid-queries/) combine multiple queries or perform them in more than one stage.
## Filtering
[Filtering](/documentation/concepts/filtering/) defines various database-style clauses, conditions, and more.
## Optimizer
[Optimizer](/documentation/concepts/optimizer/) describes options to rebuild
database structures for faster search. They include a vacuum, a merge, and an
indexing optimizer.
## Storage
[Storage](/documentation/concepts/storage/) describes the configuration of storage in segments, which include indexes and an ID mapper.
## Indexing
[Indexing](/documentation/concepts/indexing/) lists and describes available indexes. They include payload, vector, sparse vector, and a filterable index.
## Snapshots
[Snapshots](/documentation/concepts/snapshots/) describe the backup/restore process (and more) for each node at specific times.
| documentation/concepts/_index.md |
---
title: Bulk Upload Vectors
weight: 13
---
# Bulk upload a large number of vectors
Uploading a large-scale dataset fast might be a challenge, but Qdrant has a few tricks to help you with that.
The first important detail about data uploading is that the bottleneck is usually located on the client side, not on the server side.
This means that if you are uploading a large dataset, you should prefer a high-performance client library.
We recommend using our [Rust client library](https://github.com/qdrant/rust-client) for this purpose, as it is the fastest client library available for Qdrant.
If you are not using Rust, you might want to consider parallelizing your upload process.
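For example, the Python client can parallelize uploads through the `parallel` and `batch_size` arguments of `upload_points`. The sketch below is only an illustration; `my_vectors` is a placeholder for your own data source:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

def iter_points():
    # my_vectors is a placeholder for your own vectors
    for idx, vector in enumerate(my_vectors):
        yield models.PointStruct(id=idx, vector=vector, payload={})

client.upload_points(
    collection_name="{collection_name}",
    points=iter_points(),
    batch_size=256,  # points per request
    parallel=4,      # number of parallel upload workers
)
```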
## Disable indexing during upload
In case you are doing an initial upload of a large dataset, you might want to disable indexing during upload.
This lets you avoid unnecessary indexing of vectors that will be overwritten by the next batch anyway.
To disable indexing during upload, set `indexing_threshold` to `0`:
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"optimizers_config": {
"indexing_threshold": 0
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
optimizers_config=models.OptimizersConfigDiff(
indexing_threshold=0,
),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
optimizers_config: {
indexing_threshold: 0,
},
});
```
After upload is done, you can enable indexing by setting `indexing_threshold` to a desired value (default is 20000):
```http
PATCH /collections/{collection_name}
{
"optimizers_config": {
"indexing_threshold": 20000
}
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.update_collection(
collection_name="{collection_name}",
optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.updateCollection("{collection_name}", {
optimizers_config: {
indexing_threshold: 20000,
},
});
```
## Upload directly to disk
When the vectors you upload do not all fit in RAM, you likely want to use
[memmap](../../concepts/storage/#configuring-memmap-storage)
support.
During collection
[creation](../../concepts/collections/#create-collection),
memmaps may be enabled on a per-vector basis using the `on_disk` parameter. This
will store vector data directly on disk at all times. It is suitable for
ingesting a large amount of data, essential for the billion scale benchmark.
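For example, with the Python client you can enable `on_disk` when creating the collection (a minimal sketch; the vector size and distance are placeholders for your own configuration):
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(
        size=768,
        distance=models.Distance.COSINE,
        on_disk=True,  # store vector data on disk at all times
    ),
)
```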
Using `memmap_threshold_kb` is not recommended in this case. It would require
the [optimizer](../../concepts/optimizer/) to constantly
transform in-memory segments into memmap segments on disk. This process is
slower, and the optimizer can be a bottleneck when ingesting a large amount of
data.
Read more about this in
[Configuring Memmap Storage](../../concepts/storage/#configuring-memmap-storage).
## Parallel upload into multiple shards
In Qdrant, each collection is split into shards. Each shard has a separate Write-Ahead-Log (WAL), which is responsible for ordering operations.
By creating multiple shards, you can parallelize the upload of a large dataset. From 2 to 4 shards per machine is a reasonable number.
```http
PUT /collections/{collection_name}
{
"vectors": {
"size": 768,
"distance": "Cosine"
},
"shard_number": 2
}
```
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
collection_name="{collection_name}",
vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
shard_number=2,
)
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
client.createCollection("{collection_name}", {
vectors: {
size: 768,
distance: "Cosine",
},
shard_number: 2,
});
```
| documentation/tutorials/bulk-upload.md |
---
title: Semantic code search
weight: 22
---
# Use semantic search to navigate your codebase
| Time: 45 min | Level: Intermediate | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/qdrant/examples/blob/master/code-search/code-search.ipynb) | |
|--------------|---------------------|--|----|
You too can enrich your applications with Qdrant semantic search. In this
tutorial, we describe how you can use Qdrant to navigate a codebase, to help
you find relevant code snippets. As an example, we will use the [Qdrant](https://github.com/qdrant/qdrant)
source code itself, which is mostly written in Rust.
<aside role="status">This tutorial might not work on code bases that are not disciplined or structured. For good code search, you may need to refactor the project first.</aside>
## The approach
We want to search codebases using natural semantic queries, and searching for
code based on similar logic. You can set up these tasks with embeddings:
1. General usage neural encoder for Natural Language Processing (NLP), in our case
`all-MiniLM-L6-v2` from the
[sentence-transformers](https://www.sbert.net/docs/pretrained_models.html) library.
2. Specialized embeddings for code-to-code similarity search. We use the
`jina-embeddings-v2-base-code` model.
To prepare our code for `all-MiniLM-L6-v2`, we preprocess the code to text that
more closely resembles natural language. The Jina embeddings model supports a
variety of standard programming languages, so there is no need to preprocess the
snippets. We can use the code as is.
NLP-based search is based on function signatures, but code search may return
smaller pieces, such as loops. So, if we receive a particular function signature
from the NLP model and part of its implementation from the code model, we merge
the results and highlight the overlap.
## Data preparation
Chunking the application sources into smaller parts is a non-trivial task. In
general, functions, class methods, structs, enums, and all the other language-specific
constructs are good candidates for chunks. They are big enough to
contain some meaningful information, but small enough to be processed by
embedding models with a limited context window. You can also use docstrings,
comments, and other metadata to enrich the chunks with additional information.
![Code chunking strategy](/documentation/tutorials/code-search/data-chunking.png)
### Parsing the codebase
While our example uses Rust, you can use our approach with any other language.
You can parse code with a [Language Server Protocol](https://microsoft.github.io/language-server-protocol/) (**LSP**)
compatible tool. You can use an LSP to build a graph of the codebase, and then extract chunks.
We did our work with the [rust-analyzer](https://rust-analyzer.github.io/).
We exported the parsed codebase into the [LSIF](https://microsoft.github.io/language-server-protocol/specifications/lsif/0.4.0/specification/)
format, a standard for code intelligence data. Next, we used the LSIF data to
navigate the codebase and extract the chunks. For details, see our [code search
demo](https://github.com/qdrant/demo-code-search).
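For reference, exporting an LSIF index with rust-analyzer can look roughly like the following; the exact invocation may differ between rust-analyzer versions, so check `rust-analyzer --help` for your build:
```shell
# Run from the root of the Rust project; the LSIF dump is written to stdout
rust-analyzer lsif . > index.lsif
```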
<aside role="status">
For other languages, you can use the same approach. There are
<a href="https://microsoft.github.io/language-server-protocol/implementors/servers/">plenty of implementations available
</a>.
</aside>
We then exported the chunks into JSON documents with not only the code itself,
but also context with the location of the code in the project. For example, see
the description of the `await_ready_for_timeout` function from the `IsReady`
struct in the `common` module:
```json
{
"name":"await_ready_for_timeout",
"signature":"fn await_ready_for_timeout (& self , timeout : Duration) -> bool",
"code_type":"Function",
"docstring":"= \" Return `true` if ready, `false` if timed out.\"",
"line":44,
"line_from":43,
"line_to":51,
"context":{
"module":"common",
"file_path":"lib/collection/src/common/is_ready.rs",
"file_name":"is_ready.rs",
"struct_name":"IsReady",
"snippet":" /// Return `true` if ready, `false` if timed out.\n pub fn await_ready_for_timeout(&self, timeout: Duration) -> bool {\n let mut is_ready = self.value.lock();\n if !*is_ready {\n !self.condvar.wait_for(&mut is_ready, timeout).timed_out()\n } else {\n true\n }\n }\n"
}
}
```
You can examine the Qdrant structures, parsed in JSON, in the [`structures.jsonl`
file](https://storage.googleapis.com/tutorial-attachments/code-search/structures.jsonl)
in our Google Cloud Storage bucket. Download it and use it as a source of data for our code search.
```shell
wget https://storage.googleapis.com/tutorial-attachments/code-search/structures.jsonl
```
Next, load the file and parse the lines into a list of dictionaries:
```python
import json
structures = []
with open("structures.jsonl", "r") as fp:
for i, row in enumerate(fp):
entry = json.loads(row)
structures.append(entry)
```
### Code to *natural language* conversion
Each programming language has its own syntax which is not a part of the natural
language. Thus, a general-purpose model probably does not understand the code
as is. We can, however, normalize the data by removing code specifics and
including additional context, such as module, class, function, and file name.
We took the following steps:
1. Extract the signature of the function, method, or other code construct.
2. Divide camel case and snake case names into separate words.
3. Take the docstring, comments, and other important metadata.
4. Build a sentence from the extracted data using a predefined template.
5. Remove the special characters and replace them with spaces.
The conversion expects dictionaries with the structure shown above. Define a `textify`
function to do the conversion. We'll use the `inflection` library to handle the
different naming conventions.
```shell
pip install inflection
```
Once all dependencies are installed, we define the `textify` function:
```python
import inflection
import re
from typing import Dict, Any
def textify(chunk: Dict[str, Any]) -> str:
# Get rid of all the camel case / snake case
# - inflection.underscore changes the camel case to snake case
# - inflection.humanize converts the snake case to human readable form
name = inflection.humanize(inflection.underscore(chunk["name"]))
signature = inflection.humanize(inflection.underscore(chunk["signature"]))
# Check if docstring is provided
docstring = ""
if chunk["docstring"]:
docstring = f"that does {chunk['docstring']} "
# Extract the location of that snippet of code
context = (
f"module {chunk['context']['module']} "
f"file {chunk['context']['file_name']}"
)
if chunk["context"]["struct_name"]:
struct_name = inflection.humanize(
inflection.underscore(chunk["context"]["struct_name"])
)
context = f"defined in struct {struct_name} {context}"
# Combine all the bits and pieces together
text_representation = (
f"{chunk['code_type']} {name} "
f"{docstring}"
f"defined as {signature} "
f"{context}"
)
# Remove any special characters and concatenate the tokens
tokens = re.split(r"\W", text_representation)
tokens = filter(lambda x: x, tokens)
return " ".join(tokens)
```
Now we can use `textify` to convert all chunks into text representations:
```python
text_representations = list(map(textify, structures))
```
This is how the `await_ready_for_timeout` function description appears:
```text
Function Await ready for timeout that does Return true if ready false if timed out defined as Fn await ready for timeout self timeout duration bool defined in struct Is ready module common file is_ready rs
```
## Ingestion pipeline
Next, we build the code search engine by vectorizing the data and setting up a
semantic search mechanism for both embedding models.
### Natural language embeddings
We can encode text representations through the `all-MiniLM-L6-v2` model from
`sentence-transformers`. With the following command, we install `sentence-transformers`
with dependencies:
```shell
pip install sentence-transformers optimum onnx
```
Then we can use the model to encode the text representations:
```python
from sentence_transformers import SentenceTransformer
nlp_model = SentenceTransformer("all-MiniLM-L6-v2")
nlp_embeddings = nlp_model.encode(
text_representations, show_progress_bar=True,
)
```
### Code embeddings
The `jina-embeddings-v2-base-code` model is a good candidate for this task.
You can also load it with the `sentence-transformers` library, although a few extra steps are required.
Visit [the model page](https://huggingface.co/jinaai/jina-embeddings-v2-base-code),
accept the rules, and generate the access token in your [account settings](https://huggingface.co/settings/tokens).
Once you have the token, you can use the model as follows:
```python
HF_TOKEN = "THIS_IS_YOUR_TOKEN"
# Extract the code snippets from the structures to a separate list
code_snippets = [
structure["context"]["snippet"] for structure in structures
]
code_model = SentenceTransformer(
"jinaai/jina-embeddings-v2-base-code",
token=HF_TOKEN,
trust_remote_code=True
)
code_model.max_seq_length = 8192 # increase the context length window
code_embeddings = code_model.encode(
code_snippets, batch_size=4, show_progress_bar=True,
)
```
Remember to set the `trust_remote_code` parameter to `True`. Otherwise, the
model does not produce meaningful vectors. Setting this parameter allows the
library to download and possibly launch some code on your machine, so be sure
to trust the source.
Now that we have both the natural language and code embeddings, we can store them in a Qdrant collection.
### Building Qdrant collection
We use the `qdrant-client` library to interact with the Qdrant server. Let's
install that client:
```shell
pip install qdrant-client
```
Of course, we need a running Qdrant server for vector search. If you need one,
you can [use a local Docker container](/documentation/quick-start/)
or deploy it using the [Qdrant Cloud](https://cloud.qdrant.io/).
You can use either to follow this tutorial. Configure the connection parameters:
```python
QDRANT_URL = "https://my-cluster.cloud.qdrant.io:6333" # http://localhost:6333 for local instance
QDRANT_API_KEY = "THIS_IS_YOUR_API_KEY" # None for local instance
```
Then use the library to create a collection:
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(QDRANT_URL, api_key=QDRANT_API_KEY)
client.create_collection(
"qdrant-sources",
vectors_config={
"text": models.VectorParams(
size=nlp_embeddings.shape[1],
distance=models.Distance.COSINE,
),
"code": models.VectorParams(
size=code_embeddings.shape[1],
distance=models.Distance.COSINE,
),
}
)
```
Our newly created collection is ready to accept the data. Let's upload the embeddings:
```python
import uuid
points = [
models.PointStruct(
id=uuid.uuid4().hex,
vector={
"text": text_embedding,
"code": code_embedding,
},
payload=structure,
)
for text_embedding, code_embedding, structure in zip(nlp_embeddings, code_embeddings, structures)
]
client.upload_points("qdrant-sources", points=points, batch_size=64)
```
The uploaded points are immediately available for search. Next, query the
collection to find relevant code snippets.
## Querying the codebase
We use one of the models to search the collection. Start with text embeddings.
Run the following query "*How do I count points in a collection?*". Review the
results.
<aside role="status">In these tables, we link to longer code excerpts from a
`file_name` in the `Qdrant` repository. The results are subject to change.
Fortunately, this model should continue to provide the results you need.</aside>
```python
query = "How do I count points in a collection?"
hits = client.query_points(
"qdrant-sources",
query=nlp_model.encode(query).tolist(),
using="text",
limit=5,
).points
```
Now, review the results. The following table lists the module, the file name,
and the score. Each row links to the signature as a code block in the source
file.
| module | file_name | score | signature |
|--------------------|---------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| toc | point_ops.rs | 0.59448624 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/storage/src/content_manager/toc/point_ops.rs#L120) |
| operations | types.rs | 0.5493385 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub struct CountRequestInternal`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/operations/types.rs#L831) |
| collection_manager | segments_updater.rs | 0.5121002 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub(crate) fn upsert_points<'a, T>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection_manager/segments_updater.rs#L339) |
| collection | point_ops.rs | 0.5063539 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection/point_ops.rs#L213) |
| map_index | mod.rs | 0.49973983 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn get_points_with_value_count<Q>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/map_index/mod.rs#L88) |
It seems we were able to find some relevant code structures. Let's try the same with the code embeddings:
```python
hits = client.query_points(
"qdrant-sources",
query=code_model.encode(query).tolist(),
using="code",
limit=5,
).points
```
Output:
| module | file_name | score | signature |
|---------------|----------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| field_index | geo_index.rs | 0.73278356 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) |
| numeric_index | mod.rs | 0.7254976 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) |
| map_index | mod.rs | 0.7124739 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) |
| map_index | mod.rs | 0.7124739 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L429) |
| fixtures | payload_context_fixture.rs | 0.706204 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) |
The scores returned by different models are not comparable, but we can see
that the results are different. Code and text embeddings capture different
aspects of the codebase. We can query the collection with both models in a
single batch request and then combine the results to get the most relevant
code snippets.
```python
responses = client.query_batch_points(
"qdrant-sources",
requests=[
models.QueryRequest(
query=nlp_model.encode(query).tolist(),
using="text",
with_payload=True,
limit=5,
),
models.QueryRequest(
query=code_model.encode(query).tolist(),
using="code",
with_payload=True,
limit=5,
),
]
)
results = [response.points for response in responses]
```
Output:
| module | file_name | score | signature |
|--------------------|----------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| toc | point_ops.rs | 0.59448624 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/storage/src/content_manager/toc/point_ops.rs#L120) |
| operations | types.rs | 0.5493385 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub struct CountRequestInternal`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/operations/types.rs#L831) |
| collection_manager | segments_updater.rs | 0.5121002 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub(crate) fn upsert_points<'a, T>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection_manager/segments_updater.rs#L339) |
| collection | point_ops.rs | 0.5063539 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection/point_ops.rs#L213) |
| map_index | mod.rs | 0.49973983 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn get_points_with_value_count<Q>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/map_index/mod.rs#L88) |
| field_index | geo_index.rs | 0.73278356 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) |
| numeric_index | mod.rs | 0.7254976 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) |
| map_index | mod.rs | 0.7124739 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) |
| map_index | mod.rs | 0.7124739 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L429) |
| fixtures | payload_context_fixture.rs | 0.706204 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) |
This is one example of how you can use different models and combine the results.
In a real-world scenario, you might run some reranking and deduplication, as
well as additional processing of the results.
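As a minimal illustration (not part of the original demo), the following sketch deduplicates the combined results by point ID and keeps the best score per point. Scores from different models are not directly comparable, so treat this as a naive merge rather than a real reranking strategy:
```python
# `results` comes from the batch query above: one list of points per model.
best_by_id = {}
for model_points in results:
    for point in model_points:
        # Keep the highest-scoring occurrence of each point across both models.
        if point.id not in best_by_id or point.score > best_by_id[point.id].score:
            best_by_id[point.id] = point

merged = sorted(best_by_id.values(), key=lambda p: p.score, reverse=True)
for point in merged[:5]:
    print(point.payload["context"]["module"], point.payload["name"], point.score)
```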
### Code search demo
Our [Code search demo](https://code-search.qdrant.tech/) uses the following process:
1. The user sends a query.
1. Both models vectorize that query simultaneously. We get two different
vectors.
1. Both vectors are used in parallel to find relevant snippets. We expect
5 examples from the NLP search and 20 examples from the code search.
1. Once we retrieve results for both vectors, we merge them in one of the
following scenarios:
1. If both methods return different results, we prefer the results from
the general usage model (NLP).
1. If there is an overlap between the search results, we merge overlapping
snippets.
In the screenshot, we search for `flush of wal`. The result
shows relevant code, merged from both models. Note the highlighted
code in lines 621-629. It's where both models agree.
![Results from both models, with overlap](/documentation/tutorials/code-search/code-search-demo-example.png)
Now you can see semantic code intelligence in action.
### Grouping the results
You can improve the search results by grouping them by payload properties.
In our case, we can group the results by module. If we use code embeddings,
we see multiple results from the `map_index` module. Let's group the
results and keep a single result per module:
```python
results = client.search_groups(
"qdrant-sources",
query_vector=(
"code", code_model.encode(query).tolist()
),
group_by="context.module",
limit=5,
group_size=1,
)
```
Output:
| module | file_name | score | signature |
|---------------|----------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| field_index | geo_index.rs | 0.73278356 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) |
| numeric_index | mod.rs | 0.7254976 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) |
| map_index | mod.rs | 0.7124739 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) |
| fixtures | payload_context_fixture.rs | 0.706204 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) |
| hnsw_index | graph_links.rs | 0.6998417 | [<img src="/documentation/tutorials/code-search/github-mark.png" width="16" style="display: inline"> `fn num_points `](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/hnsw_index/graph_links.rs#L477) |
With the grouping feature, we get more diverse results.
## Summary
This tutorial demonstrates how to use Qdrant to navigate a codebase. For an
end-to-end implementation, review the [code search
notebook](https://colab.research.google.com/github/qdrant/examples/blob/master/code-search/code-search.ipynb) and the
[code-search-demo](https://github.com/qdrant/demo-code-search). You can also check out [a running version of the code
search demo](https://code-search.qdrant.tech/), which exposes the Qdrant codebase for search through a web interface.
| documentation/tutorials/code-search.md |
---
title: Measure retrieval quality
weight: 21
---
# Measure retrieval quality
| Time: 30 min | Level: Intermediate | | |
|--------------|---------------------|--|----|
Semantic search pipelines are as good as the embeddings they use. If your model cannot properly represent input data, similar objects might
be far away from each other in the vector space. No surprise that the search results will be poor in this case. There is, however, another
component of the process which can also degrade the quality of the search results: the ANN algorithm itself.
In this tutorial, we will show how to measure the quality of semantic retrieval and how to tune the parameters of HNSW, the ANN
algorithm used in Qdrant, to obtain the best results.
## Embeddings quality
The quality of the embeddings is a topic for a separate tutorial. In a nutshell, it is usually measured and compared by benchmarks, such as
[Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard). The evaluation process itself is pretty
straightforward and is based on a ground truth dataset built by humans. We have a set of queries and a set of the documents we would expect
to receive for each of them. In the evaluation process, we take a query, find the most similar documents in the vector space and compare
them with the ground truth. In that setup, **finding the most similar documents is implemented as full kNN search, without any approximation**.
As a result, we can measure the quality of the embeddings themselves, without the influence of the ANN algorithm.
## Retrieval quality
Embeddings quality is indeed the most important factor in the semantic search quality. However, vector search engines, such as Qdrant, do not
perform pure kNN search. Instead, they use **Approximate Nearest Neighbors** (ANN) algorithms, which are much faster than the exact search,
but can return suboptimal results. We can also **measure the retrieval quality of that approximation** which also contributes to the overall
search quality.
### Quality metrics
There are various ways to quantify the quality of semantic search. Some of them, such as [Precision@k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k),
are based on the number of relevant documents in the top-k search results. Others, such as [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank),
take into account the position of the first relevant document in the search results. [DCG and NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)
metrics are, in turn, based on the relevance score of the documents.
If we treat the search pipeline as a whole, we could use them all. The same is true for the embeddings quality evaluation. However, for the
ANN algorithm itself, anything based on the relevance score or ranking is not applicable. Ranking in vector search relies on the distance
between the query and the document in the vector space; the distance itself does not change due to the approximation, as the distance function stays the same.
Therefore, it only makes sense to measure the quality of the ANN algorithm by the number of relevant documents in the top-k search results,
such as `precision@k`. It is calculated as the number of relevant documents in the top-k search results divided by `k`. In case of testing
just the ANN algorithm, we can use the exact kNN search as a ground truth, with `k` being fixed. It will be a measure on **how well the ANN
algorithm approximates the exact search**.
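As a toy illustration (with made-up point IDs), `precision@k` is simply the overlap between the approximate and the exact top-k results divided by `k`:
```python
ann_top_5 = {1, 2, 3, 4, 5}  # IDs returned by the approximate (HNSW) search
knn_top_5 = {1, 2, 3, 4, 7}  # IDs returned by the exact kNN search

precision_at_5 = len(ann_top_5 & knn_top_5) / 5  # 4 / 5 = 0.8
```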
## Measure the quality of the search results
Let's build a quality evaluation of the ANN algorithm in Qdrant. We will, first, call the search endpoint in a standard way to obtain
the approximate search results. Then, we will call the exact search endpoint to obtain the exact matches, and finally compare both results
in terms of precision.
Before we start, let's create a collection, fill it with some data and then start our evaluation. We will use the same dataset as in the
[Loading a dataset from Hugging Face hub](/documentation/tutorials/huggingface-datasets/) tutorial, `Qdrant/arxiv-titles-instructorxl-embeddings`
from the [Hugging Face hub](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings). Let's download it in a streaming
mode, as we are only going to use part of it.
```python
from datasets import load_dataset
dataset = load_dataset(
"Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True
)
```
We need some data to be indexed and another set for testing purposes. Let's get the first 60000 items for the training set and the next 1000
for testing.
```python
dataset_iterator = iter(dataset)
train_dataset = [next(dataset_iterator) for _ in range(60000)]
test_dataset = [next(dataset_iterator) for _ in range(1000)]
```
Now, let's create a collection and index the training data. This collection will be created with the default configuration. Please be aware that
it might be different from your collection settings, and it's always important to test exactly the same configuration you are going to use later
in production.
<aside role="status">
Distance function is another parameter that may impact the retrieval quality. If the embedding model was not trained to minimize cosine
distance, you can get suboptimal search results by using it. Please test different distance functions to find the best one for your embeddings,
if you don't know the specifics of the model training.
</aside>
```python
from qdrant_client import QdrantClient, models
client = QdrantClient("http://localhost:6333")
client.create_collection(
collection_name="arxiv-titles-instructorxl-embeddings",
vectors_config=models.VectorParams(
size=768, # Size of the embeddings generated by InstructorXL model
distance=models.Distance.COSINE,
),
)
```
We are now ready to index the training data. Uploading the records is going to trigger the indexing process, which will build the HNSW graph.
The indexing process may take some time, depending on the size of the dataset, but your data is going to be available for search immediately
after receiving the response from the `upsert` endpoint. **As long as the indexing is not finished, and HNSW not built, Qdrant will perform
the exact search**. We have to wait until the indexing is finished to be sure that the approximate search is performed.
```python
client.upload_points( # upload_points is available as of qdrant-client v1.7.1
collection_name="arxiv-titles-instructorxl-embeddings",
points=[
models.PointStruct(
id=item["id"],
vector=item["vector"],
payload=item,
)
for item in train_dataset
]
)
while True:
collection_info = client.get_collection(collection_name="arxiv-titles-instructorxl-embeddings")
if collection_info.status == models.CollectionStatus.GREEN:
# Collection status is green, which means the indexing is finished
break
```
## Standard mode vs exact search
Qdrant has a built-in exact search mode, which can be used to measure the quality of the search results. In this mode, Qdrant performs a
full kNN search for each query, without any approximation. It is not suitable for production use with high load, but it is perfect for the
evaluation of the ANN algorithm and its parameters. It might be triggered by setting the `exact` parameter to `True` in the search request.
We are simply going to use all the examples from the test dataset as queries and compare the results of the approximate search with the
results of the exact search. Let's create a helper function with `k` being a parameter, so we can calculate the `precision@k` for different
values of `k`.
```python
def avg_precision_at_k(k: int):
precisions = []
for item in test_dataset:
ann_result = client.query_points(
collection_name="arxiv-titles-instructorxl-embeddings",
query=item["vector"],
limit=k,
).points
knn_result = client.query_points(
collection_name="arxiv-titles-instructorxl-embeddings",
query=item["vector"],
limit=k,
search_params=models.SearchParams(
exact=True, # Turns on the exact search mode
),
).points
# We can calculate the precision@k by comparing the ids of the search results
ann_ids = set(item.id for item in ann_result)
knn_ids = set(item.id for item in knn_result)
precision = len(ann_ids.intersection(knn_ids)) / k
precisions.append(precision)
return sum(precisions) / len(precisions)
```
Calculating the `precision@5` is as simple as calling the function with the corresponding parameter:
```python
print(f"avg(precision@5) = {avg_precision_at_k(k=5)}")
```
Response:
```text
avg(precision@5) = 0.9935999999999995
```
As we can see, the precision of the approximate search vs exact search is pretty high. There are, however, some scenarios when we
need higher precision and can accept higher latency. HNSW is pretty tunable, and we can increase the precision by changing its parameters.
## Tweaking the HNSW parameters
HNSW is a hierarchical graph, where each node has a set of links to other nodes. The number of edges per node is called the `m` parameter.
The larger its value, the higher the precision of the search, but the more space is required. The `ef_construct` parameter is the number of
neighbours to consider during the index building. Again, the larger the value, the higher the precision, but the longer the indexing time.
The default values of these parameters are `m=16` and `ef_construct=100`. Let's try to increase them to `m=32` and `ef_construct=200` and
see how it affects the precision. Of course, we need to wait until the indexing is finished before we can perform the search.
```python
client.update_collection(
collection_name="arxiv-titles-instructorxl-embeddings",
hnsw_config=models.HnswConfigDiff(
m=32, # Increase the number of edges per node from the default 16 to 32
ef_construct=200, # Increase the number of neighbours from the default 100 to 200
)
)
while True:
collection_info = client.get_collection(collection_name="arxiv-titles-instructorxl-embeddings")
if collection_info.status == models.CollectionStatus.GREEN:
# Collection status is green, which means the indexing is finished
break
```
The same function can be used to calculate the average `precision@5`:
```python
print(f"avg(precision@5) = {avg_precision_at_k(k=5)}")
```
Response:
```text
avg(precision@5) = 0.9969999999999998
```
The precision has obviously increased, and we know how to control it. However, there is a trade-off between precision on one side and search
latency and memory requirements on the other. In some specific cases, we may want to increase the precision as much as possible, and now we know how
to do it.
## Wrapping up
Assessing the quality of retrieval is a critical aspect of evaluating semantic search performance. It is imperative to measure retrieval quality when aiming for optimal quality of
your search results. Qdrant provides a built-in exact search mode, which can be used to measure the quality of the ANN algorithm itself,
even in an automated way, as part of your CI/CD pipeline.
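As a minimal sketch of such an automated check, you could wrap the `avg_precision_at_k` helper defined above in a test and fail the pipeline when precision drops below a threshold (the `0.95` value here is an arbitrary example, not a recommendation):
```python
def test_ann_precision_meets_threshold():
    # Fail the CI run if the ANN approximation drifts below the expected quality.
    assert avg_precision_at_k(k=5) >= 0.95
```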
Again, **the quality of the embeddings is the most important factor**. HNSW does a pretty good job in terms of precision, and it is
parameterizable and tunable, when required. There are some other ANN algorithms available out there, such as [IVF*](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes#cell-probe-methods-indexivf-indexes),
but they usually [perform worse than HNSW in terms of quality and performance](https://nirantk.com/writing/pgvector-vs-qdrant/#correctness).
| documentation/tutorials/retrieval-quality.md |
---
title: Neural Search Service
weight: 1
---
# Create a Simple Neural Search Service
| Time: 30 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) |
| --- | ----------- | ----------- |----------- |
This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry.
A neural search service uses artificial neural networks to improve the accuracy and relevance of search results. Besides offering simple keyword results, this system can retrieve results by meaning. It can understand and interpret complex search queries and provide more contextually relevant output, effectively enhancing the user's search experience.
<aside role="status">
There is a version of this tutorial that uses <a href="https://github.com/qdrant/fastembed">Fastembed</a> model inference engine instead of Sentence Transformers.
Check it out <a href="/documentation/tutorials/hybrid-search-fastembed/">here</a>.
</aside>
## Workflow
To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI.
![Neural Search Workflow](/docs/workflow-neural-search.png)
> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). |
## Prerequisites
To complete this tutorial, you will need:
- Docker - The easiest way to use Qdrant is to run a pre-built Docker image.
- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
- Python version >=3.8
## Prepare sample dataset
To conduct a neural search on startup descriptions, you must first encode the description data into vectors. To process text, you can use pre-trained models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) or sentence transformers. The [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library lets you conveniently download and use many pre-trained models, such as DistilBERT, MPNet, etc.
1. First you need to download the dataset.
```bash
wget https://storage.googleapis.com/generall-shared-data/startups_demo.json
```
2. Install the SentenceTransformer library as well as other relevant packages.
```bash
pip install sentence-transformers numpy pandas tqdm
```
3. Import the required modules.
```python
from sentence_transformers import SentenceTransformer
import numpy as np
import json
import pandas as pd
from tqdm.notebook import tqdm
```
You will be using a pre-trained model called `all-MiniLM-L6-v2`.
This is a performance-optimized sentence embedding model and you can read more about it and other available models [here](https://www.sbert.net/docs/pretrained_models.html).
4. Download and create a pre-trained sentence encoder.
```python
model = SentenceTransformer(
"all-MiniLM-L6-v2", device="cuda"
) # or device="cpu" if you don't have a GPU
```
5. Read the raw data file.
```python
df = pd.read_json("./startups_demo.json", lines=True)
```
6. Encode all startup descriptions to create an embedding vector for each. Internally, the `encode` function will split the input into batches, which will significantly speed up the process.
```python
vectors = model.encode(
[row.alt + ". " + row.description for row in df.itertuples()],
show_progress_bar=True,
)
```
All of the descriptions are now converted into vectors. There are 40474 vectors of 384 dimensions, which is the output dimensionality of the model:
```python
vectors.shape
# > (40474, 384)
```
7. Save the vectors to a new file named `startup_vectors.npy`.
```python
np.save("startup_vectors.npy", vectors, allow_pickle=False)
```
## Run Qdrant in Docker
Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.
> **Note:** Before you begin, create a project directory and a virtual python environment in it.
1. Download the Qdrant image from DockerHub.
```bash
docker pull qdrant/qdrant
```
2. Start Qdrant inside of Docker.
```bash
docker run -p 6333:6333 \
-v $(pwd)/qdrant_storage:/qdrant/storage \
qdrant/qdrant
```
You should see output like this:
```text
...
[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
[2021-02-05T00:08:51Z INFO actix_server::builder] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333
```
Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.
All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.
## Upload data to Qdrant
1. Install the official Python client to interact with Qdrant.
```bash
pip install qdrant-client
```
At this point, you should have startup records in the `startups_demo.json` file, encoded vectors in `startup_vectors.npy` and Qdrant running on a local machine.
Now you need to write a script to upload all startup data and vectors into the search engine.
2. Create a client object for Qdrant.
```python
# Import client library
from qdrant_client import QdrantClient
from qdrant_client.models import VectorParams, Distance
client = QdrantClient("http://localhost:6333")
```
3. Related vectors need to be added to a collection. Create a new collection for your startup vectors.
```python
if not client.collection_exists("startups"):
client.create_collection(
collection_name="startups",
vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
```
<aside role="status">
- The `size` parameter of `VectorParams` defines the size of the vectors for a specific collection. If their size is different, it is impossible to calculate the distance between them. `384` is the encoder output dimensionality. You can also use `model.get_sentence_embedding_dimension()` to get the dimensionality of the model you are using.
- The `distance` parameter lets you specify the function used to measure the distance between two points.
</aside>
4. Create an iterator over the startup data and vectors.
The Qdrant client library defines a special function that allows you to load datasets into the service.
However, since there may be too much data to fit into the memory of a single computer, the function takes an iterator over the data as input.
```python
fd = open("./startups_demo.json")
# payload is now an iterator over startup data
payload = map(json.loads, fd)
# Load all vectors into memory, numpy array works as iterable for itself.
# Another option would be to use mmap, if you don't want to load all the data into RAM
vectors = np.load("./startup_vectors.npy")
```
5. Upload the data
```python
client.upload_collection(
collection_name="startups",
vectors=vectors,
payload=payload,
ids=None, # Vector ids will be assigned automatically
batch_size=256, # How many vectors will be uploaded in a single request?
)
```
Vectors are now uploaded to Qdrant.
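As an optional sanity check (not part of the original script), you can count the points in the collection; the number should match the dataset size from the earlier step:
```python
print(client.count(collection_name="startups").count)
# Should print 40474, the number of uploaded vectors
```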
## Build the search API
Now that all the preparations are complete, let's start building a neural search class.
In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries.
1. Create a file named `neural_searcher.py` and specify the following.
```python
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer
class NeuralSearcher:
def __init__(self, collection_name):
self.collection_name = collection_name
# Initialize encoder model
self.model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")
# initialize Qdrant client
self.qdrant_client = QdrantClient("http://localhost:6333")
```
2. Write the search function.
```python
def search(self, text: str):
# Convert text query into vector
vector = self.model.encode(text).tolist()
# Use `vector` for search for closest vectors in the collection
search_result = self.qdrant_client.query_points(
collection_name=self.collection_name,
query=vector,
query_filter=None, # If you don't want any filters for now
limit=5,  # 5 closest results are enough
).points
# `search_result` contains found vector ids with similarity scores along with the stored payload
# In this function you are interested in payload only
payloads = [hit.payload for hit in search_result]
return payloads
```
3. Add search filters.
With Qdrant it is also feasible to add some conditions to the search.
For example, if you wanted to search for startups in a certain city, the search query could look like this:
```python
from qdrant_client.models import Filter
...
city_of_interest = "Berlin"
# Define a filter for cities
city_filter = Filter(**{
"must": [{
"key": "city", # Store city information in a field of the same name
"match": { # This condition checks if payload field has the requested value
"value": city_of_interest
}
}]
})
search_result = self.qdrant_client.query_points(
collection_name=self.collection_name,
query=vector,
query_filter=city_filter,
limit=5
).points
...
```
You have now created a class for neural search queries. Next, wrap it into a service.
## Deploy the search with FastAPI
To build the service you will use the FastAPI framework.
1. Install FastAPI.
To install it, use the command
```bash
pip install fastapi uvicorn
```
2. Implement the service.
Create a file named `service.py` and specify the following.
The service will have only one API endpoint and will look like this:
```python
from fastapi import FastAPI
# The file where NeuralSearcher is stored
from neural_searcher import NeuralSearcher
app = FastAPI()
# Create a neural searcher instance
neural_searcher = NeuralSearcher(collection_name="startups")
@app.get("/api/search")
def search_startup(q: str):
return {"result": neural_searcher.search(text=q)}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
3. Run the service.
```bash
python service.py
```
4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).
You should be able to see a debug interface for your service.
![FastAPI Swagger interface](/docs/fastapi_neural_search.png)
Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.
## Next steps
The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo).
You can try it to get an intuition for cases when the neural search is useful.
The demo contains a switch that selects between neural and full-text searches.
You can turn the neural search on and off to compare your result with a regular full-text search.
> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). |
Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
| documentation/tutorials/neural-search.md |
---
title: Semantic Search 101
weight: -100
aliases:
- /documentation/tutorials/mighty.md/
---
# Semantic Search for Beginners
| Time: 5 - 15 min | Level: Beginner | | |
| --- | ----------- | ----------- |----------- |
<p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/AASiqmtKo54" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p>
## Overview
If you are new to vector databases, this tutorial is for you. In 5 minutes you will build a semantic search engine for science fiction books. After you set it up, you will ask the engine about an impending alien threat. Your creation will recommend books as preparation for a potential space attack.
Before you begin, you need to have a [recent version of Python](https://www.python.org/downloads/) installed. If you don't know how to run this code in a virtual environment, follow Python documentation for [Creating Virtual Environments](https://docs.python.org/3/tutorial/venv.html#creating-virtual-environments) first.
This tutorial assumes you're in the bash shell. Use the Python documentation to activate a virtual environment, with commands such as:
```bash
source tutorial-env/bin/activate
```
## 1. Installation
You need to process your data so that the search engine can work with it. The [Sentence Transformers](https://www.sbert.net/) framework gives you access to common Large Language Models that turn raw data into embeddings.
```bash
pip install -U sentence-transformers
```
Once encoded, this data needs to be kept somewhere. Qdrant lets you store data as embeddings. You can also use Qdrant to run search queries against this data. This means that you can ask the engine to give you relevant answers that go way beyond keyword matching.
```bash
pip install -U qdrant-client
```
<aside role="status">
This tutorial requires qdrant-client version 1.7.1 or higher.
</aside>
### Import the models
Once the two main frameworks are defined, you need to specify the exact models this engine will use. Before you do, activate the Python prompt (`>>>`) with the `python` command.
```python
from qdrant_client import models, QdrantClient
from sentence_transformers import SentenceTransformer
```
The [Sentence Transformers](https://www.sbert.net/index.html) framework contains many embedding models. However, [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) is the fastest encoder for this tutorial.
```python
encoder = SentenceTransformer("all-MiniLM-L6-v2")
```
## 2. Add the dataset
[all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) will encode the data you provide. Here you will list all the science fiction books in your library. Each book has metadata, a name, author, publication year and a short description.
```python
documents = [
{
"name": "The Time Machine",
"description": "A man travels through time and witnesses the evolution of humanity.",
"author": "H.G. Wells",
"year": 1895,
},
{
"name": "Ender's Game",
"description": "A young boy is trained to become a military leader in a war against an alien race.",
"author": "Orson Scott Card",
"year": 1985,
},
{
"name": "Brave New World",
"description": "A dystopian society where people are genetically engineered and conditioned to conform to a strict social hierarchy.",
"author": "Aldous Huxley",
"year": 1932,
},
{
"name": "The Hitchhiker's Guide to the Galaxy",
"description": "A comedic science fiction series following the misadventures of an unwitting human and his alien friend.",
"author": "Douglas Adams",
"year": 1979,
},
{
"name": "Dune",
"description": "A desert planet is the site of political intrigue and power struggles.",
"author": "Frank Herbert",
"year": 1965,
},
{
"name": "Foundation",
"description": "A mathematician develops a science to predict the future of humanity and works to save civilization from collapse.",
"author": "Isaac Asimov",
"year": 1951,
},
{
"name": "Snow Crash",
"description": "A futuristic world where the internet has evolved into a virtual reality metaverse.",
"author": "Neal Stephenson",
"year": 1992,
},
{
"name": "Neuromancer",
"description": "A hacker is hired to pull off a near-impossible hack and gets pulled into a web of intrigue.",
"author": "William Gibson",
"year": 1984,
},
{
"name": "The War of the Worlds",
"description": "A Martian invasion of Earth throws humanity into chaos.",
"author": "H.G. Wells",
"year": 1898,
},
{
"name": "The Hunger Games",
"description": "A dystopian society where teenagers are forced to fight to the death in a televised spectacle.",
"author": "Suzanne Collins",
"year": 2008,
},
{
"name": "The Andromeda Strain",
"description": "A deadly virus from outer space threatens to wipe out humanity.",
"author": "Michael Crichton",
"year": 1969,
},
{
"name": "The Left Hand of Darkness",
"description": "A human ambassador is sent to a planet where the inhabitants are genderless and can change gender at will.",
"author": "Ursula K. Le Guin",
"year": 1969,
},
{
"name": "The Three-Body Problem",
"description": "Humans encounter an alien civilization that lives in a dying system.",
"author": "Liu Cixin",
"year": 2008,
},
]
```
## 3. Define storage location
You need to tell Qdrant where to store embeddings. This is a basic demo, so your local computer will use its memory as temporary storage.
```python
client = QdrantClient(":memory:")
```
## 4. Create a collection
All data in Qdrant is organized by collections. In this case, you are storing books, so we are calling it `my_books`.
```python
client.create_collection(
collection_name="my_books",
vectors_config=models.VectorParams(
size=encoder.get_sentence_embedding_dimension(), # Vector size is defined by used model
distance=models.Distance.COSINE,
),
)
```
- The `size` parameter of `VectorParams` defines the size of the vectors for a specific collection. If their size is different, it is impossible to calculate the distance between them. 384 is the encoder output dimensionality. You can also use `model.get_sentence_embedding_dimension()` to get the dimensionality of the model you are using.
- The `distance` parameter lets you specify the function used to measure the distance between two points.
## 5. Upload data to collection
Tell the database to upload `documents` to the `my_books` collection. This will give each record an id and a payload. The payload is just the metadata from the dataset.
```python
client.upload_points(
collection_name="my_books",
points=[
models.PointStruct(
id=idx, vector=encoder.encode(doc["description"]).tolist(), payload=doc
)
for idx, doc in enumerate(documents)
],
)
```
## 6. Ask the engine a question
Now that the data is stored in Qdrant, you can ask it questions and receive semantically relevant results.
```python
hits = client.query_points(
collection_name="my_books",
query=encoder.encode("alien invasion").tolist(),
limit=3,
).points
for hit in hits:
print(hit.payload, "score:", hit.score)
```
**Response:**
The search engine shows three of the most likely responses that have to do with the alien invasion. Each of the responses is assigned a score to show how close the response is to the original inquiry.
```text
{'name': 'The War of the Worlds', 'description': 'A Martian invasion of Earth throws humanity into chaos.', 'author': 'H.G. Wells', 'year': 1898} score: 0.570093257022374
{'name': "The Hitchhiker's Guide to the Galaxy", 'description': 'A comedic science fiction series following the misadventures of an unwitting human and his alien friend.', 'author': 'Douglas Adams', 'year': 1979} score: 0.5040468703143637
{'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216
```
### Narrow down the query
How about the most recent book from the early 2000s?
```python
hits = client.query_points(
collection_name="my_books",
query=encoder.encode("alien invasion").tolist(),
query_filter=models.Filter(
must=[models.FieldCondition(key="year", range=models.Range(gte=2000))]
),
limit=1,
).points
for hit in hits:
print(hit.payload, "score:", hit.score)
```
**Response:**
The query has been narrowed down to one result from 2008.
```text
{'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216
```
## Next Steps
Congratulations, you have just created your very first search engine! Trust us, the rest of Qdrant is not that complicated, either. For your next tutorial you should try building an actual [Neural Search Service with a complete API and a dataset](../../tutorials/neural-search/).
## Return to the bash shell
To return to the bash prompt:
1. Press Ctrl+D to exit the Python prompt (`>>>`).
1. Enter the `deactivate` command to deactivate the virtual environment.
| documentation/tutorials/search-beginners.md |
---
title: Multimodal Search
weight: 4
---
# Multimodal Search with Qdrant and FastEmbed
| Time: 15 min | Level: Beginner |Output: [GitHub](https://github.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_FastEmbed.ipynb)|[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_FastEmbed.ipynb) |
| --- | ----------- | ----------- | ----------- |
In this tutorial, you will set up a simple Multimodal Image & Text Search with Qdrant & FastEmbed.
## Overview
We often understand and share information more effectively when combining different types of data. For example, the taste of comfort food can trigger childhood memories. We might describe a song with just “pam pam clap” sounds instead of writing paragraphs. Sometimes, we may use emojis and stickers to express how we feel or to share complex ideas.
Modalities of data such as **text**, **images**, **video** and **audio** in various combinations form valuable use cases for Semantic Search applications.
Vector databases, being **modality-agnostic**, are perfect for building these applications.
In this tutorial, we are working with two simple modalities: **image** and **text** data. However, you can create a Semantic Search application with any combination of modalities if you choose the right embedding model to bridge the **semantic gap**.
> The **semantic gap** refers to the difference between low-level features (e.g., brightness) and high-level concepts (e.g., cuteness).
For example, the [ImageBind model](https://github.com/facebookresearch/ImageBind) from Meta AI is said to bind all 4 mentioned modalities in one shared space.
## Prerequisites
> **Note**: The code for this tutorial can be found [here](https://github.com/qdrant/examples/multimodal-search)
To complete this tutorial, you will need either Docker (to run a pre-built Qdrant image) and Python version ≥ 3.8, or a Google Colab notebook if you don't want to install anything locally.
We showed how to run Qdrant in Docker in the ["Create a Simple Neural Search Service"](https://qdrant.tech/documentation/tutorials/neural-search/) Tutorial.
## Setup
First, install the required libraries `qdrant-client`, `fastembed` and `Pillow`.
For example, with the `pip` package manager, it can be done in the following way.
```bash
python3 -m pip install --upgrade qdrant-client fastembed Pillow
```
<aside role="status">
We will use <a href="https://qdrant.tech/documentation/fastembed/">FastEmbed</a> for generating multimodal embeddings and <b>Qdrant</b> for storing and retrieving them.
</aside>
## Dataset
To make the demonstration simple, we created a tiny dataset of images and their captions for you.
Images can be downloaded from [here](https://github.com/qdrant/examples/multimodal-search/images).
It's **important** to place them in the same folder as your code/notebook, in the folder named `images`.
You can check out what the images look like in the following way:
```python
from PIL import Image
Image.open('images/lizard.jpg')
```
## Vectorize data
`FastEmbed` supports the **Contrastive Language–Image Pre-training** ([CLIP](https://openai.com/index/clip/)) model, an older (2021) but classic model of multimodal Image-Text Machine Learning.
**CLIP** was one of the first models of its kind with zero-shot capabilities.
When using it for semantic search, it's important to remember that the textual encoder of CLIP is trained to process no more than **77 tokens**,
so CLIP is good for short texts.
Let's embed a very short selection of images and their captions in the **shared embedding space** with CLIP.
```python
from fastembed import TextEmbedding, ImageEmbedding
documents = [{"caption": "A photo of a cute pig",
"image": "images/piggy.jpg"},
{"caption": "A picture with a coffee cup",
"image": "images/coffee.jpg"},
{"caption": "A photo of a colourful lizard",
"image": "images/lizard.jpg"}
]
text_model_name = "Qdrant/clip-ViT-B-32-text" #CLIP text encoder
text_model = TextEmbedding(model_name=text_model_name)
text_embeddings_size = text_model._get_model_description(text_model_name)["dim"] #dimension of text embeddings, produced by CLIP text encoder (512)
texts_embedded = list(text_model.embed([document["caption"] for document in documents])) #embedding captions with CLIP text encoder
image_model_name = "Qdrant/clip-ViT-B-32-vision" #CLIP image encoder
image_model = ImageEmbedding(model_name=image_model_name)
image_embeddings_size = image_model._get_model_description(image_model_name)["dim"] #dimension of image embeddings, produced by CLIP image encoder (512)
images_embedded = list(image_model.embed([document["image"] for document in documents])) #embedding images with CLIP image encoder
```
## Upload data to Qdrant
1. **Create a client object for Qdrant**.
```python
from qdrant_client import QdrantClient, models
client = QdrantClient("http://localhost:6333") # or QdrantClient(":memory:") if you're using Google Colab; the in-memory option is suitable only for simple prototypes/demos with the Python client
```
2. **Create a new collection for your images with captions**.
CLIP’s weights were trained to maximize the scaled **Cosine Similarity** of truly corresponding image/caption pairs,
so that's the **Distance Metric** we will choose for our [Collection](https://qdrant.tech/documentation/concepts/collections/) of [Named Vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
Using **Named Vectors**, we can easily showcase both Text-to-Image and Image-to-Text search (and, likewise, Image-to-Image and Text-to-Text).
```python
if not client.collection_exists("text_image"): #creating a Collection
client.create_collection(
collection_name ="text_image",
vectors_config={ #Named Vectors
"image": models.VectorParams(size=image_embeddings_size, distance=models.Distance.COSINE),
"text": models.VectorParams(size=text_embeddings_size, distance=models.Distance.COSINE),
}
)
```
3. **Upload our images with captions to the Collection**.
Each image with its caption will create a [Point](https://qdrant.tech/documentation/concepts/points/) in Qdrant.
```python
client.upload_points(
collection_name="text_image",
points=[
models.PointStruct(
id=idx, #unique id of a point, pre-defined by the user
vector={
"text": texts_embeded[idx], #embeded caption
"image": images_embeded[idx] #embeded image
},
payload=doc #original image and its caption
)
for idx, doc in enumerate(documents)
]
)
```
## Search
<h3 style="font-size: 1.25em;">Text-to-Image</h3>
Let's see which image we get for the query "*What would make me energetic in the morning?*"
```python
from PIL import Image
find_image = text_model.embed(["What would make me energetic in the morning?"]) #query, we embed it, so it also becomes a vector
Image.open(client.search(
collection_name="text_image", #searching in our collection
query_vector=("image", list(find_image)[0]), #searching only among image vectors with our textual query
with_payload=["image"], #user-readable information about search results, we are interested to see which image we will find
limit=1 #top-1 similar to the query result
)[0].payload['image'])
```
**Response:**
![Coffee Image](/docs/coffee.jpg)
<h3 style="font-size: 1.25em;">Image-to-Text</h3>
Now, let's do a reverse search with an image:
```python
from PIL import Image
Image.open('images/piglet.jpg')
```
![Piglet Image](/docs/piglet.jpg)
Let's see which caption we get when searching with this piglet image, which, as you can check, is not in our **Collection**.
```python
find_image = image_model.embed(['images/piglet.jpg']) #embedding our image query
client.search(
collection_name="text_image",
query_vector=("text", list(find_image)[0]), #now we are searching only among text vectors with our image query
with_payload=["caption"], #user-readable information about search results, we are interested to see which caption we will get
limit=1
)[0].payload['caption']
```
**Response:**
```text
'A photo of a cute pig'
```
## Next steps
Use cases of even just Image & Text Multimodal Search are countless: E-Commerce, Media Management, Content Recommendation, Emotion Recognition Systems, Biomedical Image Retrieval, Spoken Sign Language Transcription, etc.
Imagine a scenario: a user wants to find a product similar to a picture they have, but they also have specific textual requirements, like "*in beige colour*".
You can search using just texts or images and combine their embeddings in a **late fusion manner** (summing and weighting might work surprisingly well).
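For instance, a minimal late-fusion sketch reusing the `text_model`, `image_model`, and `text_image` collection from this tutorial could look like the following; the equal 0.5/0.5 weights are an arbitrary assumption you would tune for your data:
```python
import numpy as np

# Embed both parts of the query into the shared CLIP space (both are 512-dimensional)
text_query = next(iter(text_model.embed(["in beige colour"])))
image_query = next(iter(image_model.embed(["images/piglet.jpg"])))

# Late fusion: a weighted sum of the two query vectors
fused_query = 0.5 * np.array(text_query) + 0.5 * np.array(image_query)

hits = client.search(
    collection_name="text_image",
    query_vector=("image", fused_query.tolist()),  # search among image vectors with the fused query
    with_payload=["image", "caption"],
    limit=3,
)
```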
Moreover, using [Discovery Search](https://qdrant.tech/articles/discovery-search/) with both modalities, you can provide users with information that is impossible to retrieve unimodally!
Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, experiment, and have fun! | documentation/tutorials/multimodal-search-fastembed.md |
---
title: Load Hugging Face dataset
weight: 19
---
# Loading a dataset from Hugging Face hub
[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and
datasets. [Qdrant](https://huggingface.co/Qdrant) also publishes datasets along with the
embeddings that you can use to practice with Qdrant and build your applications based on semantic
search. **Please [let us know](https://qdrant.to/discord) if you'd like to see a specific dataset!**
## arxiv-titles-instructorxl-embeddings
[This dataset](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) contains
embeddings generated from the paper titles only. Each vector has a payload with the title used to
create it, along with the DOI (Digital Object Identifier).
```json
{
"title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities",
"DOI": "1612.05191"
}
```
You can find a detailed description of the dataset in the [Practice Datasets](/documentation/datasets/#journal-article-titles)
section. If you prefer loading the dataset from a Qdrant snapshot, it is also linked there.
Loading the dataset is as simple as using the `load_dataset` function from the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Qdrant/arxiv-titles-instructorxl-embeddings")
```
<aside role="status">The dataset has over 16 GB, so it might take a while to download.</aside>
The dataset contains 2,250,000 vectors. This is how you can check the list of the features in the dataset:
```python
dataset.features
```
### Streaming the dataset
Dataset streaming lets you work with a dataset without downloading it. The data is streamed as
you iterate over the dataset. You can read more about it in the [Hugging Face
documentation](https://huggingface.co/docs/datasets/stream).
```python
from datasets import load_dataset
dataset = load_dataset(
"Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True
)
```
### Loading the dataset into Qdrant
You can load the dataset into Qdrant using the [Python SDK](https://github.com/qdrant/qdrant-client).
The embeddings are already precomputed, so you can store them in a collection that we're going
to create in a second:
```python
from qdrant_client import QdrantClient, models
client = QdrantClient("http://localhost:6333")
client.create_collection(
collection_name="arxiv-titles-instructorxl-embeddings",
vectors_config=models.VectorParams(
size=768,
distance=models.Distance.COSINE,
),
)
```
It is always a good idea to use batching while loading a large dataset, so let's do that.
We are going to need a helper function to split the dataset into batches:
```python
from itertools import islice
def batched(iterable, n):
iterator = iter(iterable)
while batch := list(islice(iterator, n)):
yield batch
```
If you are a happy user of Python 3.12+, you can use the [`batched` function from the `itertools` module](https://docs.python.org/3/library/itertools.html#itertools.batched) instead.
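For example, a sketch assuming Python 3.12+ (note that `itertools.batched` yields tuples rather than lists):
```python
from itertools import batched  # Python 3.12+ standard library

for batch in batched(dataset, 100):  # yields tuples of up to 100 records each
    ...  # apply the same upsert logic as shown below
```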
No matter what Python version you are using, you can use the `upsert` method to load the dataset,
batch by batch, into Qdrant:
```python
batch_size = 100
for batch in batched(dataset, batch_size):
ids = [point.pop("id") for point in batch]
vectors = [point.pop("vector") for point in batch]
client.upsert(
collection_name="arxiv-titles-instructorxl-embeddings",
points=models.Batch(
ids=ids,
vectors=vectors,
payloads=batch,
),
)
```
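Once the loop finishes, you can run a quick sanity check. The sketch below simply reuses the first precomputed vector from the dataset as a query (an arbitrary choice for illustration):
```python
from datasets import load_dataset

# Re-open the stream and take the first record as a query vector
sample = next(iter(load_dataset(
    "Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True
)))

hits = client.query_points(
    collection_name="arxiv-titles-instructorxl-embeddings",
    query=sample["vector"],
    limit=3,
).points

for hit in hits:
    print(hit.payload["title"], hit.score)
```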
Your collection is ready to be used for search! Please [let us know using Discord](https://qdrant.to/discord)
if you would like to see more datasets published on Hugging Face hub.
| documentation/tutorials/huggingface-datasets.md |
---
title: Hybrid Search with Fastembed
weight: 2
aliases:
- /documentation/tutorials/neural-search-fastembed/
---
# Create a Hybrid Search Service with Fastembed
| Time: 20 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/) |
| --- | ----------- | ----------- |----------- |
This tutorial shows you how to build and deploy your own hybrid search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query.
The website contains the company names, descriptions, locations, and a picture for each entry.
As we have already written on our [blog](/articles/hybrid-search/), there is no single definition of hybrid search.
In this tutorial we are covering the case with a combination of dense and [sparse embeddings](/articles/sparse-vectors/).
The former refers to embeddings generated by well-known neural networks such as BERT, while the latter is closer to a traditional full-text search approach.
Our hybrid search service will use the [Fastembed](https://github.com/qdrant/fastembed) package to generate embeddings of text descriptions and [FastAPI](https://fastapi.tiangolo.com/) to serve the search API.
Fastembed natively integrates with the Qdrant client, so you can easily upload the data into Qdrant and perform search queries.
![Hybrid Search Schema](/documentation/tutorials/hybrid-search-with-fastembed/hybrid-search-schema.png)
## Workflow
To create a hybrid search service, you will need to transform your raw data and then create a search function to manipulate it.
First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a hybrid search API and 4) serve it using FastAPI.
![Hybrid Search Workflow](/docs/workflow-neural-search.png)
## Prerequisites
To complete this tutorial, you will need:
- Docker - The easiest way to use Qdrant is to run a pre-built Docker image.
- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
- Python version >=3.8
## Prepare sample dataset
To conduct a hybrid search on startup descriptions, you must first encode the description data into vectors.
The Fastembed integration in the Qdrant client combines encoding and uploading into a single step.
It also takes care of batching and parallelization, so you don't have to worry about it.
Let's start by downloading the data and installing the necessary packages.
1. First you need to download the dataset.
```bash
wget https://storage.googleapis.com/generall-shared-data/startups_demo.json
```
## Run Qdrant in Docker
Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.
> **Note:** Before you begin, create a project directory and a virtual python environment in it.
1. Download the Qdrant image from DockerHub.
```bash
docker pull qdrant/qdrant
```
2. Start Qdrant inside of Docker.
```bash
docker run -p 6333:6333 \
-v $(pwd)/qdrant_storage:/qdrant/storage \
qdrant/qdrant
```
You should see output like this
```text
...
[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
[2021-02-05T00:08:51Z INFO actix_server::builder] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333
```
Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.
All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.
## Upload data to Qdrant
1. Install the official Python client to best interact with Qdrant.
```bash
pip install "qdrant-client[fastembed]>=1.8.2"
```
> **Note:** This tutorial requires fastembed version >=0.2.6.
At this point, you should have startup records in the `startups_demo.json` file and Qdrant running on a local machine.
Now you need to write a script to upload all startup data and vectors into the search engine.
2. Create a client object for Qdrant.
```python
# Import client library
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
```
3. Select model to encode your data.
You will be using two pre-trained models to compute dense and sparse vectors correspondingly: `sentence-transformers/all-MiniLM-L6-v2` and `prithivida/Splade_PP_en_v1`.
<aside role="status">
Hybrid search implementation can be easily switched to a dense vector search by omitting the lines related to sparse vectors.
</aside>
```python
client.set_model("sentence-transformers/all-MiniLM-L6-v2")
# comment this line to use dense vectors only
client.set_sparse_model("prithivida/Splade_PP_en_v1")
```
4. Related vectors need to be added to a collection. Create a new collection for your startup vectors.
```python
if not client.collection_exists("startups"):
client.create_collection(
collection_name="startups",
vectors_config=client.get_fastembed_vector_params(),
# comment this line to use dense vectors only
sparse_vectors_config=client.get_fastembed_sparse_vector_params(),
)
```
Qdrant requires vectors to have their own names and configurations.
Methods `get_fastembed_vector_params` and `get_fastembed_sparse_vector_params` help you to get the corresponding parameters for the models you are using.
These parameters include vector size, distance function, etc.
Without fastembed integration, you would need to specify the vector size and distance function manually. Read more about it [here](/documentation/tutorials/neural-search/).
Additionally, you can specify extended configuration for your vectors, like `quantization_config` or `hnsw_config`.
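For illustration, the collection above could instead be created with such extended settings; this is only a sketch, and the specific values are arbitrary and should be tuned for your workload:
```python
from qdrant_client import models

if not client.collection_exists("startups"):
    client.create_collection(
        collection_name="startups",
        vectors_config=client.get_fastembed_vector_params(),
        sparse_vectors_config=client.get_fastembed_sparse_vector_params(),
        # Illustrative extended settings:
        hnsw_config=models.HnswConfigDiff(m=16, ef_construct=100),
        quantization_config=models.ScalarQuantization(
            scalar=models.ScalarQuantizationConfig(type=models.ScalarType.INT8),
        ),
    )
```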
5. Read data from the file.
```python
import json
payload_path = "startups_demo.json"
metadata = []
documents = []
with open(payload_path) as fd:
for line in fd:
obj = json.loads(line)
documents.append(obj.pop("description"))
metadata.append(obj)
```
In this block of code, we read data from `startups_demo.json` file and split it into 2 lists: `documents` and `metadata`.
Documents are the raw text descriptions of startups. Metadata is the payload associated with each startup, such as the name, location, and picture.
We will use `documents` to encode the data into vectors.
6. Encode and upload data.
```python
client.add(
collection_name="startups",
documents=documents,
metadata=metadata,
parallel=0, # Use all available CPU cores to encode data.
# Requires wrapping code into if __name__ == '__main__' block
)
```
<aside role="status">
Vector generation process might be time-consuming. In order to save time, you can skip this step by uploading already processed data (available under the spoiler).
</aside>
<details>
<summary>Upload processed data</summary>
Download and unpack the processed data from [here](https://storage.googleapis.com/dataset-startup-search/startup-list-com/startups_hybrid_search_processed_40k.tar.gz) or use the following script:
```bash
wget https://storage.googleapis.com/dataset-startup-search/startup-list-com/startups_hybrid_search_processed_40k.tar.gz
tar -xvf startups_hybrid_search_processed_40k.tar.gz
```
Then you can upload the data to Qdrant.
```python
from typing import Iterable, List
import json
import numpy as np
from qdrant_client import models
def named_vectors(vectors: List[List[float]], sparse_vectors: List[dict]) -> Iterable[dict]:
# make sure to use the same client object as previously
# or `set_model_name` and `set_sparse_model_name` manually
dense_vector_name = client.get_vector_field_name()
sparse_vector_name = client.get_sparse_vector_field_name()
for vector, sparse_vector in zip(vectors, sparse_vectors):
yield {
dense_vector_name: vector,
sparse_vector_name: models.SparseVector(**sparse_vector),
}
with open("dense_vectors.npy", "rb") as f:
vectors = np.load(f)
with open("sparse_vectors.json", "r") as f:
sparse_vectors = json.load(f)
with open("payload.json", "r",) as f:
payload = json.load(f)
client.upload_collection(
"startups", vectors=named_vectors(vectors, sparse_vectors), payload=payload
)
```
</details>
The `add` method will encode all documents and upload them to Qdrant.
This is one of the two fastembed-specific methods that combine encoding and uploading into a single step.
The `parallel` parameter enables data-parallelism instead of built-in ONNX parallelism.
Additionally, you can specify ids for each document, if you want to use them later to update or delete documents.
If you don't specify ids, they will be generated automatically and returned as a result of the `add` method.
You can monitor the progress of the encoding by wrapping the ids in a tqdm progress bar and passing them to the `add` method.
```python
from tqdm import tqdm
client.add(
collection_name="startups",
documents=documents,
metadata=metadata,
ids=tqdm(range(len(documents))),
)
```
## Build the search API
Now that all the preparations are complete, let's start building the hybrid search class.
In order to process incoming requests, the hybrid search class will need 3 things: 1) models to convert the query into a vector, 2) the Qdrant client to perform search queries, and 3) a fusion function to re-rank dense and sparse search results.
The Fastembed integration encapsulates query encoding, search, and fusion into a single method call.
Fastembed leverages [reciprocal rank fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) to combine the results.
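For intuition, reciprocal rank fusion combines ranked result lists roughly like this; the snippet is an illustrative sketch rather than the library's internal implementation, and `k=60` is the constant suggested in the paper:
```python
from collections import defaultdict


def reciprocal_rank_fusion(result_lists, k=60):
    # Each result list is an ordered sequence of document ids, best first
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # A higher fused score means the document ranked well in more lists
    return sorted(scores, key=scores.get, reverse=True)


# Example: fuse a dense and a sparse ranking
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"]]))
```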
1. Create a file named `hybrid_searcher.py` and specify the following.
```python
from qdrant_client import QdrantClient
class HybridSearcher:
DENSE_MODEL = "sentence-transformers/all-MiniLM-L6-v2"
SPARSE_MODEL = "prithivida/Splade_PP_en_v1"
def __init__(self, collection_name):
self.collection_name = collection_name
# initialize Qdrant client
self.qdrant_client = QdrantClient("http://localhost:6333")
self.qdrant_client.set_model(self.DENSE_MODEL)
# comment this line to use dense vectors only
self.qdrant_client.set_sparse_model(self.SPARSE_MODEL)
```
2. Write the search function.
```python
def search(self, text: str):
search_result = self.qdrant_client.query(
collection_name=self.collection_name,
query_text=text,
query_filter=None, # If you don't want any filters for now
limit=5, # 5 the closest results
)
# `search_result` contains found vector ids with similarity scores
# along with the stored payload
# Select and return metadata
metadata = [hit.metadata for hit in search_result]
return metadata
```
3. Add search filters.
With Qdrant it is also feasible to add some conditions to the search.
For example, if you wanted to search for startups in a certain city, the search query could look like this:
```python
from qdrant_client import models
...
city_of_interest = "Berlin"
# Define a filter for cities
city_filter = models.Filter(
must=[
models.FieldCondition(
key="city",
match=models.MatchValue(value=city_of_interest)
)
]
)
search_result = self.qdrant_client.query(
collection_name=self.collection_name,
query_text=text,
query_filter=city_filter,
limit=5
)
...
```
You have now created a class for hybrid search queries. Now wrap it up into a service.
## Deploy the search with FastAPI
To build the service you will use the FastAPI framework.
1. Install FastAPI.
To install it, use the command
```bash
pip install fastapi uvicorn
```
2. Implement the service.
Create a file named `service.py` and specify the following.
The service will have only one API endpoint and will look like this:
```python
from fastapi import FastAPI
# The file where HybridSearcher is stored
from hybrid_searcher import HybridSearcher
app = FastAPI()
# Create a neural searcher instance
hybrid_searcher = HybridSearcher(collection_name="startups")
@app.get("/api/search")
def search_startup(q: str):
return {"result": hybrid_searcher.search(text=q)}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
3. Run the service.
```bash
python service.py
```
4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).
You should be able to see a debug interface for your service.
![FastAPI Swagger interface](/docs/fastapi_neural_search.png)
Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.
Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
| documentation/tutorials/hybrid-search-fastembed.md |
---
title: Asynchronous API
weight: 14
---
# Using Qdrant asynchronously
Asynchronous programming is being broadly adopted in the Python ecosystem. Tools such as FastAPI [have embraced this new
paradigm](https://fastapi.tiangolo.com/async/), but it is also becoming a standard for ML models served as SaaS. For example, the Cohere SDK
[provides an async client](https://github.com/cohere-ai/cohere-python/blob/856a4c3bd29e7a75fa66154b8ac9fcdf1e0745e0/src/cohere/client.py#L189) next to its synchronous counterpart.
Databases are often launched as separate services and are accessed via a network. All the interactions with them are IO-bound and can
be performed asynchronously so as not to waste time actively waiting for a server response. In Python, this is achieved by
using [`async/await`](https://docs.python.org/3/library/asyncio-task.html) syntax. That lets the interpreter switch to another task
while waiting for a response from the server.
## When to use async API
There is no need to use the async API if the application you are writing will never support multiple users at once (e.g., a script that runs once per day). However, if you are writing a web service that multiple users will use simultaneously, you shouldn't be
blocking the threads of the web server as it limits the number of concurrent requests it can handle. In this case, you should use
the async API.
Modern web frameworks like [FastAPI](https://fastapi.tiangolo.com/) and [Quart](https://quart.palletsprojects.com/en/latest/) support
async API out of the box. Mixing asynchronous code with an existing synchronous codebase might be a challenge. The `async/await` syntax
cannot be used in synchronous functions. On the other hand, calling an IO-bound operation synchronously in async code is considered
an antipattern. Therefore, if you build an async web service, exposed through an [ASGI](https://asgi.readthedocs.io/en/latest/) server,
you should use the async API for all the interactions with Qdrant.
<aside role="status">
All the async code has to be launched in an async context. Usually, it means you have to use <code>asyncio.run</code> or <code>asyncio.create_task</code> to run them.
Please refer to the <a href="https://docs.python.org/3/library/asyncio.html">asyncio documentation</a> for more details.
</aside>
### Using Qdrant asynchronously
The simplest way of running asynchronous code is to define an `async` function and run it with `asyncio.run` in the following way:
```python
from qdrant_client import models
import qdrant_client
import asyncio
async def main():
client = qdrant_client.AsyncQdrantClient("localhost")
# Create a collection
await client.create_collection(
collection_name="my_collection",
vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)
# Insert a vector
await client.upsert(
collection_name="my_collection",
points=[
models.PointStruct(
id="5c56c793-69f3-4fbf-87e6-c4bf54c28c26",
payload={
"color": "red",
},
vector=[0.9, 0.1, 0.1, 0.5],
),
],
)
# Search for nearest neighbors
    points = (await client.query_points(
        collection_name="my_collection",
        query=[0.9, 0.1, 0.1, 0.5],
        limit=2,
    )).points
# Your async code using AsyncQdrantClient might be put here
# ...
asyncio.run(main())
```
The `AsyncQdrantClient` provides the same methods as the synchronous counterpart `QdrantClient`. If you already have a synchronous
codebase, switching to async API is as simple as replacing `QdrantClient` with `AsyncQdrantClient` and adding `await` before each
method call.
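For example, a synchronous call and its asynchronous counterpart side by side (a minimal sketch that assumes a collection named `my_collection` already exists):
```python
from qdrant_client import QdrantClient, AsyncQdrantClient

# Synchronous
client = QdrantClient("localhost")
info = client.get_collection("my_collection")


# Asynchronous - must be awaited from within an async context
async def get_info():
    async_client = AsyncQdrantClient("localhost")
    return await async_client.get_collection("my_collection")
```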
<aside role="status">
Asynchronous client was introduced in <code>qdrant-client</code> version 1.6.1. If you are using an older version, you need to use autogenerated async clients directly.
</aside>
## Supported Python libraries
Qdrant integrates with numerous Python libraries. Until recently, only [Langchain](https://python.langchain.com) provided async Python API support.
Qdrant is the only vector database with full coverage of async API in Langchain. Their documentation [describes how to use
it](https://python.langchain.com/docs/modules/data_connection/vectorstores/#asynchronous-operations).
| documentation/tutorials/async-api.md |
---
title: Create and restore from snapshot
weight: 14
---
# Create and restore collections from snapshot
| Time: 20 min | Level: Beginner | | |
|--------------|-----------------|--|----|
A collection is a basic unit of data storage in Qdrant. It contains vectors, their IDs, and payloads. However, keeping the search efficient requires additional data structures to be built on top of the data. Building these data structures may take a while, especially for large collections.
That's why using snapshots is the best way to export and import Qdrant collections, as they contain all the bits and pieces required to restore the entire collection efficiently.
This tutorial will show you how to create a snapshot of a collection and restore it. Since working with snapshots in a distributed environment might seem a bit more complex, we will use a 3-node Qdrant cluster. However, the same approach applies to a single-node setup.
<aside role="status">Snapshots cannot be created in local mode of Python SDK. You need to spin up a Qdrant Docker container or use Qdrant Cloud.</aside>
You can use the techniques described in this page to migrate a cluster. Follow the instructions
in this tutorial to create and download snapshots. When you [Restore from snapshot](#restore-from-snapshot), restore your data to the new cluster.
## Prerequisites
Let's assume you already have a running Qdrant instance or a cluster. If not, you can follow the [installation guide](/documentation/guides/installation/) to set up a local Qdrant instance or use [Qdrant Cloud](https://cloud.qdrant.io/) to create a cluster in a few clicks.
Once the cluster is running, let's install the required dependencies:
```shell
pip install qdrant-client datasets
```
### Establish a connection to Qdrant
We are going to use the Python SDK and raw HTTP calls to interact with Qdrant. Since we are going to use a 3-node cluster, we need to know the URLs of all the nodes. For simplicity, let's keep them all in constants, along with the API key, so we can refer to them later:
```python
QDRANT_MAIN_URL = "https://my-cluster.com:6333"
QDRANT_NODES = (
"https://node-0.my-cluster.com:6333",
"https://node-1.my-cluster.com:6333",
"https://node-2.my-cluster.com:6333",
)
QDRANT_API_KEY = "my-api-key"
```
<aside role="status">If you are using Qdrant Cloud, you can find the URL and API key in the <a href="https://cloud.qdrant.io/">Qdrant Cloud dashboard</a>.</aside>
We can now create a client instance:
```python
from qdrant_client import QdrantClient
client = QdrantClient(QDRANT_MAIN_URL, api_key=QDRANT_API_KEY)
```
First of all, we are going to create a collection from a precomputed dataset. If you already have a collection, you can skip this step and start by [creating a snapshot](#create-and-download-snapshots).
<details>
<summary>(Optional) Create collection and import data</summary>
### Load the dataset
We are going to use a dataset with precomputed embeddings, available on Hugging Face Hub. The dataset is called [Qdrant/arxiv-titles-instructorxl-embeddings](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) and was created using the [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) model. It contains 2.25M embeddings for the titles of the papers from the [arXiv](https://arxiv.org/) dataset.
Loading the dataset is as simple as:
```python
from datasets import load_dataset
dataset = load_dataset(
"Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True
)
```
We used the streaming mode, so the dataset is not loaded into memory. Instead, we can iterate through it and extract the id and vector embedding:
```python
for payload in dataset:
id_ = payload.pop("id")
vector = payload.pop("vector")
print(id_, vector, payload)
```
A single payload looks like this:
```json
{
'title': 'Dynamics of partially localized brane systems',
'DOI': '1109.1415'
}
```
### Create a collection
First things first, we need to create our collection. We're not going to play with the configuration of it, but it makes sense to do it right now.
The configuration is also a part of the collection snapshot.
```python
from qdrant_client import models
if not client.collection_exists("test_collection"):
client.create_collection(
collection_name="test_collection",
vectors_config=models.VectorParams(
size=768, # Size of the embedding vector generated by the InstructorXL model
distance=models.Distance.COSINE
),
)
```
### Upload the dataset
Calculating the embeddings is usually a bottleneck of the vector search pipelines, but we are happy to have them in place already. Since the goal of this tutorial is to show how to create a snapshot, **we are going to upload only a small part of the dataset**.
```python
ids, vectors, payloads = [], [], []
for payload in dataset:
id_ = payload.pop("id")
vector = payload.pop("vector")
ids.append(id_)
vectors.append(vector)
payloads.append(payload)
# We are going to upload only 1000 vectors
if len(ids) == 1000:
break
client.upsert(
collection_name="test_collection",
points=models.Batch(
ids=ids,
vectors=vectors,
payloads=payloads,
),
)
```
Our collection is now ready to be used for search. Let's create a snapshot of it.
</details>
If you already have a collection, you can skip the previous step and start by [creating a snapshot](#create-and-download-snapshots).
## Create and download snapshots
Qdrant exposes an HTTP endpoint to request creating a snapshot, but we can also call it with the Python SDK.
Our setup consists of 3 nodes, so we need to call the endpoint **on each of them** and create a snapshot on each node. When using the Python SDK, that means creating a separate client instance for each node.
<aside role="status">You may get a timeout error, if the collection size is big. You can trigger the snapshot process in the background, without awaiting for the result, by using <code>wait=false</code> parameter. You can always <a href="/documentation/concepts/snapshots/#list-snapshot">list all the snapshots through the API</a> later on.</aside>
```python
snapshot_urls = []
for node_url in QDRANT_NODES:
node_client = QdrantClient(node_url, api_key=QDRANT_API_KEY)
snapshot_info = node_client.create_snapshot(collection_name="test_collection")
snapshot_url = f"{node_url}/collections/test_collection/snapshots/{snapshot_info.name}"
snapshot_urls.append(snapshot_url)
```
```http
// for `https://node-0.my-cluster.com:6333`
POST /collections/test_collection/snapshots
// for `https://node-1.my-cluster.com:6333`
POST /collections/test_collection/snapshots
// for `https://node-2.my-cluster.com:6333`
POST /collections/test_collection/snapshots
```
<details>
<summary>Response</summary>
```json
{
"result": {
"name": "test_collection-559032209313046-2024-01-03-13-20-11.snapshot",
"creation_time": "2024-01-03T13:20:11",
"size": 18956800
},
"status": "ok",
"time": 0.307644965
}
```
</details>
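If you triggered snapshot creation in the background, you can check which snapshots are available on each node before downloading them, for example:
```python
for node_url in QDRANT_NODES:
    node_client = QdrantClient(node_url, api_key=QDRANT_API_KEY)
    print(node_url, node_client.list_snapshots(collection_name="test_collection"))
```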
Once we have the snapshot URLs, we can download them. Please make sure to include the API key in the request headers.
Downloading the snapshot **can be done only through the HTTP API**, so we are going to use the `requests` library.
```python
import requests
import os
# Create a directory to store snapshots
os.makedirs("snapshots", exist_ok=True)
local_snapshot_paths = []
for snapshot_url in snapshot_urls:
snapshot_name = os.path.basename(snapshot_url)
local_snapshot_path = os.path.join("snapshots", snapshot_name)
response = requests.get(
snapshot_url, headers={"api-key": QDRANT_API_KEY}
)
with open(local_snapshot_path, "wb") as f:
response.raise_for_status()
f.write(response.content)
local_snapshot_paths.append(local_snapshot_path)
```
Alternatively, you can use the `wget` command:
```bash
wget https://node-0.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313046-2024-01-03-13-20-11.snapshot \
    --header="api-key: ${QDRANT_API_KEY}" \
    -O node-0-snapshot.snapshot
wget https://node-1.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313047-2024-01-03-13-20-12.snapshot \
    --header="api-key: ${QDRANT_API_KEY}" \
    -O node-1-snapshot.snapshot
wget https://node-2.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313048-2024-01-03-13-20-13.snapshot \
    --header="api-key: ${QDRANT_API_KEY}" \
    -O node-2-snapshot.snapshot
```
The snapshots are now stored locally. We can use them to restore the collection to a different Qdrant instance, or treat them as a backup. We will create another collection using the same data on the same cluster.
## Restore from snapshot
Our brand-new snapshot is ready to be restored. Typically, it is used to move a collection to a different Qdrant instance, but we are going to use it to create a new collection on the same cluster.
It is just going to have a different name, `test_collection_import`. We do not need to create a collection first, as it is going to be created automatically.
Restoring a collection is also done separately on each node, but our Python SDK does not support it yet. We are going to use the HTTP API instead,
and send a request to each node using `requests` library.
```python
for node_url, snapshot_path in zip(QDRANT_NODES, local_snapshot_paths):
snapshot_name = os.path.basename(snapshot_path)
requests.post(
f"{node_url}/collections/test_collection_import/snapshots/upload?priority=snapshot",
headers={
"api-key": QDRANT_API_KEY,
},
files={"snapshot": (snapshot_name, open(snapshot_path, "rb"))},
)
```
Alternatively, you can use the `curl` command:
```bash
curl -X POST 'https://node-0.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
    -H "api-key: ${QDRANT_API_KEY}" \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@node-0-snapshot.snapshot'
curl -X POST 'https://node-1.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
    -H "api-key: ${QDRANT_API_KEY}" \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@node-1-snapshot.snapshot'
curl -X POST 'https://node-2.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
    -H "api-key: ${QDRANT_API_KEY}" \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@node-2-snapshot.snapshot'
```
**Important:** We selected `priority=snapshot` to make sure that the snapshot is preferred over the data stored on the node. You can read more about snapshot priority in the [documentation](/documentation/concepts/snapshots/#snapshot-priority).
| documentation/tutorials/create-snapshot.md |
---
title: Collaborative filtering
short_description: "Build an effective movie recommendation system using collaborative filtering and Qdrant's similarity search."
description: "Build an effective movie recommendation system using collaborative filtering and Qdrant's similarity search."
preview_image: /blog/collaborative-filtering/social_preview.png
social_preview_image: /blog/collaborative-filtering/social_preview.png
weight: 23
---
# Create a collaborative filtering system
| Time: 45 min | Level: Intermediate | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb) | |
|--------------|---------------------|--|----|
Every time Spotify recommends the next song from a band you've never heard of, it uses a recommendation algorithm based on other users' interactions with that song. This type of algorithm is known as **collaborative filtering**.
Unlike content-based recommendations, collaborative filtering excels when the objects' semantics are only loosely related, or entirely unrelated, to users' preferences. This adaptability is what makes it so fascinating. Movie, music, or book recommendations are good examples of such use cases. After all, we rarely choose which book to read purely based on its plot twists.
The traditional way to build a collaborative filtering engine involves training a model that converts the sparse matrix of user-to-item relations into a compressed, dense representation of user and item vectors. Some of the most commonly referenced algorithms for this purpose include [SVD (Singular Value Decomposition)](https://en.wikipedia.org/wiki/Singular_value_decomposition) and [Factorization Machines](https://en.wikipedia.org/wiki/Matrix_factorization_(recommender_systems)). However, the model training approach requires significant resource investments. Model training necessitates data, regular re-training, and a mature infrastructure.
## Methodology
Fortunately, there is a way to build collaborative filtering systems without any model training. You can obtain interpretable recommendations and have a scalable system using a technique based on similarity search. Let’s explore how this works with an example of building a movie recommendation system.
<p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/9B7RrmQCQeQ?si=nHp-fM_szHynLcH8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
## Implementation
To implement this, you will use a simple yet powerful resource: [Qdrant with Sparse Vectors](https://qdrant.tech/articles/sparse-vectors/).
Notebook: [You can try this code here](https://githubtocolab.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb)
### Setup
You have to first import the necessary libraries and define the environment.
```python
import os
import pandas as pd
import requests
from qdrant_client import QdrantClient, models
from qdrant_client.models import PointStruct, SparseVector, NamedSparseVector
from collections import defaultdict
# OMDB API Key - for movie posters
omdb_api_key = os.getenv("OMDB_API_KEY")
# Collection name
collection_name = "movies"
# Set Qdrant Client
qdrant_client = QdrantClient(
os.getenv("QDRANT_HOST"),
api_key=os.getenv("QDRANT_API_KEY")
)
```
### Define output
Here, you will configure the recommendation engine to retrieve movie posters as output.
```python
# Function to get movie poster using OMDB API
def get_movie_poster(imdb_id, api_key):
url = f"https://www.omdbapi.com/?i={imdb_id}&apikey={api_key}"
data = requests.get(url).json()
return data.get('Poster'), data
```
### Prepare the data
Load the movie datasets. These include three main CSV files: user ratings, movie titles, and the movie links with IMDb IDs (used to fetch posters via the OMDB API).
```python
# Load CSV files
ratings_df = pd.read_csv('data/ratings.csv', low_memory=False)
movies_df = pd.read_csv('data/movies.csv', low_memory=False)
links = pd.read_csv('data/links.csv', low_memory=False)  # movieId-to-imdbId mapping, used later to fetch posters
# Convert movieId in ratings_df and movies_df to string
ratings_df['movieId'] = ratings_df['movieId'].astype(str)
movies_df['movieId'] = movies_df['movieId'].astype(str)
rating = ratings_df['rating']
# Normalize ratings
ratings_df['rating'] = (rating - rating.mean()) / rating.std()
# Merge ratings with movie metadata to get movie titles
merged_df = ratings_df.merge(
movies_df[['movieId', 'title']],
left_on='movieId', right_on='movieId', how='inner'
)
# Aggregate ratings to handle duplicate (userId, title) pairs
ratings_agg_df = merged_df.groupby(['userId', 'movieId']).rating.mean().reset_index()
ratings_agg_df.head()
```
| |userId |movieId |rating |
|---|-----------|---------|---------|
|0 |1 |1 |0.429960 |
|1 |1 |1036 |1.369846 |
|2 |1 |1049 |-0.509926|
|3 |1 |1066 |0.429960 |
|4 |1 |110 |0.429960 |
### Convert to sparse
If you want to search across numerous reviews from different users, you can represent these reviews in a sparse matrix.
```python
# Convert ratings to sparse vectors
user_sparse_vectors = defaultdict(lambda: {"values": [], "indices": []})
for row in ratings_agg_df.itertuples():
user_sparse_vectors[row.userId]["values"].append(row.rating)
user_sparse_vectors[row.userId]["indices"].append(int(row.movieId))
```
![collaborative-filtering](/blog/collaborative-filtering/collaborative-filtering.png)
### Upload the data
Here, you will create a new collection (if it doesn't already exist) and upload the data to it using the client initialized earlier.
Convert the user ratings to sparse vectors and include the `movieId` in the payload.
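If the collection does not exist yet, create it with a sparse vector field named `ratings`, matching the vector name used by the data generator below. A minimal sketch:
```python
if not qdrant_client.collection_exists(collection_name):
    qdrant_client.create_collection(
        collection_name=collection_name,
        vectors_config={},  # no dense vectors are needed for this example
        sparse_vectors_config={"ratings": models.SparseVectorParams()},
    )
```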
```python
# Define a data generator
def data_generator():
for user_id, sparse_vector in user_sparse_vectors.items():
yield PointStruct(
id=user_id,
vector={"ratings": SparseVector(
indices=sparse_vector["indices"],
values=sparse_vector["values"]
)},
payload={"user_id": user_id, "movie_id": sparse_vector["indices"]}
)
# Upload points using the data generator
qdrant_client.upload_points(
collection_name=collection_name,
points=data_generator()
)
```
### Define query
In order to get recommendations, we need to find users with similar tastes to ours.
Let's describe our preferences by providing ratings for some of our favorite movies.
`1` indicates that we like the movie, `-1` indicates that we dislike it.
```python
my_ratings = {
603: 1, # Matrix
13475: 1, # Star Trek
11: 1, # Star Wars
1091: -1, # The Thing
862: 1, # Toy Story
597: -1, # Titanic
680: -1, # Pulp Fiction
13: 1, # Forrest Gump
120: 1, # Lord of the Rings
87: -1, # Indiana Jones
562: -1 # Die Hard
}
```
<details>
<summary>Click to see the code for <code>to_vector</code> </summary>
```python
# Create sparse vector from my_ratings
def to_vector(ratings):
vector = SparseVector(
values=[],
indices=[]
)
for movie_id, rating in ratings.items():
vector.values.append(rating)
vector.indices.append(movie_id)
return vector
```
</details>
### Run the query
From the uploaded list of movies with ratings, we can perform a search in Qdrant to get the top most similar users to us.
```python
# Perform the search
results = qdrant_client.query_points(
collection_name=collection_name,
query=to_vector(my_ratings),
using="ratings",
limit=20
).points
```
Now we can find movies liked by the other, similar users, which we may not have seen yet.
Let's combine the results from the found users and sort the movies by their aggregated score (you could additionally filter out the movies you have already rated).
```python
# Convert results to scores and sort by score
def results_to_scores(results):
movie_scores = defaultdict(lambda: 0)
for result in results:
for movie_id in result.payload["movie_id"]:
movie_scores[movie_id] += result.score
return movie_scores
# Convert results to scores and sort by score
movie_scores = results_to_scores(results)
top_movies = sorted(movie_scores.items(), key=lambda x: x[1], reverse=True)
```
<details>
<summary> Visualize results in Jupyter Notebook </summary>
Finally, we display the top 5 recommended movies along with their posters and titles.
```python
# Create HTML to display top 5 results
html_content = "<div class='movies-container'>"
for movie_id, score in top_movies[:5]:
imdb_id_row = links.loc[links['movieId'] == int(movie_id), 'imdbId']
if not imdb_id_row.empty:
imdb_id = imdb_id_row.values[0]
poster_url, movie_info = get_movie_poster(imdb_id, omdb_api_key)
movie_title = movie_info.get('Title', 'Unknown Title')
html_content += f"""
<div class='movie-card'>
<img src="{poster_url}" alt="Poster" class="movie-poster">
<div class="movie-title">{movie_title}</div>
<div class="movie-score">Score: {score}</div>
</div>
"""
else:
continue # Skip if imdb_id is not found
html_content += "</div>"
display(HTML(html_content))
```
</details>
## Recommendations
For a complete display of movie posters, check the [notebook output](https://github.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb). Here are the results without html content.
```text
Toy Story, Score: 131.2033799
Monty Python and the Holy Grail, Score: 131.2033799
Star Wars: Episode V - The Empire Strikes Back, Score: 131.2033799
Star Wars: Episode VI - Return of the Jedi, Score: 131.2033799
Men in Black, Score: 131.2033799
```
On top of collaborative filtering, we can further enhance the recommendation system by incorporating other features like user demographics, movie genres, or movie tags.
Or, for example, only consider recent ratings via a time-based filter. This way, we can recommend movies that are currently popular among users.
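For example, if you stored a rating timestamp in each user's payload (not done in this tutorial, so the `last_rated` field below is hypothetical), the query could be restricted to recently active users:
```python
results = qdrant_client.query_points(
    collection_name=collection_name,
    query=to_vector(my_ratings),
    using="ratings",
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="last_rated",  # hypothetical payload field holding a Unix timestamp
                range=models.Range(gte=1700000000),
            )
        ]
    ),
    limit=20,
).points
```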
## Conclusion
As demonstrated, it is possible to build an interesting movie recommendation system without intensive model training using Qdrant and Sparse Vectors. This approach not only simplifies the recommendation process but also makes it scalable and interpretable. In future tutorials, we can experiment more with this combination to further enhance our recommendation systems.
| documentation/tutorials/collaborative-filtering.md |
---
title: Tutorials
weight: 13
# If the index.md file is empty, the link to the section will be hidden from the sidebar
is_empty: false
aliases:
- how-to
- tutorials
---
# Tutorials
These tutorials demonstrate different ways you can build vector search into your applications.
| Essential How-Tos | Description | Stack |
|---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------|
| [Semantic Search for Beginners](../tutorials/search-beginners/) | Create a simple search engine locally in minutes. | Qdrant |
| [Simple Neural Search](../tutorials/neural-search/) | Build and deploy a neural search that browses startup data. | Qdrant, BERT, FastAPI |
| [Neural Search with FastEmbed](../tutorials/neural-search-fastembed/) | Build and deploy a neural search with our FastEmbed library. | Qdrant |
| [Multimodal Search](../tutorials/multimodal-search-fastembed/) | Create a simple multimodal search. | Qdrant |
| [Bulk Upload Vectors](../tutorials/bulk-upload/) | Upload a large scale dataset. | Qdrant |
| [Asynchronous API](../tutorials/async-api/) | Communicate with Qdrant server asynchronously with Python SDK. | Qdrant, Python |
| [Create Dataset Snapshots](../tutorials/create-snapshot/) | Turn a dataset into a snapshot by exporting it from a collection. | Qdrant |
| [Load HuggingFace Dataset](../tutorials/huggingface-datasets/) | Load a Hugging Face dataset to Qdrant | Qdrant, Python, datasets |
| [Measure Retrieval Quality](../tutorials/retrieval-quality/) | Measure and fine-tune the retrieval quality | Qdrant, Python, datasets |
| [Search Through Code](../tutorials/code-search/) | Implement semantic search application for code search tasks | Qdrant, Python, sentence-transformers, Jina |
| [Setup Collaborative Filtering](../tutorials/collaborative-filtering/) | Implement a collaborative filtering system for recommendation engines | Qdrant|
| documentation/tutorials/_index.md |
---
title: Semantic-Router
---
# Semantic-Router
[Semantic-Router](https://www.aurelio.ai/semantic-router/) is a library to build decision-making layers for your LLMs and agents. It uses vector embeddings to make tool-use decisions rather than LLM generations, routing our requests using semantic meaning.
Qdrant is available as a supported index in Semantic-Router for you to ingest route data and perform retrievals.
## Installation
To use Semantic-Router with Qdrant, install the `qdrant` extra:
```console
pip install semantic-router[qdrant]
```
## Usage
Set up `QdrantIndex` with the appropriate configurations:
```python
from semantic_router.index import QdrantIndex
qdrant_index = QdrantIndex(
url="https://xyz-example.eu-central.aws.cloud.qdrant.io", api_key="<your-api-key>"
)
```
Once the Qdrant index is set up with the appropriate configurations, we can pass it to the `RouteLayer`.
```python
from semantic_router.layer import RouteLayer
RouteLayer(encoder=some_encoder, routes=some_routes, index=qdrant_index)
```
## Complete Example
<details>
<summary><b>Click to expand</b></summary>
```python
import os
from semantic_router import Route
from semantic_router.encoders import OpenAIEncoder
from semantic_router.index import QdrantIndex
from semantic_router.layer import RouteLayer
# we could use this as a guide for our chatbot to avoid political conversations
politics = Route(
name="politics value",
utterances=[
"isn't politics the best thing ever",
"why don't you tell me about your political opinions",
"don't you just love the president",
"they're going to destroy this country!",
"they will save the country!",
],
)
# this could be used as an indicator to our chatbot to switch to a more
# conversational prompt
chitchat = Route(
name="chitchat",
utterances=[
"how's the weather today?",
"how are things going?",
"lovely weather today",
"the weather is horrendous",
"let's go to the chippy",
],
)
# we place both of our decisions together into single list
routes = [politics, chitchat]
os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"
encoder = OpenAIEncoder()
rl = RouteLayer(
encoder=encoder,
routes=routes,
index=QdrantIndex(location=":memory:"),
)
print(rl("What have you been upto?").name)
```
This returns:
```console
[Out]: 'chitchat'
```
</details>
## 📚 Further Reading
- Semantic-Router [Documentation](https://github.com/aurelio-labs/semantic-router/tree/main/docs)
- Semantic-Router [Video Course](https://www.aurelio.ai/course/semantic-router)
- [Source Code](https://github.com/aurelio-labs/semantic-router/blob/main/semantic_router/index/qdrant.py)
| documentation/frameworks/semantic-router.md |
---
title: Testcontainers
---
# Testcontainers
Qdrant is available as a [Testcontainers module](https://testcontainers.com/modules/qdrant/) in multiple languages. It facilitates the spawning of a Qdrant instance for end-to-end testing.
As noted by [Testcontainers](https://testcontainers.com/), it "is an open source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container."
## Usage
```java
import org.testcontainers.qdrant.QdrantContainer;
QdrantContainer qdrantContainer = new QdrantContainer("qdrant/qdrant");
```
```go
import (
"github.com/testcontainers/testcontainers-go"
"github.com/testcontainers/testcontainers-go/modules/qdrant"
)
qdrantContainer, err := qdrant.RunContainer(ctx, testcontainers.WithImage("qdrant/qdrant"))
```
```typescript
import { QdrantContainer } from "@testcontainers/qdrant";
const qdrantContainer = await new QdrantContainer("qdrant/qdrant").start();
```
```python
from testcontainers.qdrant import QdrantContainer
qdrant_container = QdrantContainer("qdrant/qdrant").start()
```
Testcontainers modules provide options/methods to configure ENVs, volumes, and virtually everything you can configure in a Docker container.
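For instance, in Python you might set an environment variable and connect a client to the throwaway instance like this; this is a sketch that relies on the generic Testcontainers helpers and assumes the REST port 6333 is exposed by the module:
```python
from qdrant_client import QdrantClient
from testcontainers.qdrant import QdrantContainer

with QdrantContainer("qdrant/qdrant").with_env("QDRANT__LOG_LEVEL", "DEBUG") as qdrant:
    client = QdrantClient(
        host=qdrant.get_container_host_ip(),
        port=int(qdrant.get_exposed_port(6333)),
    )
    print(client.get_collections())  # the instance is discarded when the block exits
```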
## Further reading
- [Testcontainers Guides](https://testcontainers.com/guides/)
- [Testcontainers Qdrant Module](https://testcontainers.com/modules/qdrant/)
| documentation/frameworks/testcontainers.md |
---
title: Stanford DSPy
aliases: [ ../integrations/dspy/ ]
---
# Stanford DSPy
[DSPy](https://github.com/stanfordnlp/dspy) is the framework for solving advanced tasks with language models (LMs) and retrieval models (RMs). It unifies techniques for prompting and fine-tuning LMs — and approaches for reasoning, self-improvement, and augmentation with retrieval and tools.
- Provides composable and declarative modules for instructing LMs in a familiar Pythonic syntax.
- Introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program.
Qdrant can be used as a retrieval mechanism in the DSPy flow.
## Installation
For the Qdrant retrieval integration, include `dspy-ai` with the `qdrant` extra:
```bash
pip install dspy-ai[qdrant]
```
## Usage
We can configure `DSPy` settings to use the Qdrant retriever model like so:
```python
import dspy
from dspy.retrieve.qdrant_rm import QdrantRM
from qdrant_client import QdrantClient
turbo = dspy.OpenAI(model="gpt-3.5-turbo")
qdrant_client = QdrantClient() # Defaults to a local instance at http://localhost:6333/
qdrant_retriever_model = QdrantRM("collection-name", qdrant_client, k=3)
dspy.settings.configure(lm=turbo, rm=qdrant_retriever_model)
```
Using the retriever is pretty simple. The `dspy.Retrieve(k)` module will search for the top-k passages that match a given query.
```python
retrieve = dspy.Retrieve(k=3)
question = "Some question about my data"
topK_passages = retrieve(question).passages
print(f"Top {retrieve.k} passages for question: {question} \n", "\n")
for idx, passage in enumerate(topK_passages):
print(f"{idx+1}]", passage, "\n")
```
With Qdrant configured as the retriever for contexts, you can set up a DSPy module like so:
```python
class RAG(dspy.Module):
def __init__(self, num_passages=3):
super().__init__()
self.retrieve = dspy.Retrieve(k=num_passages)
...
def forward(self, question):
context = self.retrieve(question).passages
...
```
With the generic RAG blueprint now in place, you can add the many interactions offered by DSPy with context retrieval powered by Qdrant.
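For illustration, here is one common way to fill in the elided parts, following DSPy's standard RAG pattern; the signature string and module choice are assumptions to adapt to your task:
```python
class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)
```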
## Next steps
- Find DSPy usage docs and examples [here](https://github.com/stanfordnlp/dspy#4-documentation--tutorials).
- [Source Code](https://github.com/stanfordnlp/dspy/blob/main/dspy/retrieve/qdrant_rm.py)
| documentation/frameworks/dspy.md |
---
title: FiftyOne
aliases: [ ../integrations/fifty-one ]
---
# FiftyOne
[FiftyOne](https://voxel51.com/) is an open-source toolkit designed to enhance computer vision workflows by optimizing dataset quality
and providing valuable insights about your models. FiftyOne 0.20 includes a native integration with Qdrant, supporting workflows
like [image similarity search](https://docs.voxel51.com/user_guide/brain.html#image-similarity) and
[text search](https://docs.voxel51.com/user_guide/brain.html#text-similarity).
Qdrant helps FiftyOne find the most similar images in the dataset using vector embeddings.
FiftyOne is available as a Python package that can be installed in the following way:
```bash
pip install fiftyone
```
Please check out the documentation of FiftyOne on [Qdrant integration](https://docs.voxel51.com/integrations/qdrant.html).
| documentation/frameworks/fifty-one.md |
---
title: Pinecone Canopy
---
# Pinecone Canopy
[Canopy](https://github.com/pinecone-io/canopy) is an open-source framework and context engine to build chat assistants at scale.
Qdrant is supported as a knowledge base within Canopy for context retrieval and augmented generation.
## Usage
Install the SDK with the Qdrant extra as described in the [Canopy README](https://github.com/pinecone-io/canopy?tab=readme-ov-file#extras).
```bash
pip install canopy-sdk[qdrant]
```
### Creating a knowledge base
```python
from canopy.knowledge_base import QdrantKnowledgeBase
kb = QdrantKnowledgeBase(collection_name="<YOUR_COLLECTION_NAME>")
```
<aside role="status">The constructor accepts additional <a href="https://github.com/qdrant/qdrant-client/blob/eda201a1dbf1bbc67415f8437a5619f6f83e8ac6/qdrant_client/qdrant_client.py#L36-L61">options</a> to customize your connection to Qdrant.</aside>
To create a new Qdrant collection and connect it to the knowledge base, use the `create_canopy_collection` method:
```python
kb.create_canopy_collection()
```
You can always verify the connection to the collection with the `verify_index_connection` method:
```python
kb.verify_index_connection()
```
Learn more about customizing the knowledge base and its inner components [in the Canopy library](https://github.com/pinecone-io/canopy/blob/main/docs/library.md#understanding-knowledgebase-workings).
### Adding data to the knowledge base
To insert data into the knowledge base, you can create a list of documents and use the `upsert` method:
```python
from canopy.models.data_models import Document
documents = [
Document(
id="1",
text="U2 are an Irish rock band from Dublin, formed in 1976.",
source="https://en.wikipedia.org/wiki/U2",
),
Document(
id="2",
text="Arctic Monkeys are an English rock band formed in Sheffield in 2002.",
source="https://en.wikipedia.org/wiki/Arctic_Monkeys",
metadata={"my-key": "my-value"},
),
]
kb.upsert(documents)
```
### Querying the knowledge base
You can query the knowledge base with the `query` method to find the most similar documents to a given text:
```python
from canopy.models.data_models import Query
kb.query(
[
Query(text="Arctic Monkeys music genre"),
Query(
text="U2 music genre",
top_k=10,
metadata_filter={"key": "my-key", "match": {"value": "my-value"}},
),
]
)
```
## Further Reading
- [Introduction to Canopy](https://www.pinecone.io/blog/canopy-rag-framework/)
- [Canopy library reference](https://github.com/pinecone-io/canopy/blob/main/docs/library.md)
- [Source Code](https://github.com/pinecone-io/canopy/tree/main/src/canopy/knowledge_base/qdrant)
| documentation/frameworks/canopy.md |
---
title: Langchain Go
---
# Langchain Go
[Langchain Go](https://tmc.github.io/langchaingo/docs/) is a framework for developing data-aware applications powered by language models in Go.
You can use Qdrant as a vector store in Langchain Go.
## Setup
Install the `langchain-go` project dependency
```bash
go get -u github.com/tmc/langchaingo
```
## Usage
Before you use the following code sample, customize the following values for your configuration:
- `YOUR_QDRANT_REST_URL`: If you've set up Qdrant using the [Quick Start](/documentation/quick-start/) guide,
set this value to `http://localhost:6333`.
- `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections/) guide to create or
list collections.
```go
package main
import (
"log"
"net/url"
"github.com/tmc/langchaingo/embeddings"
"github.com/tmc/langchaingo/llms/openai"
"github.com/tmc/langchaingo/vectorstores/qdrant"
)
func main() {
    llm, err := openai.New()
    if err != nil {
        log.Fatal(err)
    }
    e, err := embeddings.NewEmbedder(llm)
    if err != nil {
        log.Fatal(err)
    }
    url, err := url.Parse("YOUR_QDRANT_REST_URL")
    if err != nil {
        log.Fatal(err)
    }
    store, err := qdrant.New(
        qdrant.WithURL(*url),
        qdrant.WithCollectionName("YOUR_COLLECTION_NAME"),
        qdrant.WithEmbedder(e),
    )
    if err != nil {
        log.Fatal(err)
    }
    // The store is now ready for adding documents and running similarity searches.
    _ = store
}
```
## Further Reading
- You can find usage examples of Langchain Go [here](https://github.com/tmc/langchaingo/tree/main/examples).
- [Source Code](https://github.com/tmc/langchaingo/tree/main/vectorstores/qdrant)
| documentation/frameworks/langchain-go.md |
---
title: Firebase Genkit
---
# Firebase Genkit
[Genkit](https://firebase.google.com/products/genkit) is a framework to build, deploy, and monitor production-ready AI-powered apps.
You can build apps that generate custom content, use semantic search, handle unstructured inputs, answer questions with your business data, autonomously make decisions, orchestrate tool calls, and more.
You can use Qdrant for indexing/semantic retrieval of data in your Genkit applications via the [Qdrant-Genkit plugin](https://github.com/qdrant/qdrant-genkit).
Genkit currently supports server-side development in JavaScript/TypeScript (Node.js) with Go support in active development.
## Installation
```bash
npm i genkitx-qdrant
```
## Configuration
To use this plugin, specify it when you call `configureGenkit()`:
```js
import { configureGenkit } from '@genkit-ai/core';
import { qdrant } from 'genkitx-qdrant';
import { textEmbeddingGecko } from '@genkit-ai/vertexai';
export default configureGenkit({
plugins: [
qdrant([
{
clientParams: {
host: 'localhost',
port: 6333,
},
collectionName: 'some-collection',
embedder: textEmbeddingGecko,
},
]),
],
// ...
});
```
You'll need to specify a collection name, the embedding model you want to use and the Qdrant client parameters. In
addition, there are a few optional parameters:
- `embedderOptions`: Additional options to pass to the embedder:
```js
embedderOptions: { taskType: 'RETRIEVAL_DOCUMENT' },
```
- `contentPayloadKey`: Name of the payload field with the document content. Defaults to "content".
```js
contentPayloadKey: 'content';
```
- `metadataPayloadKey`: Name of the payload field with the document metadata. Defaults to "metadata".
```js
metadataPayloadKey: 'metadata';
```
- `collectionCreateOptions`: [Additional options](https://qdrant.tech/documentation/concepts/collections/#create-a-collection) when creating the Qdrant collection.
## Usage
Import retriever and indexer references like so:
```js
import { qdrantIndexerRef, qdrantRetrieverRef } from 'genkitx-qdrant';
import { Document, index, retrieve } from '@genkit-ai/ai/retriever';
```
Then, pass the references to `retrieve()` and `index()`:
```js
// To specify an indexer:
export const qdrantIndexer = qdrantIndexerRef({
collectionName: 'some-collection',
displayName: 'Some Collection indexer',
});
await index({ indexer: qdrantIndexer, documents });
```
```js
// To specify a retriever:
export const qdrantRetriever = qdrantRetrieverRef({
collectionName: 'some-collection',
displayName: 'Some Collection Retriever',
});
let docs = await retrieve({ retriever: qdrantRetriever, query });
```
You can refer to [Retrieval-augmented generation](https://firebase.google.com/docs/genkit/rag) for a general
discussion on indexers and retrievers.
## Further Reading
- [Introduction to Genkit](https://firebase.google.com/docs/genkit)
- [Genkit Documentation](https://firebase.google.com/docs/genkit/get-started)
- [Source Code](https://github.com/qdrant/qdrant-genkit)
| documentation/frameworks/genkit.md |
---
title: Langchain4J
---
# LangChain for Java
LangChain for Java, also known as [Langchain4J](https://github.com/langchain4j/langchain4j), is a community port of [Langchain](https://www.langchain.com/) for building context-aware AI applications in Java.
You can use Qdrant as a vector store in Langchain4J through the [`langchain4j-qdrant`](https://central.sonatype.com/artifact/dev.langchain4j/langchain4j-qdrant) module.
## Setup
Add the `langchain4j-qdrant` to your project dependencies.
```xml
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-qdrant</artifactId>
<version>VERSION</version>
</dependency>
```
## Usage
Before you use the following code sample, customize the following values for your configuration:
- `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections/) guide to create or
list collections.
- `YOUR_HOST_URL`: Use the GRPC URL for your system. If you used the [Quick Start](/documentation/quick-start/) guide,
it may be `http://localhost:6334`. If you've deployed in the [Qdrant Cloud](/documentation/cloud/), you may have a
longer URL such as `https://example.location.cloud.qdrant.io:6334`.
- `YOUR_API_KEY`: Substitute the API key associated with your configuration.
```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore;
EmbeddingStore<TextSegment> embeddingStore =
QdrantEmbeddingStore.builder()
// Ensure the collection is configured with the appropriate dimensions
// of the embedding model.
// Reference https://qdrant.tech/documentation/concepts/collections/
.collectionName("YOUR_COLLECTION_NAME")
.host("YOUR_HOST_URL")
// GRPC port of the Qdrant server
.port(6334)
.apiKey("YOUR_API_KEY")
.build();
```
`QdrantEmbeddingStore` supports all the semantic features of Langchain4J.
## Further Reading
- You can refer to the [Langchain4J examples](https://github.com/langchain4j/langchain4j-examples/) to get started.
- [Source Code](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-qdrant)
| documentation/frameworks/langchain4j.md |
---
title: Langchain
aliases:
- ../integrations/langchain/
- /documentation/overview/integrations/langchain/
---
# Langchain
Langchain is a library that makes developing Large Language Model-based applications much easier. It unifies the interfaces
to different libraries, including major embedding providers and Qdrant. Using Langchain, you can focus on the business value instead of writing the boilerplate.
Langchain distributes the Qdrant integration as a partner package.
It can be installed with pip:
```bash
pip install langchain-qdrant
```
The integration supports searching for relevant documents using dense, sparse, and hybrid retrieval.
Qdrant acts as a vector index that may store the embeddings with the documents used to generate them. There are various ways to use it, but calling `QdrantVectorStore.from_texts` or `QdrantVectorStore.from_documents` is probably the most straightforward way to get started:
```python
from langchain_qdrant import QdrantVectorStore
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
doc_store = QdrantVectorStore.from_texts(
texts, embeddings, url="<qdrant-url>", api_key="<qdrant-api-key>", collection_name="texts"
)
```
## Using an existing collection
To get an instance of `langchain_qdrant.QdrantVectorStore` without loading any new documents or texts, you can use the `QdrantVectorStore.from_existing_collection()` method.
```python
doc_store = QdrantVectorStore.from_existing_collection(
embeddings=embeddings,
collection_name="my_documents",
url="<qdrant-url>",
api_key="<qdrant-api-key>",
)
```
## Local mode
The Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things
out and debugging, or if you plan to store just a small number of vectors. The embeddings might be fully kept in memory or
persisted on disk.
### In-memory
For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the
client is destroyed - usually at the end of your script/notebook.
```python
qdrant = QdrantVectorStore.from_documents(
docs,
embeddings,
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
```
### On-disk storage
Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs.
```python
qdrant = QdrantVectorStore.from_documents(
docs,
embeddings,
path="/tmp/local_qdrant",
collection_name="my_documents",
)
```
### On-premise server deployment
No matter if you choose to launch Qdrant locally with [a Docker container](/documentation/guides/installation/), or
select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're
going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service.
```python
url = "<---qdrant url here --->"
qdrant = QdrantVectorStore.from_documents(
docs,
embeddings,
    url=url,
prefer_grpc=True,
collection_name="my_documents",
)
```
## Similarity search
`QdrantVectorStore` supports 3 modes for similarity searches. They can be configured using the `retrieval_mode` parameter when setting up the class.
- Dense Vector Search (default)
- Sparse Vector Search
- Hybrid Search
### Dense Vector Search
To search with only dense vectors,
- The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE` (default).
- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter.
```py
from langchain_qdrant import RetrievalMode
qdrant = QdrantVectorStore.from_documents(
docs,
embedding=embeddings,
location=":memory:",
collection_name="my_documents",
retrieval_mode=RetrievalMode.DENSE,
)
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
```
### Sparse Vector Search
To search with only sparse vectors,
- The `retrieval_mode` parameter should be set to `RetrievalMode.SPARSE`.
- An implementation of the [SparseEmbeddings interface](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.
The `langchain-qdrant` package provides a [FastEmbed](https://github.com/qdrant/fastembed) based implementation out of the box.
To use it, install the [FastEmbed package](https://github.com/qdrant/fastembed#-installation).
```python
from langchain_qdrant import FastEmbedSparse, RetrievalMode
sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")
qdrant = QdrantVectorStore.from_documents(
docs,
sparse_embedding=sparse_embeddings,
location=":memory:",
collection_name="my_documents",
retrieval_mode=RetrievalMode.SPARSE,
)
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
```
### Hybrid Vector Search
To perform a hybrid search using dense and sparse vectors with score fusion,
- The `retrieval_mode` parameter should be set to `RetrievalMode.HYBRID`.
- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter.
- An implementation of the [SparseEmbeddings interface](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.
```python
from langchain_qdrant import FastEmbedSparse, RetrievalMode
sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")
qdrant = QdrantVectorStore.from_documents(
docs,
embedding=embeddings,
sparse_embedding=sparse_embeddings,
location=":memory:",
collection_name="my_documents",
retrieval_mode=RetrievalMode.HYBRID,
)
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
```
Note that if you've added documents with the HYBRID mode, you can switch to any retrieval mode when searching, since both the dense and sparse vectors are available in the collection, as sketched below.
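For example, you can reconnect to the same collection and run a dense-only search. The sketch below mirrors the `from_existing_collection()` call shown earlier, with the same URL and API key placeholders:
```python
from langchain_qdrant import RetrievalMode

dense_store = QdrantVectorStore.from_existing_collection(
    embeddings=embeddings,
    collection_name="my_documents",
    url="<qdrant-url>",
    api_key="<qdrant-api-key>",
    retrieval_mode=RetrievalMode.DENSE,
)
found_docs = dense_store.similarity_search("What did the president say about Ketanji Brown Jackson")
```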
## Next steps
If you'd like to know more about running Qdrant in a Langchain-based application, please read our article
[Question Answering with Langchain and Qdrant without boilerplate](/articles/langchain-integration/). Some more information
might also be found in the [Langchain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant).
- [Source Code](https://github.com/langchain-ai/langchain/tree/master/libs%2Fpartners%2Fqdrant)
| documentation/frameworks/langchain.md |
---
title: LlamaIndex
aliases:
- ../integrations/llama-index/
- /documentation/overview/integrations/llama-index/
---
# LlamaIndex
Llama Index acts as an interface between your external data and Large Language Models. So you can bring your
private data and augment LLMs with it. LlamaIndex simplifies data ingestion and indexing, integrating Qdrant as a vector index.
Installing Llama Index is straightforward if we use pip as a package manager. Qdrant is not installed by default, so we need to
install it separately. The integration of both tools also comes as another package.
```bash
pip install llama-index llama-index-vector-stores-qdrant
```
Llama Index requires providing an instance of `QdrantClient`, so it can interact with the Qdrant server.
```python
from llama_index.core.indices.vector_store.base import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
import qdrant_client
client = qdrant_client.QdrantClient(
"<qdrant-url>",
api_key="<qdrant-api-key>", # For Qdrant Cloud, None for local instance
)
vector_store = QdrantVectorStore(client=client, collection_name="documents")
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
```
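From there, the index can be queried through a query engine. The snippet below assumes an LLM is configured (LlamaIndex defaults to OpenAI, so `OPENAI_API_KEY` must be set) and that the collection already contains indexed documents:
```python
query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about Qdrant?")
print(response)
```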
## Further Reading
- [LlamaIndex Documentation](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo/)
- [Example Notebook](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/QdrantIndexDemo.ipynb)
- [Source Code](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/vector_stores/llama-index-vector-stores-qdrant)
| documentation/frameworks/llama-index.md |
---
title: DocArray
aliases: [ ../integrations/docarray/ ]
---
# DocArray
You can use Qdrant natively in DocArray, where Qdrant serves as a high-performance document store to enable scalable vector search.
DocArray is a library from Jina AI for nested, unstructured data in transit, including text, image, audio, video, 3D mesh, etc.
It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer the data with a Pythonic API.
To install DocArray with Qdrant support, run:
```bash
pip install "docarray[qdrant]"
```
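Below is a minimal sketch of the Qdrant-backed document index from DocArray v2, assuming a Qdrant instance reachable on `localhost`; the schema and field names are illustrative:
```python
import numpy as np
from docarray import BaseDoc
from docarray.index import QdrantDocumentIndex
from docarray.typing import NdArray


class MyDoc(BaseDoc):
    text: str
    embedding: NdArray[128]


# Connect the document index to a running Qdrant instance
doc_index = QdrantDocumentIndex[MyDoc](host="localhost")

# Index a few documents with random vectors for demonstration
doc_index.index([MyDoc(text=f"doc {i}", embedding=np.random.rand(128)) for i in range(100)])

# Vector search over the "embedding" field
matches, scores = doc_index.find(np.random.rand(128), search_field="embedding", limit=5)
```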
## Further Reading
- [DocArray documentation](https://docarray.jina.ai/advanced/document-store/qdrant/)
- [Source Code](https://github.com/docarray/docarray/blob/main/docarray/index/backends/qdrant.py)
| documentation/frameworks/docarray.md |
---
title: Pandas-AI
---
# Pandas-AI
Pandas-AI is a Python library that uses a generative AI model to interpret natural language queries and translate them into Python code to interact with pandas data frames and return the final results to the user.
## Installation
```console
pip install pandasai[qdrant]
```
## Usage
You can begin a conversation by instantiating an `Agent` instance based on your Pandas data frame. The default Pandas-AI LLM requires an [API key](https://pandabi.ai).
You can find the list of all supported LLMs [here](https://docs.pandas-ai.com/en/latest/LLMs/llms/).
```python
import os
import pandas as pd
from pandasai import Agent
# Sample DataFrame
sales_by_country = pd.DataFrame(
{
"country": [
"United States",
"United Kingdom",
"France",
"Germany",
"Italy",
"Spain",
"Canada",
"Australia",
"Japan",
"China",
],
"sales": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000],
}
)
os.environ["PANDASAI_API_KEY"] = "YOUR_API_KEY"
agent = Agent(sales_by_country)
agent.chat("Which are the top 5 countries by sales?")
# OUTPUT: China, United States, Japan, Germany, Australia
```
## Qdrant support
You can train Pandas-AI to understand your data better and improve the quality of the results.
Qdrant can be configured as a vector store to ingest training data and retrieve semantically relevant content.
```python
from pandasai.ee.vectorstores.qdrant import Qdrant
qdrant = Qdrant(
collection_name="<SOME_COLLECTION>",
embedding_model="sentence-transformers/all-MiniLM-L6-v2",
url="http://localhost:6333",
grpc_port=6334,
prefer_grpc=True
)
agent = Agent(df, vector_store=qdrant)
# Train with custom information
agent.train(docs="The fiscal year starts in April")
# Train the q/a pairs of code snippets
query = "What are the total sales for the current fiscal year?"
response = """
import pandas as pd
df = dfs[0]
# Calculate the total sales for the current fiscal year
total_sales = df[df['date'] >= pd.to_datetime('today').replace(month=4, day=1)]['sales'].sum()
result = { "type": "number", "value": total_sales }
"""
agent.train(queries=[query], codes=[response])
# The model will use the information provided in the training to generate a response
```
## Further reading
- [Getting Started with Pandas-AI](https://pandasai-docs.readthedocs.io/en/latest/getting-started/)
- [Pandas-AI Reference](https://pandasai-docs.readthedocs.io/en/latest/)
- [Source Code](https://github.com/Sinaptik-AI/pandas-ai/blob/main/pandasai/ee/vectorstores/qdrant.py)
| documentation/frameworks/pandas-ai.md |
---
title: MemGPT
---
# MemGPT
[MemGPT](https://memgpt.ai/) is a system that enables LLMs to manage their own memory and overcome limited context windows to:
- Create perpetual chatbots that learn about you and change their personalities over time.
- Create perpetual chatbots that can interface with large data stores.
Qdrant is available as a storage backend in MemGPT for storing and semantically retrieving data.
## Usage
#### Installation
To install the required dependencies, install `pymemgpt` with the `qdrant` extra.
```sh
pip install 'pymemgpt[qdrant]'
```
You can configure MemGPT to use either a Qdrant server or an in-memory instance with the `memgpt configure` command.
#### Configuring the Qdrant server
When you run `memgpt configure`, go through the prompts as described in the [MemGPT configuration documentation](https://memgpt.readme.io/docs/config).
After you address several `memgpt` questions, you come to the following `memgpt` prompts:
```console
? Select storage backend for archival data: qdrant
? Select Qdrant backend: server
? Enter the Qdrant instance URI (Default: localhost:6333): https://xyz-example.eu-central.aws.cloud.qdrant.io
```
You can set an API key for authentication using the `QDRANT_API_KEY` environment variable.
#### Configuring an in-memory instance
```console
? Select storage backend for archival data: qdrant
? Select Qdrant backend: local
```
The data is persisted at the default MemGPT storage directory.
## Further Reading
- [MemGPT Examples](https://github.com/cpacker/MemGPT/tree/main/examples)
- [MemGPT Documentation](https://memgpt.readme.io/docs/index).
| documentation/frameworks/memgpt.md |
---
title: Vanna.AI
---
# Vanna.AI
[Vanna](https://vanna.ai/) is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs.
Vanna works in two easy steps - train a RAG "model" on your data, and then ask questions which will return SQL queries that can be set up to automatically run on your database.
Qdrant is available as a supported vector store for ingesting and retrieving your RAG data.
## Installation
```console
pip install 'vanna[qdrant]'
```
## Setup
You can set up a Vanna agent using Qdrant as your vector store and any of the [LLMs supported by Vanna](https://vanna.ai/docs/postgres-openai-vanna-vannadb/).
We'll use OpenAI for demonstration.
```python
from vanna.openai import OpenAI_Chat
from vanna.qdrant import Qdrant_VectorStore
from qdrant_client import QdrantClient
class MyVanna(Qdrant_VectorStore, OpenAI_Chat):
def __init__(self, config=None):
Qdrant_VectorStore.__init__(self, config=config)
OpenAI_Chat.__init__(self, config=config)
vn = MyVanna(config={
'client': QdrantClient(...),
    'api_key': 'sk-...',
    'model': 'gpt-4-...',
})
```
## Usage
Once a Vanna agent is instantiated, you can connect it to [any SQL database](https://vanna.ai/docs/FAQ/#can-i-use-this-with-my-sql-database) of your choosing.
For example, Postgres.
```python
vn.connect_to_postgres(host='my-host', dbname='my-dbname', user='my-user', password='my-password', port='my-port')
```
You can now train and begin querying your database with SQL.
```python
# You can add DDL statements that specify table names, column names, types, and potentially relationships
vn.train(ddl="""
CREATE TABLE IF NOT EXISTS my-table (
id INT PRIMARY KEY,
name VARCHAR(100),
age INT
)
""")
# You can add documentation about your business terminology or definitions.
vn.train(documentation="Our business defines OTIF score as the percentage of orders that are delivered on time and in full")
# You can also add SQL queries to your training data. This is useful if you have some queries already laying around.
vn.train(sql="SELECT * FROM my-table WHERE name = 'John Doe'")
# You can remove training data if there's obsolete/incorrect information.
vn.remove_training_data(id='1-ddl')
# Whenever you ask a new question, Vanna will retrieve the 10 most relevant pieces of training data and use them as part of the LLM prompt to generate the SQL.
vn.ask(question="<YOUR_QUESTION>")
```
## Further reading
- [Getting started with Vanna.AI](https://vanna.ai/docs/app/)
- [Vanna.AI documentation](https://vanna.ai/docs/)
- [Source Code](https://github.com/vanna-ai/vanna/tree/main/src/vanna/qdrant)
| documentation/frameworks/vanna-ai.md |
---
title: Spring AI
---
# Spring AI
[Spring AI](https://docs.spring.io/spring-ai/reference/) is a Java framework that provides a [Spring-friendly](https://spring.io/) API and abstractions for developing AI applications.
Qdrant is available as a supported vector database for use within your Spring AI projects.
## Installation
You can find the Spring AI installation instructions [here](https://docs.spring.io/spring-ai/reference/getting-started.html).
Add the Qdrant Spring Boot starter package.
```xml
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-qdrant-store-spring-boot-starter</artifactId>
</dependency>
```
## Usage
Configure Qdrant with Spring Boot’s `application.properties`.
```
spring.ai.vectorstore.qdrant.host=<host of your qdrant instance>
spring.ai.vectorstore.qdrant.port=<the GRPC port of your qdrant instance>
spring.ai.vectorstore.qdrant.api-key=<your api key>
spring.ai.vectorstore.qdrant.collection-name=<The name of the collection to use in Qdrant>
```
Learn more about these options in the [configuration reference](https://docs.spring.io/spring-ai/reference/api/vectordbs/qdrant.html#qdrant-vectorstore-properties).
Or you can set up the Qdrant vector store with the `QdrantVectorStoreConfig` options.
```java
@Bean
public QdrantVectorStoreConfig qdrantVectorStoreConfig() {
return QdrantVectorStoreConfig.builder()
.withHost("<QDRANT_HOSTNAME>")
.withPort(<QDRANT_GRPC_PORT>)
.withCollectionName("<QDRANT_COLLECTION_NAME>")
.withApiKey("<QDRANT_API_KEY>")
.build();
}
```
Build the vector store using the config and any of the supported [Spring AI embedding providers](https://docs.spring.io/spring-ai/reference/api/embeddings.html#available-implementations).
```java
@Bean
public VectorStore vectorStore(QdrantVectorStoreConfig config, EmbeddingClient embeddingClient) {
return new QdrantVectorStore(config, embeddingClient);
}
```
You can now use the `VectorStore` instance backed by Qdrant as a vector store in the Spring AI APIs.
<aside role="status">If the collection is not <a href="/documentation/concepts/collections/#create-a-collection">created in advance</a>, <code>QdrantVectorStore</code> will attempt to create one using cosine similarity and the dimension of the configured <code>EmbeddingClient</code>.</aside>
## 📚 Further Reading
- Spring AI [Qdrant reference](https://docs.spring.io/spring-ai/reference/api/vectordbs/qdrant.html)
- Spring AI [API reference](https://docs.spring.io/spring-ai/reference/index.html)
- [Source Code](https://github.com/spring-projects/spring-ai/tree/main/vector-stores/spring-ai-qdrant-store)
| documentation/frameworks/spring-ai.md |
---
title: Autogen
aliases: [ ../integrations/autogen/ ]
---
# Microsoft Autogen
[AutoGen](https://github.com/microsoft/autogen) is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
- Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
- Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
- Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
With the Autogen-Qdrant integration, you can use the `QdrantRetrieveUserProxyAgent` from autogen to build retrieval augmented generation (RAG) services with ease.
## Installation
```bash
pip install "pyautogen[retrievechat]" "qdrant_client[fastembed]"
```
## Usage
Below is a demo application that generates code based on context, without human feedback.
#### Set your API Endpoint
The `config_list_from_json` function loads a list of configurations from an environment variable or a JSON file.
```python
import autogen
from autogen import config_list_from_json
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent
from qdrant_client import QdrantClient
config_list = config_list_from_json(
env_or_file="OAI_CONFIG_LIST",
file_location="."
)
```
It first looks for the environment variable "OAI_CONFIG_LIST" which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named "OAI_CONFIG_LIST". The file structure sample can be found [here](https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample).
#### Construct agents for RetrieveChat
We start by initializing the RetrieveAssistantAgent and QdrantRetrieveUserProxyAgent. The system message needs to be set to "You are a helpful assistant." for RetrieveAssistantAgent. The detailed instructions are given in the user message.
```python
# Print the generation steps
autogen.ChatCompletion.start_logging()
# 1. create a RetrieveAssistantAgent instance named "assistant"
assistant = RetrieveAssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
llm_config={
"request_timeout": 600,
"seed": 42,
"config_list": config_list,
},
)
# 2. create a QdrantRetrieveUserProxyAgent instance named "qdrantagent"
# By default, the human_input_mode is "ALWAYS", i.e. the agent will ask for human input at every step.
# `docs_path` is the path to the docs directory.
# `task` indicates the kind of task we're working on.
# `chunk_token_size` is the chunk token size for the retrieve chat.
# We use an in-memory QdrantClient instance here. Not recommended for production.
rag_proxy_agent = QdrantRetrieveUserProxyAgent(
name="qdrantagent",
human_input_mode="NEVER",
max_consecutive_auto_reply=10,
retrieve_config={
"task": "code",
"docs_path": "./path/to/docs",
"chunk_token_size": 2000,
"model": config_list[0]["model"],
"client": QdrantClient(":memory:"),
"embedding_model": "BAAI/bge-small-en-v1.5",
},
)
```
#### Run the retriever service
```python
# Always reset the assistant before starting a new conversation.
assistant.reset()
# We use the rag_proxy_agent to generate a prompt to be sent to the assistant as the initial message.
# The assistant receives the message and generates a response. The response will be sent back to the rag_proxy_agent for processing.
# The conversation continues until the termination condition is met. In RetrieveChat, with no human in the loop, the termination condition is that no code block is detected in the response.
# The query used below is for demonstration. It should usually be related to the docs made available to the agent
code_problem = "How can I use FLAML to perform a classification task?"
rag_proxy_agent.initiate_chat(assistant, problem=code_problem)
```
## Next steps
- Autogen [examples](https://microsoft.github.io/autogen/docs/Examples)
- AutoGen [documentation](https://microsoft.github.io/autogen/)
- [Source Code](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/qdrant_retrieve_user_proxy_agent.py)
| documentation/frameworks/autogen.md |
---
title: txtai
aliases: [ ../integrations/txtai/ ]
---
# txtai
Qdrant can also be used as an embeddings backend in [txtai](https://neuml.github.io/txtai/) semantic applications.
txtai simplifies building AI-powered semantic search applications using Transformers. It leverages neural embeddings and their
properties to encode high-dimensional data in a lower-dimensional space and allows finding similar objects based on their embeddings'
proximity.
Qdrant is not a built-in txtai backend and requires installing an additional dependency:
```bash
pip install qdrant-txtai
```
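A minimal configuration sketch is shown below; the backend class path comes from the `qdrant-txtai` package, the embedding model is just an example, and connection details (URL, API key, and so on) are configured as described in the repository linked below:
```python
from txtai.embeddings import Embeddings

embeddings = Embeddings(
    {
        "path": "sentence-transformers/all-MiniLM-L6-v2",
        "backend": "qdrant_txtai.ann.qdrant.Qdrant",
    }
)

# Index a few (id, text, tags) tuples and run a semantic search
embeddings.index([(0, "Qdrant is a vector similarity search engine", None)])
print(embeddings.search("vector search", 1))
```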
Examples and more information can be found in the [qdrant-txtai repository](https://github.com/qdrant/qdrant-txtai).
| documentation/frameworks/txtai.md |
---
title: Frameworks
weight: 15
---
## Framework Integrations
| Framework | Description |
| ------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [AutoGen](./autogen/) | Framework from Microsoft building LLM applications using multiple conversational agents. |
| [Canopy](./canopy/) | Framework from Pinecone for building RAG applications using LLMs and knowledge bases. |
| [Cheshire Cat](./cheshire-cat/) | Framework to create personalized AI assistants using custom data. |
| [DocArray](./docarray/) | Python library for managing data in multi-modal AI applications. |
| [DSPy](./dspy/) | Framework for algorithmically optimizing LM prompts and weights. |
| [Fifty-One](./fifty-one/) | Toolkit for building high-quality datasets and computer vision models. |
| [Genkit](./genkit/) | Framework to build, deploy, and monitor production-ready AI-powered apps. |
| [Haystack](./haystack/) | LLM orchestration framework to build customizable, production-ready LLM applications. |
| [Langchain](./langchain/) | Python framework for building context-aware, reasoning applications using LLMs. |
| [Langchain-Go](./langchain-go/) | Go framework for building context-aware, reasoning applications using LLMs. |
| [Langchain4j](./langchain4j/) | Java framework for building context-aware, reasoning applications using LLMs. |
| [LlamaIndex](./llama-index/) | A data framework for building LLM applications with modular integrations. |
| [MemGPT](./memgpt/) | System to build LLM agents with long term memory & custom tools |
| [Pandas-AI](./pandas-ai/) | Python library to query/visualize your data (CSV, XLSX, PostgreSQL, etc.) in natural language |
| [Semantic Router](./semantic-router/) | Python library to build a decision-making layer for AI applications using vector search. |
| [Spring AI](./spring-ai/) | Java AI framework for building with Spring design principles such as portability and modular design. |
| [Testcontainers](./testcontainers/) | Set of frameworks for running containerized dependencies in tests. |
| [txtai](./txtai/) | Python library for semantic search, LLM orchestration and language model workflows. |
| [Vanna AI](./vanna-ai/) | Python RAG framework for SQL generation and querying. |
| documentation/frameworks/_index.md |
---
title: Haystack
aliases:
- ../integrations/haystack/
- /documentation/overview/integrations/haystack/
---
# Haystack
[Haystack](https://haystack.deepset.ai/) serves as a comprehensive NLP framework, offering a modular methodology for constructing
cutting-edge generative AI, QA, and semantic knowledge base search systems. A critical element in contemporary NLP systems is an
efficient database for storing and retrieving extensive text data. Vector databases excel in this role, as they house vector
representations of text and implement effective methods for swift retrieval. Thus, we are happy to announce the integration
with Haystack - `QdrantDocumentStore`. This document store is unique, as it is maintained externally by the Qdrant team.
The new document store comes as a separate package and can be updated independently of Haystack:
```bash
pip install qdrant-haystack
```
`QdrantDocumentStore` supports [all the configuration properties](/documentation/concepts/collections/#create-a-collection) available in
the Qdrant Python client. If you want to customize the default configuration of the collection used under the hood, you can
provide those settings when you create an instance of the `QdrantDocumentStore`. For example, if you'd like to enable
Scalar Quantization, you can do it in the following way:
```python
from qdrant_haystack.document_stores import QdrantDocumentStore
from qdrant_client import models
document_store = QdrantDocumentStore(
":memory:",
index="Document",
embedding_dim=512,
recreate_index=True,
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
quantile=0.99,
always_ram=True,
),
),
)
```
## Further Reading
- [Haystack Documentation](https://haystack.deepset.ai/integrations/qdrant-document-store)
- [Source Code](https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/qdrant)
| documentation/frameworks/haystack.md |
---
title: Cheshire Cat
aliases: [ ../integrations/cheshire-cat/ ]
---
# Cheshire Cat
[Cheshire Cat](https://cheshirecat.ai/) is an open-source framework that allows you to develop intelligent agents on top of many Large Language Models (LLM). You can develop your custom AI architecture to assist you in a wide range of tasks.
![Cheshire cat](/documentation/frameworks/cheshire-cat/cat.jpg)
## Cheshire Cat and Qdrant
Cheshire Cat uses Qdrant as the default [Vector Memory](https://cheshire-cat-ai.github.io/docs/faq/llm-concepts/vector-memory/) for ingesting and retrieving documents.
```
# Decide host and port for your Cat. Default will be localhost:1865
CORE_HOST=localhost
CORE_PORT=1865
# Qdrant server
# QDRANT_HOST=localhost
# QDRANT_PORT=6333
```
Cheshire Cat takes great advantage of the following features of Qdrant:
* [Collection Aliases](../../concepts/collections/#collection-aliases) to manage the change from one embedder to another.
* [Quantization](../../guides/quantization/) to obtain a good balance between speed, memory usage and quality of the results.
* [Snapshots](../../concepts/snapshots/) to not miss any information.
* [Community](https://discord.com/invite/tdtYvXjC4h)
![RAG Pipeline](/documentation/frameworks/cheshire-cat/stregatto.jpg)
## How to use the Cheshire Cat
### Requirements
To run the Cheshire Cat, you need to have [Docker](https://docs.docker.com/engine/install/) and [docker-compose](https://docs.docker.com/compose/install/) already installed on your system.
```shell
docker run --rm -it -p 1865:80 ghcr.io/cheshire-cat-ai/core:latest
```
* Chat with the Cheshire Cat on [localhost:1865/admin](http://localhost:1865/admin).
* You can also interact via REST API and try out the endpoints on [localhost:1865/docs](http://localhost:1865/docs)
Check the [instructions on github](https://github.com/cheshire-cat-ai/core/blob/main/README.md) for a more comprehensive quick start.
### First configuration of the LLM
* Open the Admin Portal in your browser at [localhost:1865/admin](http://localhost:1865/admin).
* Configure the LLM in the `Settings` tab.
* If you don't explicitly choose it using the `Settings` tab, the Embedder follows the LLM.
## Next steps
For more information, refer to the Cheshire Cat [documentation](https://cheshire-cat-ai.github.io/docs/) and [blog](https://cheshirecat.ai/blog/).
* [Getting started](https://cheshirecat.ai/hello-world/)
* [How the Cat works](https://cheshirecat.ai/how-the-cat-works/)
* [Write Your First Plugin](https://cheshirecat.ai/write-your-first-plugin/)
* [Cheshire Cat's use of Qdrant - Vector Space](https://cheshirecat.ai/dont-get-lost-in-vector-space/)
* [Cheshire Cat's use of Qdrant - Aliases](https://cheshirecat.ai/the-drunken-cat-effect/)
* [Discord Community](https://discord.com/invite/bHX5sNFCYU)
| documentation/frameworks/cheshire-cat.md |
---
title: Understanding Vector Search in Qdrant
weight: 1
social_preview_image: /docs/gettingstarted/vector-social.png
---
# How Does Vector Search Work in Qdrant?
<p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/mXNrhyw4q84?si=wruP9wWSa8JW4t78" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
If you are still trying to figure out how vector search works, please read ahead. This document describes how vector search is used, covers Qdrant's place in the larger ecosystem, and outlines how you can use Qdrant to augment your existing projects.
For those who want to start writing code right away, visit our [Complete Beginners tutorial](/documentation/tutorials/search-beginners/) to build a search engine in 5-15 minutes.
## A Brief History of Search
Human memory is unreliable. Thus, as long as we have been trying to collect ‘knowledge’ in written form, we had to figure out how to search for relevant content without rereading the same books repeatedly. That’s why some brilliant minds introduced the inverted index. In the simplest form, it’s an appendix to a book, typically put at its end, with a list of the essential terms and links to the pages they occur at. Terms are put in alphabetical order. Back in the day, that was a manually crafted list requiring lots of effort to prepare. Once digitalization started, it became a lot easier, but still, we kept the same general principles. That worked, and it still does.
If you are looking for a specific topic in a particular book, you can try to find a related phrase and quickly get to the correct page. Of course, assuming you know the proper term. If you don’t, you must try and fail several times or find somebody else to help you form the correct query.
{{< figure src=/docs/gettingstarted/inverted-index.png caption="A simplified version of the inverted index." >}}
Time passed, and we haven’t had much change in that area for quite a long time. But our textual data collection started to grow at a greater pace. So we also started building up many processes around those inverted indexes. For example, we allowed our users to provide many words and started splitting them into pieces. That allowed finding some documents which do not necessarily contain all the query words, but possibly part of them. We also started converting words into their root forms to cover more cases, removing stopwords, etc. Effectively we were becoming more and more user-friendly. Still, the idea behind the whole process is derived from the most straightforward keyword-based search known since the Middle Ages, with some tweaks.
{{< figure src=/docs/gettingstarted/tokenization.png caption="The process of tokenization with additional stopword removal and conversion of words to their root form." >}}
Technically speaking, we encode the documents and queries into so-called sparse vectors where each position has a corresponding word from the whole dictionary. If the input text contains a specific word, it gets a non-zero value at that position. But in reality, none of the texts will contain more than hundreds of different words. So the majority of vectors will have thousands of zeros and a few non-zero values. That’s why we call them sparse. And they might be already used to calculate some word-based similarity by finding the documents which have the biggest overlap.
{{< figure src=/docs/gettingstarted/query.png caption="An example of a query vectorized to sparse format." >}}
Sparse vectors have relatively **high dimensionality**, equal to the size of the dictionary. The dictionary is obtained automatically from the input data. So if we have a vector, we are able to partially reconstruct the words used in the text that created that vector.
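A toy sketch of the idea, with a five-word dictionary instead of one derived from a real corpus:
```python
dictionary = ["database", "engine", "qdrant", "search", "vector"]

def to_sparse(text: str) -> list[int]:
    tokens = text.lower().split()
    # One position per dictionary word; most positions stay at zero for real dictionaries
    return [tokens.count(word) for word in dictionary]

print(to_sparse("Qdrant is a vector search engine"))
# [0, 1, 1, 1, 1]
```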
## The Tower of Babel
Every once in a while, when we discover new problems with inverted indexes, we come up with a new heuristic to tackle it, at least to some extent. Once we realized that people might describe the same concept with different words, we started building lists of synonyms to convert the query to a normalized form. But that won’t work for the cases we didn’t foresee. Still, we need to craft and maintain our dictionaries manually, so they can support the language that changes over time. Another difficult issue comes to light with multilingual scenarios. Old methods require setting up separate pipelines and keeping humans in the loop to maintain the quality.
{{< figure src=/docs/gettingstarted/babel.jpg caption="The Tower of Babel, Pieter Bruegel." >}}
## The Representation Revolution
The latest research in Machine Learning for NLP is heavily focused on training Deep Language Models. In this process, the neural network takes a large corpus of text as input and creates a mathematical representation of the words in the form of vectors. These vectors are created in such a way that words with similar meanings and occurring in similar contexts are grouped together and represented by similar vectors. And we can also take, for example, an average of all the word vectors to create the vector for a whole text (e.g query, sentence, or paragraph).
![deep neural](/docs/gettingstarted/deep-neural.png)
We can take those **dense vectors** produced by the network and use them as a **different data representation**. They are dense because neural networks will rarely produce zeros at any position. In contrast to sparse ones, they have a relatively low dimensionality — hundreds or a few thousand only. Unfortunately, it is no longer possible to understand the content of a document just by looking at its vector, as the dimensions no longer represent the presence of specific words.
Dense vectors can capture the meaning, not the words used in a text. That being said, **Large Language Models can automatically handle synonyms**. Moreover, since those neural networks might have been trained with multilingual corpora, they translate the same sentence, written in different languages, into similar vector representations, also called **embeddings**. And we can compare them to find similar pieces of text by calculating the distance to other vectors in our database.
{{< figure src=/docs/gettingstarted/input.png caption="Input queries contain different words, but they are still converted into similar vector representations, because the neural encoder can capture the meaning of the sentences. That capability covers synonyms as well as different languages." >}}
**Vector search** is a process of finding similar objects based on their embeddings similarity. The good thing is, you don’t have to design and train your neural network on your own. Many pre-trained models are available, either on **HuggingFace** or by using libraries like [SentenceTransformers](https://www.sbert.net/?ref=hackernoon.com). If you, however, prefer not to get your hands dirty with neural models, you can also create the embeddings with SaaS tools, like [co.embed API](https://docs.cohere.com/reference/embed?ref=hackernoon.com).
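As a quick sketch, a pre-trained SentenceTransformers model turns two related sentences into nearby vectors; the model name below is just one of many available options:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

embeddings = model.encode(
    [
        "What is the capital of France?",
        "Paris is the capital and largest city of France.",
    ]
)

# Semantically related sentences end up close to each other in the vector space
print(util.cos_sim(embeddings[0], embeddings[1]))
```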
## Why Qdrant?
The challenge with vector search arises when we need to find similar documents in a big set of objects. If we want to find the closest examples, the naive approach would require calculating the distance to every document. That might work with dozens or even hundreds of examples but may become a bottleneck if we have more than that. When we work with relational data, we set up database indexes to speed things up and avoid full table scans. And the same is true for vector search. Qdrant is a fully-fledged vector database that speeds up the search process by using a graph-like structure to find the closest objects in sublinear time. So you don’t calculate the distance to every object from the database, but some candidates only.
{{< figure src=/docs/gettingstarted/vector-search.png caption="Vector search with Qdrant. Thanks to HNSW graph we are able to compare the distance to some of the objects from the database, not to all of them." >}}
While doing a semantic search at scale, because this is what we sometimes call the vector search done on texts, we need a specialized tool to do it effectively — a tool like Qdrant.
## Next Steps
Vector search is an exciting alternative to sparse methods. It solves the issues we had with the keyword-based search without needing to maintain lots of heuristics manually. It requires an additional component, a neural encoder, to convert text into vectors.
[**Tutorial 1 - Qdrant for Complete Beginners**](/documentation/tutorials/search-beginners/)
Despite its complicated background, vector search is extraordinarily simple to set up. With Qdrant, you can have a search engine up-and-running in five minutes. Our [Complete Beginners tutorial](../../tutorials/search-beginners/) will show you how.
[**Tutorial 2 - Question and Answer System**](/articles/qa-with-cohere-and-qdrant/)
However, you can also choose SaaS tools to generate them and avoid building your model. Setting up a vector search project with Qdrant Cloud and Cohere co.embed API is fairly easy if you follow the [Question and Answer system tutorial](/articles/qa-with-cohere-and-qdrant/).
There is another exciting thing about vector search. You can search for any kind of data as long as there is a neural network that would vectorize your data type. Do you think about a reverse image search? That’s also possible with vector embeddings.
| documentation/overview/vector-search.md |
---
title: What is Qdrant?
weight: 3
aliases:
- overview
---
# Introduction
Vector databases are a relatively new way of interacting with abstract data representations
derived from opaque machine learning models such as deep learning architectures. These
representations are often called vectors or embeddings and they are a compressed version of
the data used to train a machine learning model to accomplish a task like sentiment analysis,
speech recognition, object detection, and many others.
These new databases shine in many applications like [semantic search](https://en.wikipedia.org/wiki/Semantic_search)
and [recommendation systems](https://en.wikipedia.org/wiki/Recommender_system), and here, we'll
learn about one of the most popular and fastest growing vector databases in the market, [Qdrant](https://github.com/qdrant/qdrant).
## What is Qdrant?
[Qdrant](https://github.com/qdrant/qdrant) "is a vector similarity search engine that provides a production-ready
service with a convenient API to store, search, and manage points (i.e. vectors) with an additional
payload." You can think of the payloads as additional pieces of information that can help you
hone in on your search and also receive useful information that you can give to your users.
You can get started using Qdrant with the Python `qdrant-client`, by pulling the latest docker
image of `qdrant` and connecting to it locally, or by trying out [Qdrant's Cloud](https://cloud.qdrant.io/)
free tier option until you are ready to make the full switch.
With that out of the way, let's talk about what are vector databases.
## What Are Vector Databases?
![dbs](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/databases.png)
Vector databases are a type of database designed to store and query high-dimensional vectors
efficiently. In traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap)
databases (as seen in the image above), data is organized in rows and columns (and these are
called **Tables**), and queries are performed based on the values in those columns. However,
in certain applications including image recognition, natural language processing, and recommendation
systems, data is often represented as vectors in a high-dimensional space, and these vectors, plus
an id and a payload, are the elements we store in something called a **Collection** within a vector
database like Qdrant.
A vector in this context is a mathematical representation of an object or data point, where elements of
the vector implicitly or explicitly correspond to specific features or attributes of the object. For example,
in an image recognition system, a vector could represent an image, with each element of the vector
representing a pixel value or a descriptor/characteristic of that pixel. In a music recommendation
system, each vector could represent a song, and elements of the vector would capture song characteristics
such as tempo, genre, lyrics, and so on.
Vector databases are optimized for **storing** and **querying** these high-dimensional vectors
efficiently, and they often use specialized data structures and indexing techniques such as
Hierarchical Navigable Small World (HNSW) -- which is used to implement Approximate Nearest
Neighbors -- and Product Quantization, among others. These databases enable fast similarity
and semantic search while allowing users to find vectors that are the closest to a given query
vector based on some distance metric. The most commonly used distance metrics are Euclidean
Distance, Cosine Similarity, and Dot Product, and all three are fully supported by Qdrant.
Here's a quick overview of the three:
- [**Cosine Similarity**](https://en.wikipedia.org/wiki/Cosine_similarity) - Cosine similarity
is a way to measure how similar two vectors are. To simplify, it reflects whether the vectors
have the same direction (similar) or are poles apart. Cosine similarity is often used with text representations
to compare how similar two documents or sentences are to each other. The output of cosine similarity ranges
from -1 to 1, where -1 means the two vectors are completely dissimilar, and 1 indicates maximum similarity.
- [**Dot Product**](https://en.wikipedia.org/wiki/Dot_product) - The dot product similarity metric is another way
of measuring how similar two vectors are. Unlike cosine similarity, it also considers the length of the vectors.
This might be important when, for example, vector representations of your documents are built
based on the term (word) frequencies. The dot product similarity is calculated by multiplying the respective values
in the two vectors and then summing those products. The higher the sum, the more similar the two vectors are.
If you normalize the vectors (so that their length equals 1), the dot product similarity will become
the cosine similarity.
- [**Euclidean Distance**](https://en.wikipedia.org/wiki/Euclidean_distance) - Euclidean
distance is a way to measure the distance between two points in space, similar to how we
measure the distance between two places on a map. It's calculated by finding the square root
of the sum of the squared differences between the two points' coordinates. This distance metric
is also commonly used in machine learning to measure how similar or dissimilar two vectors are.
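To make these concrete, here is a small sketch that computes all three metrics for two toy vectors with NumPy:
```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 3.0, 4.0])

dot_product = np.dot(a, b)  # 20.0
cosine_similarity = dot_product / (np.linalg.norm(a) * np.linalg.norm(b))  # ~0.993
euclidean_distance = np.linalg.norm(a - b)  # ~1.732

print(dot_product, cosine_similarity, euclidean_distance)
```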
Now that we know what vector databases are and how they are structurally different than other
databases, let's go over why they are important.
## Why do we need Vector Databases?
Vector databases play a crucial role in various applications that require similarity search, such
as recommendation systems, content-based image retrieval, and personalized search. By taking
advantage of their efficient indexing and searching techniques, vector databases enable faster
and more accurate retrieval of unstructured data already represented as vectors, which can
help put in front of users the most relevant results to their queries.
In addition, other benefits of using vector databases include:
1. Efficient storage and indexing of high-dimensional data.
2. Ability to handle large-scale datasets with billions of data points.
3. Support for real-time analytics and queries.
4. Ability to handle vectors derived from complex data types such as images, videos, and natural language text.
5. Improved performance and reduced latency in machine learning and AI applications.
6. Reduced development and deployment time and cost compared to building a custom solution.
Keep in mind that the specific benefits of using a vector database may vary depending on the
use case of your organization and the features of the database you ultimately choose.
Let's now evaluate, at a high-level, the way Qdrant is architected.
## High-Level Overview of Qdrant's Architecture
![qdrant](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/qdrant_overview_high_level.png)
The diagram above represents a high-level overview of some of the main components of Qdrant. Here
are the terminologies you should get familiar with.
- [Collections](../concepts/collections/): A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](../concepts/collections/#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have their own dimensionality and metric requirements.
- [Distance Metrics](https://en.wikipedia.org/wiki/Metric_space): These are used to measure
similarities among vectors, and they must be selected when you create a collection.
The choice of metric depends on the way the vectors were obtained and, in particular,
on the neural network that will be used to encode new queries.
- [Points](../concepts/points/): The points are the central entity that
Qdrant operates with; each point consists of an id, a vector, and an optional payload.
- Id: a unique identifier for each point.
- Vector: a high-dimensional representation of data, for example, an image, a sound, a document, a video, etc.
- [Payload](../concepts/payload/): A payload is a JSON object with additional data you can add to a vector.
- [Storage](../concepts/storage/): Qdrant can use one of two options for
storage: **In-memory** storage, which keeps all vectors in RAM and offers the highest speed
since disk access is required only for persistence, or **Memmap** storage, which creates a
virtual address space associated with the file on disk.
- Clients: the programming languages you can use to connect to Qdrant.
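To see how these pieces fit together, here is a minimal sketch using the Python client (assuming `qdrant-client` 1.10 or later is installed and a local instance is running on port 6333; the collection name, vector values, and payload are made up for illustration):

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

# The vector size and distance metric are fixed when the collection is created.
client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# A point: an id, a vector, and an optional JSON payload.
client.upsert(
    collection_name="documents",
    points=[
        PointStruct(
            id=1,
            vector=[0.05, 0.61, 0.76, 0.74],
            payload={"title": "Example document", "category": "demo"},
        )
    ],
)

# Query with a vector of the same dimensionality; payloads come back with the results.
results = client.query_points(
    collection_name="documents",
    query=[0.05, 0.61, 0.76, 0.74],
    limit=3,
)
print(results.points)
```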
## Next Steps
Now that you know more about vector databases and Qdrant, you are ready to get started with one
of our tutorials. If you've never used a vector database, go ahead and jump straight into
the **Getting Started** section. Conversely, if you are a seasoned developer in these
technologies, jump to the section most relevant to your use case.
As you go through the tutorials, please let us know in our
[Discord channel](https://qdrant.to/discord) if any questions come up. 😎
| documentation/overview/_index.md |
---
title: Qdrant Web UI
weight: 2
aliases:
- /documentation/web-ui/
---
# Qdrant Web UI
You can manage both local and cloud Qdrant deployments through the Web UI.
If you've set up a deployment locally with the Qdrant [Quickstart](/documentation/quick-start/),
navigate to http://localhost:6333/dashboard.
If you've set up a deployment in a cloud cluster, find your Cluster URL in your
cloud dashboard, at https://cloud.qdrant.io. Add `:6333/dashboard` to the end
of the URL.
## Access the Web UI
Qdrant's Web UI is an intuitive and efficient graphical interface for your Qdrant collections, REST API, and data points.
In the **Console**, you may use the REST API to interact with Qdrant, while in **Collections**, you can manage all the collections and upload Snapshots.
![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png)
### Qdrant Web UI features
In the Qdrant Web UI, you can:
- Run HTTP-based calls from the console
- List and search existing [collections](/documentation/concepts/collections/)
- Learn from our interactive tutorial
You can navigate to these options directly. For example, if you used our
[quick start](/documentation/quick-start/) to set up a cluster on localhost,
you can review our tutorial at http://localhost:6333/dashboard#/tutorial.
| documentation/interfaces/web-ui.md |
---
title: API & SDKs
weight: 6
aliases:
- /documentation/interfaces/
---
# Interfaces
Qdrant supports the following official clients.
> **Note:** If you are using a language that is not listed here, you can use the REST API directly or generate a client for your language
using [OpenAPI](https://github.com/qdrant/qdrant/blob/master/docs/redoc/master/openapi.json)
or [protobuf](https://github.com/qdrant/qdrant/tree/master/lib/api/src/grpc/proto) definitions.
## Client Libraries
||Client Repository|Installation|Version|
|-|-|-|-|
|[![python](/docs/misc/python.webp)](https://python-client.qdrant.tech/)|**[Python](https://github.com/qdrant/qdrant-client)** + **[(Client Docs)](https://python-client.qdrant.tech/)**|`pip install qdrant-client[fastembed]`|[Latest Release](https://github.com/qdrant/qdrant-client/releases)|
|![typescript](/docs/misc/ts.webp)|**[JavaScript / Typescript](https://github.com/qdrant/qdrant-js)**|`npm install @qdrant/js-client-rest`|[Latest Release](https://github.com/qdrant/qdrant-js/releases)|
|![rust](/docs/misc/rust.png)|**[Rust](https://github.com/qdrant/rust-client)**|`cargo add qdrant-client`|[Latest Release](https://github.com/qdrant/rust-client/releases)|
|![golang](/docs/misc/go.webp)|**[Go](https://github.com/qdrant/go-client)**|`go get github.com/qdrant/go-client`|[Latest Release](https://github.com/qdrant/go-client)|
|![.net](/docs/misc/dotnet.webp)|**[.NET](https://github.com/qdrant/qdrant-dotnet)**|`dotnet add package Qdrant.Client`|[Latest Release](https://github.com/qdrant/qdrant-dotnet/releases)|
|![java](/docs/misc/java.webp)|**[Java](https://github.com/qdrant/java-client)**|[Available on Maven Central](https://central.sonatype.com/artifact/io.qdrant/client)|[Latest Release](https://github.com/qdrant/java-client/releases)|
## API Reference
All interaction with Qdrant takes place via the REST API. We recommend using the REST API if you are using Qdrant for the first time or if you are working on a prototype.
| API | Documentation |
| -------- | ------------------------------------------------------------------------------------ |
| REST API | [OpenAPI Specification](https://api.qdrant.tech/api-reference) |
| gRPC API | [gRPC Documentation](https://github.com/qdrant/qdrant/blob/master/docs/grpc/docs.md) |
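As a quick illustration of the REST interface, the sketch below (assuming a local instance on port 6333 and the `requests` package installed) checks the running version and lists existing collections:

```python
import requests

QDRANT_URL = "http://localhost:6333"  # assumed local instance

# The root endpoint returns the service name and version.
print(requests.get(f"{QDRANT_URL}/").json())

# List all collections in the instance.
print(requests.get(f"{QDRANT_URL}/collections").json())
```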
### gRPC Interface
The gRPC methods follow the same principles as REST. For each REST endpoint, there is a corresponding gRPC method.
As per the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), the gRPC interface is available on the specified port.
```yaml
service:
grpc_port: 6334
```
<aside role="status">If you decide to use gRPC, you must expose the port when starting Qdrant.</aside>
Running the service inside Docker looks like this:
```bash
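# Expose both the REST API port (6333) and the gRPC port (6334)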
docker run -p 6333:6333 -p 6334:6334 \
-v $(pwd)/qdrant_storage:/qdrant/storage:z \
qdrant/qdrant
```
**When to use gRPC:** The choice between gRPC and the REST API is a trade-off between convenience and speed. gRPC is a binary protocol and can be more challenging to debug. We recommend using gRPC if you are already familiar with Qdrant and are trying to optimize the performance of your application.
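If you do opt for gRPC, the official clients can switch to it with a single option. Here is a minimal sketch with the Python client, assuming `qdrant-client` is installed and port 6334 is exposed as in the Docker command above:

```python
from qdrant_client import QdrantClient

# prefer_grpc=True routes operations over the binary gRPC interface on port 6334.
client = QdrantClient(host="localhost", grpc_port=6334, prefer_grpc=True)

print(client.get_collections())
```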
| documentation/interfaces/_index.md |
---
title: API Reference
weight: 1
type: external-link
external_url: https://api.qdrant.tech/api-reference
sitemapExclude: True
--- | documentation/interfaces/api-reference.md |
---
title: About Us
--- | about-us/_index.md |
---
title: Retrieval Augmented Generation (RAG)
description: Unlock the full potential of your AI with RAG powered by Qdrant. Dive into a new era of intelligent applications that understand and interact with unprecedented accuracy and depth.
startFree:
  text: Get Started
  url: https://cloud.qdrant.io/
learnMore:
  text: Contact Us
  url: /contact-us/
image:
  src: /img/vectors/vector-2.svg
  alt: Retrieval Augmented Generation
sitemapExclude: true
---
| retrieval-augmented-generation/retrieval-augmented-generation-hero.md |
---
title: RAG with Qdrant
description: RAG, powered by Qdrant's efficient data retrieval, elevates AI's capacity to generate rich, context-aware content across text, code, and multimedia, enhancing relevance and precision on a scalable platform. Discover why Qdrant is the perfect choice for your RAG project.
features:
- id: 0
  icon:
    src: /icons/outline/speedometer-blue.svg
    alt: Speedometer
  title: Highest RPS
  description: Qdrant leads with top requests-per-second, outperforming alternative vector databases in various datasets by up to 4x.
- id: 1
  icon:
    src: /icons/outline/time-blue.svg
    alt: Time
  title: Fast Retrieval
  description: "Qdrant achieves the lowest latency, ensuring quicker response times in data retrieval: 3ms response for 1M Open AI embeddings."
- id: 2
  icon:
    src: /icons/outline/vectors-blue.svg
    alt: Vectors
  title: Multi-Vector Support
  description: Integrate the strengths of multiple vectors per document, such as title and body, to create search experiences your customers admire.
- id: 3
  icon:
    src: /icons/outline/compression-blue.svg
    alt: Compression
  title: Built-in Compression
  description: Significantly reduce memory usage, improve search performance, and cut costs by up to 30x for high-dimensional vectors with Quantization.
sitemapExclude: true
---
| retrieval-augmented-generation/retrieval-augmented-generation-features.md |
---
title: Learn how to get started with Qdrant for your RAG use case
features:
- id: 0
  image:
    src: /img/retrieval-augmented-generation-use-cases/case1.svg
    srcMobile: /img/retrieval-augmented-generation-use-cases/case1-mobile.svg
    alt: Music recommendation
  title: Question and Answer System with LlamaIndex
  description: Combine Qdrant and LlamaIndex to create a self-updating Q&A system.
  link:
    text: Video Tutorial
    url: https://www.youtube.com/watch?v=id5ql-Abq4Y&t=56s
- id: 1
  image:
    src: /img/retrieval-augmented-generation-use-cases/case2.svg
    srcMobile: /img/retrieval-augmented-generation-use-cases/case2-mobile.svg
    alt: Food discovery
  title: Retrieval Augmented Generation with OpenAI and Qdrant
  description: Basic RAG pipeline with Qdrant and OpenAI SDKs.
  link:
    text: Learn More
    url: /articles/food-discovery-demo/
caseStudy:
  logo:
    src: /img/retrieval-augmented-generation-use-cases/customer-logo.svg
    alt: Logo
  title: See how Dust is using Qdrant for RAG
  description: Dust provides companies with the core platform to execute on their GenAI bet for their teams by deploying LLMs across the organization and providing context-aware AI assistants through RAG.
  link:
    text: Read Case Study
    url: /blog/dust-and-qdrant/
  image:
    src: /img/retrieval-augmented-generation-use-cases/case-study.png
    alt: Preview
sitemapExclude: true
---
| retrieval-augmented-generation/retrieval-augmented-generation-use-cases.md |
---
title: RAG Evaluation
descriptionFirstPart: Retrieval Augmented Generation (RAG) harnesses large language models to enhance content generation by effectively leveraging existing information. By amalgamating specific details from various sources, RAG facilitates accurate and relevant query results, making it invaluable across domains such as medical, finance, and academia for content creation, Q&A applications, and information synthesis.
descriptionSecondPart: However, evaluating RAG systems is essential to refine and optimize their performance, ensuring alignment with user expectations and validating their functionality.
image:
  src: /img/retrieval-augmented-generation-evaluation/become-a-partner-graphic.svg
  alt: Graphic
partnersTitle: "We work with the best in the industry on RAG evaluation:"
logos:
- id: 0
  icon:
    src: /img/retrieval-augmented-generation-evaluation/arize-logo.svg
    alt: Arize logo
- id: 1
  icon:
    src: /img/retrieval-augmented-generation-evaluation/ragas-logo.svg
    alt: Ragas logo
- id: 2
  icon:
    src: /img/retrieval-augmented-generation-evaluation/quotient-logo.svg
    alt: Quotient logo
sitemapExclude: true
---
| retrieval-augmented-generation/retrieval-augmented-generation-evaluation.md |
---
title: Qdrant integrates with all leading LLM providers and frameworks
integrations:
- id: 0
  icon:
    src: /img/integrations/integration-cohere.svg
    alt: Cohere logo
  title: Cohere
  description: Integrate Qdrant with Cohere's co.embed API and Python SDK.
- id: 1
  icon:
    src: /img/integrations/integration-gemini.svg
    alt: Gemini logo
  title: Gemini
  description: Connect Qdrant with Google's Gemini Embedding Model API seamlessly.
- id: 2
  icon:
    src: /img/integrations/integration-open-ai.svg
    alt: OpenAI logo
  title: OpenAI
  description: Easily integrate OpenAI embeddings with Qdrant using the official Python SDK.
- id: 3
  icon:
    src: /img/integrations/integration-aleph-alpha.svg
    alt: Aleph Alpha logo
  title: Aleph Alpha
  description: Integrate Qdrant with Aleph Alpha's multimodal, multilingual embeddings.
- id: 4
  icon:
    src: /img/integrations/integration-jina.svg
    alt: Jina logo
  title: Jina AI
  description: Easily integrate Qdrant with Jina AI's embeddings API.
- id: 5
  icon:
    src: /img/integrations/integration-aws.svg
    alt: AWS logo
  title: AWS Bedrock
  description: Utilize AWS Bedrock's embedding models with Qdrant seamlessly.
- id: 6
  icon:
    src: /img/integrations/integration-lang-chain.svg
    alt: LangChain logo
  title: LangChain
  description: Qdrant seamlessly integrates with LangChain for LLM development.
- id: 7
  icon:
    src: /img/integrations/integration-llama-index.svg
    alt: LlamaIndex logo
  title: LlamaIndex
  description: Qdrant integrates with LlamaIndex for efficient data indexing in LLMs.
sitemapExclude: true
---
| retrieval-augmented-generation/retrieval-augmented-generation-integrations.md |
---
title: "RAG Use Case: Advanced Vector Search for AI Applications"
description: "Learn how Qdrant's advanced vector search enhances Retrieval-Augmented Generation (RAG) AI applications, offering scalable and efficient solutions."
url: rag
build:
  render: always
cascade:
- build:
    list: local
    publishResources: false
    render: never
---
| retrieval-augmented-generation/_index.md |
---
title: Qdrant Hybrid Cloud
salesTitle: Hybrid Cloud
description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud.
cards:
- id: 0
  icon: /icons/outline/separate-blue.svg
  title: Deployment Flexibility
  description: Use your existing infrastructure, whether it be on cloud platforms, on-premise setups, or even at edge locations.
- id: 1
  icon: /icons/outline/money-growth-blue.svg
  title: Unmatched Cost Advantage
  description: Maximum deployment flexibility to leverage the best available resources, in the cloud or on-premise.
- id: 2
  icon: /icons/outline/switches-blue.svg
  title: Transparent Control
  description: Fully managed experience for your Qdrant clusters, while your data remains exclusively yours.
form:
  title: Connect with us
  # description:
  id: contact-sales-form
  hubspotFormOptions: '{
    "region": "eu1",
    "portalId": "139603372",
    "formId": "f583c7ea-15ff-4c57-9859-650b8f34f5d3",
    "submitButtonClass": "button button_contained"
    }'
logosSectionTitle: Qdrant is trusted by top-tier enterprises
---
| contact-hybrid-cloud/_index.md |