---
logos:
- /img/customers-logo/discord.svg
- /img/customers-logo/johnson-and-johnson.svg
- /img/customers-logo/perplexity.svg
- /img/customers-logo/mozilla.svg
- /img/customers-logo/voiceflow.svg
- /img/customers-logo/bosch-digital.svg
sitemapExclude: true
--- | customers/logo-cards-1.md |
---
review: “We looked at all the big options out there right now for vector databases, with our focus on ease of use, performance, pricing, and communication. <strong>Qdrant came out on top in each category...</strong> ultimately, it wasn't much of a contest.”
names: Alex Webb
positions: Director of Engineering, CB Insights
avatar:
src: /img/customers/alex-webb.svg
alt: Alex Webb Avatar
logo:
src: /img/brands/cb-insights.svg
alt: CB Insights Logo
sitemapExclude: true
---
| customers/customers-testimonial1.md |
---
title: Customers
description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing.
caseStudy:
logo:
src: /img/customers-case-studies/customer-logo.svg
alt: Logo
title: Recommendation Engine with Qdrant Vector Database
description: Dailymotion leverages Qdrant to optimize its <b>video recommendation engine</b>, managing over 420 million videos and processing 13 million recommendations daily. As a result, Dailymotion <b>reduced content processing times from hours to minutes</b> and <b>increased user interactions and click-through rates by more than 3x.</b>
link:
text: Read Case Study
url: /blog/case-study-dailymotion/
image:
src: /img/customers-case-studies/case-study.png
alt: Preview
cases:
- id: 0
logo:
src: /img/customers-case-studies/visua.svg
alt: Visua Logo
image:
src: /img/customers-case-studies/case-visua.png
alt: The hands of a person in a medical gown holding a tablet against the background of a pharmacy shop
title: VISUA improves its computer vision quality control process by 10x with anomaly detection.
link:
text: Read Story
url: /blog/case-study-visua/
- id: 1
logo:
src: /img/customers-case-studies/dust.svg
alt: Dust Logo
image:
src: /img/customers-case-studies/case-dust.png
alt: A man in a jeans shirt is holding a smartphone, only his hands are visible. In the foreground, there is an image of a robot surrounded by chat and sound waves.
title: Dust uses Qdrant for RAG, achieving millisecond retrieval, reducing costs by 50%, and boosting scalability.
link:
text: Read Story
url: /blog/dust-and-qdrant/
- id: 2
logo:
src: /img/customers-case-studies/iris-agent.svg
alt: IrisAgent Logo
image:
src: /img/customers-case-studies/case-iris-agent.png
alt: Hands holding a smartphone, styled smartphone interface visualisation in the foreground. First-person view
title: IrisAgent uses Qdrant for RAG to automate support and improve resolution times, transforming customer service.
link:
text: Read Story
url: /blog/iris-agent-qdrant/
sitemapExclude: true
---
| customers/customers-case-studies.md |
---
review: “We LOVE Qdrant! The exceptional engineering, strong business value, and outstanding team behind the product drove our choice. Thank you for your great contribution to the technology community!”
names: Kyle Tobin
positions: Principal, Cognizant
avatar:
src: /img/customers/kyle-tobin.png
alt: Kyle Tobin Avatar
logo:
src: /img/brands/cognizant.svg
alt: Cognizant Logo
sitemapExclude: true
---
| customers/customers-testimonial2.md |
---
logos:
- /img/customers-logo/gitbook.svg
- /img/customers-logo/deloitte.svg
- /img/customers-logo/disney.svg
sitemapExclude: true
--- | customers/logo-cards-3.md |
---
title: Vector Space Wall
link:
url: https://testimonial.to/qdrant/all
text: Submit Your Testimonial
testimonials:
- id: 0
name: Jonathan Eisenzopf
position: Chief Strategy and Research Officer at Talkmap
avatar:
src: /img/customers/jonathan-eisenzopf.svg
alt: Avatar
text: “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform on enterprise scale.”
- id: 1
name: Angel Luis Almaraz Sánchez
position: Full Stack | DevOps
avatar:
src: /img/customers/angel-luis-almaraz-sanchez.svg
alt: Avatar
text: Thank you, great work, Qdrant is my favorite option for similarity search.
- id: 2
name: Shubham Krishna
position: ML Engineer @ ML6
avatar:
src: /img/customers/shubham-krishna.svg
alt: Avatar
text: Go ahead and check out Qdrant. I plan to build a movie retrieval search where you can ask anything regarding a movie based on the vector embeddings generated by an LLM. It can also be used for getting recommendations.
- id: 3
name: Kwok Hing LEON
position: Data Science
avatar:
src: /img/customers/kwok-hing-leon.svg
alt: Avatar
text: Check out Qdrant for improving searches. Bye to non-semantic KM engines.
- id: 4
name: Ankur S
position: Building
avatar:
src: /img/customers/ankur-s.svg
alt: Avatar
text: Qdrant is a great vector database. There is a real sense of thought behind the API!
- id: 5
name: Yasin Salimibeni
position: AI Evangelist | Generative AI Product Designer | Entrepreneur | Mentor
avatar:
src: /img/customers/yasin-salimibeni-view-yasin-salimibeni.svg
alt: Avatar
text: Great work. I just started testing Qdrant Azure and I was impressed by the efficiency and speed. Being deploy-ready on large cloud providers is a great plus. Way to go!
- id: 6
name: Marcel Coetzee
position: Data and AI Plumber
avatar:
src: /img/customers/marcel-coetzee.svg
alt: Avatar
text: Using Qdrant as a blazing fast vector store for a stealth project of mine. It offers fantastic functionality for semantic search ✨
- id: 7
name: Andrew Rove
position: Principal Software Engineer
avatar:
src: /img/customers/andrew-rove.svg
alt: Avatar
text: We have been using Qdrant in production now for over 6 months to store vectors for cosine similarity search and it is way more stable and faster than our old ElasticSearch vector index.<br/><br/>No merging segments, no red indexes at random times. It just works and was super easy to deploy via docker to our cluster.<br/><br/>It’s faster, cheaper to host, and more stable, and open source to boot!
- id: 8
name: Josh Lloyd
position: ML Engineer
avatar:
src: /img/customers/josh-lloyd.svg
alt: Avatar
text: I'm using Qdrant to search through thousands of documents to find similar text phrases for question answering. Qdrant's awesome filtering allows me to slice along metadata while I'm at it! 🚀 and it's fast ⏩🔥
- id: 9
name: Leonard Püttmann
position: data scientist
avatar:
src: /img/customers/leonard-puttmann.svg
alt: Avatar
text: Amidst the hype around vector databases, Qdrant is by far my favorite one. It's super fast (written in Rust) and open-source! At Kern AI we use Qdrant for fast document retrieval and to do quick similarity search for text data.
- id: 10
name: Stanislas Polu
position: Software Engineer & Co-Founder, Dust
avatar:
src: /img/customers/stanislas-polu.svg
alt: Avatar
text: Qdrant's the best. By. Far.
- id: 11
name: Sivesh Sukumar
position: Investor at Balderton
avatar:
src: /img/customers/sivesh-sukumar.svg
alt: Avatar
text: We're using Qdrant to help segment and source Europe's next wave of extraordinary companies!
- id: 12
name: Saksham Gupta
position: AI Governance Machine Learning Engineer
avatar:
src: /img/customers/saksham-gupta.svg
alt: Avatar
text: Looking forward to using Qdrant vector similarity search in the clinical trial space! OpenAI Embeddings + Qdrant = Match made in heaven!
- id: 13
name: Rishav Dash
position: Data Scientist
avatar:
src: /img/customers/rishav-dash.svg
alt: Avatar
text: awesome stuff 🔥
sitemapExclude: true
---
| customers/customers-vector-space-wall.md |
---
title: Customers
description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing.
sitemapExclude: true
---
| customers/customers-hero.md |
---
title: Customers
description: Customers
build:
render: always
cascade:
- build:
list: local
publishResources: false
render: never
---
| customers/_index.md |
---
logos:
- /img/customers-logo/flipkart.svg
- /img/customers-logo/x.svg
- /img/customers-logo/quora.svg
sitemapExclude: true
--- | customers/logo-cards-2.md |
---
title: Qdrant Demos and Tutorials
description: Experience firsthand how Qdrant powers intelligent search, anomaly detection, and personalized recommendations, showcasing the full capabilities of vector search to revolutionize data exploration and insights.
cards:
- id: 0
title: Semantic Search Demo - Startup Search
paragraphs:
- id: 0
content: This demo leverages a pre-trained SentenceTransformer model to perform semantic searches on startup descriptions, transforming them into vectors for the Qdrant engine.
- id: 1
content: Enter a query to see how neural search compares to traditional full-text search, with the option to toggle neural search on and off for direct comparison.
link:
text: View Demo
url: https://qdrant.to/semantic-search-demo
- id: 1
title: Semantic Search and Recommendations Demo - Food Discovery
paragraphs:
- id: 0
content: Explore personalized meal recommendations with our demo, using Delivery Service data. Like or dislike dish photos to refine suggestions based on visual appeal.
- id: 1
content: Filter options allow for restaurant selections within your delivery area, tailoring your dining experience to your preferences.
link:
text: View Demo
url: https://food-discovery.qdrant.tech/
- id: 2
title: Categorization Demo -<br> E-Commerce Products
paragraphs:
- id: 0
content: Discover the power of vector databases in e-commerce through our demo. Simply input a product name and watch as our multi-language model intelligently categorizes it. The dots you see represent product clusters, highlighting our system's efficient categorization.
link:
text: View Demo
url: https://qdrant.to/extreme-classification-demo
- id: 3
title: Code Search Demo -<br> Explore Qdrant's Codebase
paragraphs:
- id: 0
content: Semantic search isn't just for natural language. By combining results from two models, Qdrant is able to locate relevant code snippets down to the exact line.
link:
text: View Demo
url: https://code-search.qdrant.tech/
--- | demo/_index.md |
---
content: Learn more about all features that are supported on Qdrant Cloud.
link:
text: Qdrant Features
url: /qdrant-vector-database/
sitemapExclude: true
---
| qdrant-cloud/qdrant-cloud-features-link.md |
---
title: Qdrant Cloud
description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure.
startFree:
text: Start Free
url: https://cloud.qdrant.io/
contactUs:
text: Contact us
url: /contact-us/
icon:
src: /icons/fill/lightning-purple.svg
alt: Lightning
content: "Learn how to get up and running in minutes:"
#video:
# src: /
# button: Watch Demo
# icon:
# src: /icons/outline/play-white.svg
# alt: Play
# preview: /img/qdrant-cloud-demo.png
sitemapExclude: true
---
| qdrant-cloud/qdrant-cloud-hero.md |
---
items:
- id: 0
title: Run Anywhere
description: Available on <b>AWS</b>, <b>Google Cloud</b>, and <b>Azure</b> regions globally for deployment flexibility and quick data access.
image:
src: /img/qdrant-cloud-bento-cards/run-anywhere-graphic.png
alt: Run anywhere graphic
- id: 1
title: Simple Setup and Start Free
description: Deploying a cluster via the Qdrant Cloud Console takes only a few seconds and scales up as needed.
image:
src: /img/qdrant-cloud-bento-cards/simple-setup-illustration.png
alt: Simple setup illustration
- id: 2
title: Efficient Resource Management
description: Dramatically reduce memory usage with built-in compression options and offload data to disk.
image:
src: /img/qdrant-cloud-bento-cards/efficient-resource-management.png
alt: Efficient resource management diagram
- id: 3
title: Zero-downtime Upgrades
description: Uninterrupted service during scaling and model updates for continuous operation and deployment flexibility.
link:
text: Cluster Scaling
url: /documentation/cloud/cluster-scaling/
image:
src: /img/qdrant-cloud-bento-cards/zero-downtime-upgrades.png
alt: Zero downtime upgrades illustration
- id: 4
title: Continuous Backups
description: Automated, configurable backups for data safety and easy restoration to previous states.
link:
text: Backups
url: /documentation/cloud/backups/
image:
src: /img/qdrant-cloud-bento-cards/continuous-backups.png
alt: Continuous backups illustration
sitemapExclude: true
---
| qdrant-cloud/qdrant-cloud-bento-cards.md |
---
title: "Qdrant Cloud: Scalable Managed Cloud Services"
url: cloud
description: "Discover Qdrant Cloud, the cutting-edge managed cloud for scalable, high-performance AI applications. Manage and deploy your vector data with ease today."
build:
render: always
cascade:
- build:
list: local
publishResources: false
render: never
---
| qdrant-cloud/_index.md |
---
logo:
title: Our Logo
description: "The Qdrant logo represents a paramount expression of our core brand identity. With consistent placement, sizing, clear space, and color usage, our logo affirms its recognition across all platforms."
logoCards:
- id: 0
logo:
src: /img/brand-resources-logos/logo.svg
alt: Logo Full Color
title: Logo Full Color
link:
url: /img/brand-resources-logos/logo.svg
text: Download
- id: 1
logo:
src: /img/brand-resources-logos/logo-black.svg
alt: Logo Black
title: Logo Black
link:
url: /img/brand-resources-logos/logo-black.svg
text: Download
- id: 2
logo:
src: /img/brand-resources-logos/logo-white.svg
alt: Logo White
title: Logo White
link:
url: /img/brand-resources-logos/logo-white.svg
text: Download
logomarkTitle: Logomark
logomarkCards:
- id: 0
logo:
src: /img/brand-resources-logos/logomark.svg
alt: Logomark Full Color
title: Logomark Full Color
link:
url: /img/brand-resources-logos/logomark.svg
text: Download
- id: 1
logo:
src: /img/brand-resources-logos/logomark-black.svg
alt: Logomark Black
title: Logomark Black
link:
url: /img/brand-resources-logos/logomark-black.svg
text: Download
- id: 2
logo:
src: /img/brand-resources-logos/logomark-white.svg
alt: Logomark White
title: Logomark White
link:
url: /img/brand-resources-logos/logomark-white.svg
text: Download
colors:
title: Colors
description: Our brand colors play a crucial role in maintaining a cohesive visual identity. The careful balance of these colors ensures a consistent and impactful representation of Qdrant, reinforcing our commitment to excellence and precision in every aspect of our work.
cards:
- id: 0
name: Amaranth
type: HEX
code: "DC244C"
- id: 1
name: Blue
type: HEX
code: "2F6FF0"
- id: 2
name: Violet
type: HEX
code: "8547FF"
- id: 3
name: Teal
type: HEX
code: "038585"
- id: 4
name: Black
type: HEX
code: "090E1A"
- id: 5
name: White
type: HEX
code: "FFFFFF"
typography:
title: Typography
description: Our main typeface is Satoshi, which is employed for both UI and marketing purposes. Headlines are set in Bold (600), while body text is rendered in Medium (500).
example: AaBb
specimen: "ABCDEFGHIJKLMNOPQRSTUVWXYZ<br>abcdefghijklmnopqrstuvwxyz<br>0123456789 !@#$%^&*()"
link:
url: https://api.fontshare.com/v2/fonts/download/satoshi
text: Download
trademarks:
title: Trademarks
description: All assets associated with the Qdrant brand are safeguarded by relevant trademark, copyright, and intellectual property regulations. Utilization of the Qdrant trademark must adhere to the specified Qdrant Trademark Standards for Use.<br><br>Should you require clarification or seek permission to utilize these resources, feel free to reach out to us at
link:
url: "mailto:info@qdrant.com"
text: info@qdrant.com.
sitemapExclude: true
---
| brand-resources/brand-resources-content.md |
---
title: Qdrant Brand Resources
buttons:
- id: 0
url: "#logo"
text: Logo
- id: 1
url: "#colors"
text: Colors
- id: 2
url: "#typography"
text: Typography
- id: 3
url: "#trademarks"
text: Trademarks
sitemapExclude: true
---
| brand-resources/brand-resources-hero.md |
---
title: brand-resources
description: brand-resources
build:
render: always
cascade:
- build:
list: local
publishResources: false
render: never
---
| brand-resources/_index.md |
---
title: Cloud Quickstart
weight: 4
aliases:
- quickstart-cloud
- ../cloud-quick-start
- cloud-quick-start
- cloud-quickstart
- cloud/quickstart-cloud/
---
# How to Get Started With Qdrant Cloud
<p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/g6uJhjAoNMg?si=EZ3OtmEdKKHIOgFy" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p style="text-align: center;">You can try vector search on Qdrant Cloud in three steps.
<br/> Instructions are below, but the video is faster:</p>
## Setup a Qdrant Cloud cluster
1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google, or GitHub credentials.
2. Go to **Overview** and follow the onboarding instructions under **Create First Cluster**.
![create a cluster](/docs/gettingstarted/gui-quickstart/create-cluster.png)
3. When you create it, you will receive an API key. You will need to copy and paste it soon.
4. Your new cluster will be created under **Clusters**. Give it a few moments to provision.
## Access the cluster dashboard
1. Go to your **Clusters**. Under **Actions**, open the **Dashboard**.
2. Paste your new API key here. If you lost it, make another in **Access Management**.
3. The key will grant you access to your Qdrant instance. Now you can see the cluster Dashboard.
![access the dashboard](/docs/gettingstarted/gui-quickstart/access-dashboard.png)
## Try the Tutorial sandbox
1. Open the interactive **Tutorial**. Here, you can test basic Qdrant API requests.
2. Using the **Quickstart** instructions, create a collection, add vectors and run a search.
3. The output on the right will show you some basic semantic search results.
![interactive-tutorial](/docs/gettingstarted/gui-quickstart/interactive-tutorial.png)
## That's vector search!
You can stay in the sandbox and continue trying out our different API calls.<br/>
When ready, use the Console and our complete REST API to try other operations.
## What's next?
Now that you have a Qdrant Cloud cluster up and running, you should [test remote access](/documentation/cloud/authentication/#test-cluster-access) with a Qdrant Client.
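Before wiring up a full client, you can sanity-check remote access with a plain REST call. The sketch below uses only the Python standard library; the cluster URL and key are placeholders you must replace with your own, and Qdrant clusters authenticate REST requests via the `api-key` header:

```python
import urllib.request

# Placeholders: substitute your cluster URL and the API key from Access Management
CLUSTER_URL = "https://YOUR-CLUSTER-ID.REGION.aws.cloud.qdrant.io:6333"
API_KEY = "YOUR_API_KEY"

# Qdrant clusters expect the API key in the api-key header
req = urllib.request.Request(CLUSTER_URL + "/collections", headers={"api-key": API_KEY})

# Uncomment once the placeholders are filled in:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

A `200` response listing your collections confirms both network access and a valid key; a `401`/`403` means the key is wrong.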
| documentation/quickstart-cloud.md |
---
title: Release Notes
weight: 24
type: external-link
external_url: https://github.com/qdrant/qdrant/releases
sitemapExclude: True
---
| documentation/release-notes.md |
---
title: Benchmarks
weight: 33
draft: true
---
| documentation/benchmarks.md |
---
title: Community links
weight: 42
draft: true
---
# Community Contributions
Though we do not officially maintain this content, we still feel that it is valuable and thank our dedicated contributors.
| Link | Description | Stack |
|------|------------------------------|--------|
| [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete python toolset that supports migration between two products. | Qdrant, Pinecone |
| [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex |
| [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
| documentation/community-links.md |
---
title: Local Quickstart
weight: 5
aliases:
- quick_start
- quick-start
- quickstart
---
# How to Get Started with Qdrant Locally
In this short example, you will use the Python client to create a collection, load data into it, and run a basic search query.
<aside role="status">Before you start, please make sure Docker is installed and running on your system.</aside>
## Download and run
First, download the latest Qdrant image from Docker Hub:
```bash
docker pull qdrant/qdrant
```
Then, run the service:
```bash
docker run -p 6333:6333 -p 6334:6334 \
-v $(pwd)/qdrant_storage:/qdrant/storage:z \
qdrant/qdrant
```
Under the default configuration, all data will be stored in the `./qdrant_storage` directory. This is also the only directory that both the container and the host machine can see.
Qdrant is now accessible:
- REST API: [localhost:6333](http://localhost:6333)
- Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard)
- GRPC API: [localhost:6334](http://localhost:6334)
## Initialize the client
```python
from qdrant_client import QdrantClient
client = QdrantClient(url="http://localhost:6333")
```
```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
const client = new QdrantClient({ host: "localhost", port: 6333 });
```
```rust
use qdrant_client::Qdrant;
// The Rust client uses Qdrant's gRPC interface
let client = Qdrant::from_url("http://localhost:6334").build()?;
```
```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
// The Java client uses Qdrant's gRPC interface
QdrantClient client = new QdrantClient(
QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
```
```csharp
using Qdrant.Client;
// The C# client uses Qdrant's gRPC interface
var client = new QdrantClient("localhost", 6334);
```
```go
import "github.com/qdrant/go-client/qdrant"
// The Go client uses Qdrant's gRPC interface
client, err := qdrant.NewClient(&qdrant.Config{
Host: "localhost",
Port: 6334,
})
```
<aside role="status">By default, Qdrant starts with no encryption or authentication. This means anyone with network access to your machine can access your Qdrant container instance. Please read <a href="/documentation/security/">Security</a> carefully for details on how to secure your instance.</aside>
## Create a collection
You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection will use the dot product distance metric to compare vectors.
```python
from qdrant_client.models import Distance, VectorParams
client.create_collection(
collection_name="test_collection",
vectors_config=VectorParams(size=4, distance=Distance.DOT),
)
```
```typescript
await client.createCollection("test_collection", {
vectors: { size: 4, distance: "Dot" },
});
```
```rust
use qdrant_client::qdrant::{CreateCollectionBuilder, VectorParamsBuilder};
client
.create_collection(
CreateCollectionBuilder::new("test_collection")
.vectors_config(VectorParamsBuilder::new(4, Distance::Dot)),
)
.await?;
```
```java
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
client.createCollectionAsync("test_collection",
VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get();
```
```csharp
using Qdrant.Client.Grpc;
await client.CreateCollectionAsync(collectionName: "test_collection", vectorsConfig: new VectorParams
{
Size = 4, Distance = Distance.Dot
});
```
```go
import (
"context"
"github.com/qdrant/go-client/qdrant"
)
client.CreateCollection(context.Background(), &qdrant.CreateCollection{
CollectionName: "test_collection",
VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
Size: 4,
Distance: qdrant.Distance_Dot,
}),
})
```
## Add vectors
Let's now add a few vectors with a payload. Payloads are other data you want to associate with the vector:
```python
from qdrant_client.models import PointStruct
operation_info = client.upsert(
collection_name="test_collection",
wait=True,
points=[
PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"city": "Berlin"}),
PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={"city": "London"}),
PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={"city": "Moscow"}),
PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={"city": "New York"}),
PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={"city": "Beijing"}),
PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={"city": "Mumbai"}),
],
)
print(operation_info)
```
```typescript
const operationInfo = await client.upsert("test_collection", {
wait: true,
points: [
{ id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: "Berlin" } },
{ id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: "London" } },
{ id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: "Moscow" } },
{ id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: "New York" } },
{ id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: "Beijing" } },
{ id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: "Mumbai" } },
],
});
console.debug(operationInfo);
```
```rust
use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
let points = vec![
PointStruct::new(1, vec![0.05, 0.61, 0.76, 0.74], [("city", "Berlin".into())]),
PointStruct::new(2, vec![0.19, 0.81, 0.75, 0.11], [("city", "London".into())]),
PointStruct::new(3, vec![0.36, 0.55, 0.47, 0.94], [("city", "Moscow".into())]),
// ..truncated
];
let response = client
.upsert_points(UpsertPointsBuilder::new("test_collection", points).wait(true))
.await?;
dbg!(response);
```
```java
import java.util.List;
import java.util.Map;
import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.UpdateResult;
UpdateResult operationInfo =
client
.upsertAsync(
"test_collection",
List.of(
PointStruct.newBuilder()
.setId(id(1))
.setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
.putAllPayload(Map.of("city", value("Berlin")))
.build(),
PointStruct.newBuilder()
.setId(id(2))
.setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
.putAllPayload(Map.of("city", value("London")))
.build(),
PointStruct.newBuilder()
.setId(id(3))
.setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
.putAllPayload(Map.of("city", value("Moscow")))
.build()))
// Truncated
.get();
System.out.println(operationInfo);
```
```csharp
using Qdrant.Client.Grpc;
var operationInfo = await client.UpsertAsync(collectionName: "test_collection", points: new List<PointStruct>
{
new()
{
Id = 1,
Vectors = new float[]
{
0.05f, 0.61f, 0.76f, 0.74f
},
Payload = {
["city"] = "Berlin"
}
},
new()
{
Id = 2,
Vectors = new float[]
{
0.19f, 0.81f, 0.75f, 0.11f
},
Payload = {
["city"] = "London"
}
},
new()
{
Id = 3,
Vectors = new float[]
{
0.36f, 0.55f, 0.47f, 0.94f
},
Payload = {
["city"] = "Moscow"
}
},
// Truncated
});
Console.WriteLine(operationInfo);
```
```go
import (
"context"
"fmt"
"github.com/qdrant/go-client/qdrant"
)
operationInfo, err := client.Upsert(context.Background(), &qdrant.UpsertPoints{
CollectionName: "test_collection",
Points: []*qdrant.PointStruct{
{
Id: qdrant.NewIDNum(1),
Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
Payload: qdrant.NewValueMap(map[string]any{"city": "Berlin"}),
},
{
Id: qdrant.NewIDNum(2),
Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11),
Payload: qdrant.NewValueMap(map[string]any{"city": "London"}),
},
{
Id: qdrant.NewIDNum(3),
Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94),
Payload: qdrant.NewValueMap(map[string]any{"city": "Moscow"}),
},
// Truncated
},
})
if err != nil {
panic(err)
}
fmt.Println(operationInfo)
```
**Response:**
```python
operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'>
```
```typescript
{ operation_id: 0, status: 'completed' }
```
```rust
PointsOperationResponse {
result: Some(
UpdateResult {
operation_id: Some(
0,
),
status: Completed,
},
),
time: 0.00094027,
}
```
```java
operation_id: 0
status: Completed
```
```csharp
{ "operationId": "0", "status": "Completed" }
```
```go
operation_id:0 status:Acknowledged
```
## Run a query
Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`?
```python
search_result = client.query_points(
collection_name="test_collection", query=[0.2, 0.1, 0.9, 0.7], limit=3
).points
print(search_result)
```
```typescript
let searchResult = await client.query(
"test_collection", {
query: [0.2, 0.1, 0.9, 0.7],
limit: 3
});
console.debug(searchResult.points);
```
```rust
use qdrant_client::qdrant::QueryPointsBuilder;
let search_result = client
.query(
QueryPointsBuilder::new("test_collection")
.query(vec![0.2, 0.1, 0.9, 0.7])
)
.await?;
dbg!(search_result);
```
```java
import java.util.List;
import io.qdrant.client.grpc.Points.ScoredPoint;
import io.qdrant.client.grpc.Points.QueryPoints;
import static io.qdrant.client.QueryFactory.nearest;
List<ScoredPoint> searchResult =
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("test_collection")
.setLimit(3)
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.build()).get();
System.out.println(searchResult);
```
```csharp
var searchResult = await client.QueryAsync(
collectionName: "test_collection",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
limit: 3
);
Console.WriteLine(searchResult);
```
```go
import (
"context"
"fmt"
"github.com/qdrant/go-client/qdrant"
)
searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "test_collection",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
})
if err != nil {
panic(err)
}
fmt.Println(searchResult)
```
**Response:**
```json
[
{
"id": 4,
"version": 0,
"score": 1.362,
"payload": null,
"vector": null
},
{
"id": 1,
"version": 0,
"score": 1.273,
"payload": null,
"vector": null
},
{
"id": 3,
"version": 0,
"score": 1.208,
"payload": null,
"vector": null
}
]
```
The results are returned in decreasing similarity order. Note that payload and vector data are missing in these results by default.
See [payload and vector in the result](../concepts/search/#payload-and-vector-in-the-result) on how to enable it.
## Add a filter
We can narrow down the results further by filtering by payload. Let's find the closest results that include "London".
```python
from qdrant_client.models import Filter, FieldCondition, MatchValue
search_result = client.query_points(
collection_name="test_collection",
query=[0.2, 0.1, 0.9, 0.7],
query_filter=Filter(
must=[FieldCondition(key="city", match=MatchValue(value="London"))]
),
with_payload=True,
limit=3,
).points
print(search_result)
```
```typescript
searchResult = await client.query("test_collection", {
query: [0.2, 0.1, 0.9, 0.7],
filter: {
must: [{ key: "city", match: { value: "London" } }],
},
with_payload: true,
limit: 3,
});
console.debug(searchResult);
```
```rust
use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder};
let search_result = client
.query(
QueryPointsBuilder::new("test_collection")
.query(vec![0.2, 0.1, 0.9, 0.7])
.filter(Filter::must([Condition::matches(
"city",
"London".to_string(),
)]))
.with_payload(true),
)
.await?;
dbg!(search_result);
```
```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
List<ScoredPoint> searchResult =
client.queryAsync(QueryPoints.newBuilder()
.setCollectionName("test_collection")
.setLimit(3)
.setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")))
.setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
.setWithPayload(enable(true))
.build()).get();
System.out.println(searchResult);
```
```csharp
using static Qdrant.Client.Grpc.Conditions;
var searchResult = await client.QueryAsync(
collectionName: "test_collection",
query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
filter: MatchKeyword("city", "London"),
limit: 3,
payloadSelector: true
);
Console.WriteLine(searchResult);
```
```go
import (
"context"
"fmt"
"github.com/qdrant/go-client/qdrant"
)
searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{
CollectionName: "test_collection",
Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
Filter: &qdrant.Filter{
Must: []*qdrant.Condition{
qdrant.NewMatch("city", "London"),
},
},
WithPayload: qdrant.NewWithPayload(true),
})
if err != nil {
panic(err)
}
fmt.Println(searchResult)
```
**Response:**
```json
[
{
"id": 2,
"version": 0,
"score": 0.871,
"payload": {
"city": "London"
},
"vector": null
}
]
```
<aside role="status">To make filtered search fast on real datasets, we highly recommend to create <a href="../concepts/indexing/#payload-index">payload indexes</a>!</aside>
You have just conducted vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score.
## Next steps
Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates.
To move onto some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/).
**Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get setup.
| documentation/quickstart.md |
---
title: Qdrant Cloud API
weight: 10
---
# Qdrant Cloud API
The Qdrant Cloud API lets you manage Cloud accounts and their respective Qdrant clusters. You can use this API to manage your clusters, authentication methods, and cloud configurations.
| REST API | Documentation |
| -------- | ------------------------------------------------------------------------------------ |
| v.0.1.0 | [OpenAPI Specification](https://cloud.qdrant.io/pa/v1/docs) |
**Note:** This is not the Qdrant REST API. For core product APIs & SDKs, see our list of [interfaces](/documentation/interfaces/)
## Authentication: Connecting to Cloud API
To interact with the Qdrant Cloud API, you must authenticate using an API key. Each request to the API must include the API key in the **Authorization** header. The API key acts as a bearer token and grants access to your account’s resources.
You can create a Cloud API key in the Cloud Console UI. Go to **Access Management** > **Qdrant Cloud API Keys**.
![Authentication](/documentation/cloud/authentication.png)
**Note:** Ensure that the API key is kept secure and not exposed in public repositories or logs. Once authenticated, the API allows you to manage clusters, collections, and perform other operations available to your account.
## Sample API Request
Here's an example of a basic request to **list all clusters** in your Qdrant Cloud account:
```bash
curl -X 'GET' \
'https://cloud.qdrant.io/pa/v1/accounts/<YOUR_ACCOUNT_ID>/clusters' \
-H 'accept: application/json' \
-H 'Authorization: <YOUR_API_KEY>'
```
This request will return a list of clusters associated with your account in JSON format.
## Cluster Management
Use these endpoints to create and manage your Qdrant database clusters. The API supports fine-grained control over cluster resources (CPU, RAM, disk), node configurations, tolerations, and other operational characteristics across all cloud providers (AWS, GCP, Azure) and their respective regions in Qdrant Cloud, as well as Hybrid Cloud.
- **Get Cluster by ID**: Retrieve detailed information about a specific cluster using the cluster ID and associated account ID.
- **Delete Cluster**: Remove a cluster, with optional deletion of backups.
- **Update Cluster**: Apply modifications to a cluster's configuration.
- **List Clusters**: Get all clusters associated with a specific account, filtered by region or other criteria.
- **Create Cluster**: Add new clusters to the account with configurable parameters such as nodes, cloud provider, and regions.
- **Get Booking**: Manage hosting across various cloud providers (AWS, GCP, Azure) and their respective regions.
## Cluster Authentication Management
Use these endpoints to manage your cluster API keys.
- **List API Keys**: Retrieve all API keys associated with an account.
- **Create API Key**: Generate a new API key for programmatic access.
- **Delete API Key**: Revoke access by deleting a specific API key.
- **Update API Key**: Modify attributes of an existing API key.
| documentation/qdrant-cloud-api.md |
---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "Getting Started"
type: delimiter
weight: 1 # Change this weight to change order of sections
sitemapExclude: True
_build:
publishResources: false
render: never
--- | documentation/0-dl.md |
---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "Integrations"
type: delimiter
weight: 14 # Change this weight to change order of sections
sitemapExclude: True
_build:
publishResources: false
render: never
--- | documentation/2-dl.md |
---
title: Roadmap
weight: 32
draft: true
---
# Qdrant 2023 Roadmap
Goals of the release:
* **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back.
* That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version.
* Storage should be compatible between any two consequent versions, so you can upgrade Qdrant with automatic data migration between consecutive versions.
* **Make billion-scale serving cheap** - qdrant already can serve billions of vectors, but we want to make it even more affordable.
* **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you could go from 1 to 1B vectors seamlessly.
* **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc.
## Milestones
* :atom_symbol: Quantization support
* [ ] Scalar quantization f32 -> u8 (4x compression)
* [ ] Advanced quantization (8x and 16x compression)
* [ ] Support for binary vectors
---
* :arrow_double_up: Scalability
* [ ] Automatic replication factor adjustment
* [ ] Automatic shard distribution on cluster scaling
* [ ] Repartitioning support
---
* :eyes: Search scenarios
* [ ] Diversity search - search for vectors that are different from each other
* [ ] Sparse vectors search - search for vectors with a small number of non-zero values
* [ ] Grouping requests - search within payload-defined groups
* [ ] Different scenarios for recommendation API
---
* Additionally
* [ ] Extend full-text filtering support
* [ ] Support for phrase queries
* [ ] Support for logical operators
* [ ] Simplify update of collection parameters
| documentation/roadmap.md |
---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "Managed Services"
type: delimiter
weight: 7 # Change this weight to change order of sections
sitemapExclude: True
_build:
publishResources: false
render: never
--- | documentation/4-dl.md |
---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "Examples"
type: delimiter
weight: 17 # Change this weight to change order of sections
sitemapExclude: True
_build:
publishResources: false
render: never
--- | documentation/3-dl.md |
---
title: Practice Datasets
weight: 23
---
# Common Datasets in Snapshot Format
You may find that creating embeddings from datasets is a very resource-intensive task.
If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page.
These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance.
## Available datasets
Our snapshots are usually generated from publicly available datasets, which are often used for
non-commercial or academic purposes. The following datasets are currently available. Please click
on a dataset name to see its detailed description.
| Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub |
|--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
| [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) |
| [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) |
| [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) |
Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot)
using the Qdrant CLI upon startup or through the API.
## Qdrant on Hugging Face
<p align="center">
<a href="https://huggingface.co/Qdrant">
<img style="width: 500px; max-width: 100%;" src="/content/images/hf-logo-with-title.svg" alt="HuggingFace" title="HuggingFace">
</a>
</p>
[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and
datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to
provide you with datasets containing neural embeddings that you can use to practice with Qdrant
and build your applications based on semantic search. **Please let us know if you'd like to see
a specific dataset!**
If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index),
or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/).
## Arxiv.org
[Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple
fields. Operated by Cornell University, arXiv allows researchers to share their findings with
the scientific community and receive feedback before they undergo peer review for formal
publication. Its archives host millions of scholarly articles, making it an invaluable resource
for those looking to explore the cutting edge of scientific research. With a high frequency of
daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset
that is ripe for mining, analysis, and the development of future innovations.
<aside role="status">
Arxiv.org snapshots were created using precomputed embeddings exposed by <a href="https://alex.macrocosm.so/download">the Alexandria Index</a>.
</aside>
### Arxiv.org titles
This dataset contains embeddings generated from the paper titles only. Each vector has a
payload with the title used to create it, along with the DOI (Digital Object Identifier).
```json
{
"title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities",
"DOI": "1612.05191"
}
```
The embeddings generated with InstructorXL model have been generated using the following
instruction:
> Represent the Research Paper title for retrieval; Input:
The following code snippet shows how to generate embeddings using the InstructorXL model:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR("hkunlp/instructor-xl")
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Research Paper title for retrieval; Input:"
embeddings = model.encode([[instruction, sentence]])
```
The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot).
#### Importing the dataset
The easiest way to use the provided dataset is to recover it via the API by passing the
URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
code snippet shows how to create a new collection and fill it with the snapshot data:
```http request
PUT /collections/{collection_name}/snapshots/recover
{
"location": "https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot"
}
```
### Arxiv.org abstracts
This dataset contains embeddings generated from the paper abstracts. Each vector has a
payload with the abstract used to create it, along with the DOI (Digital Object Identifier).
```json
{
"abstract": "Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n",
"DOI": "1612.05191"
}
```
The embeddings generated with InstructorXL model have been generated using the following
instruction:
> Represent the Research Paper abstract for retrieval; Input:
The following code snippet shows how to generate embeddings using the InstructorXL model:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR("hkunlp/instructor-xl")
sentence = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."
instruction = "Represent the Research Paper abstract for retrieval; Input:"
embeddings = model.encode([[instruction, sentence]])
```
The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot).
#### Importing the dataset
The easiest way to use the provided dataset is to recover it via the API by passing the
URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
code snippet shows how to create a new collection and fill it with the snapshot data:
```http request
PUT /collections/{collection_name}/snapshots/recover
{
"location": "https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot"
}
```
## Wolt food
Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of
food images from the Wolt app. Each point in the collection represents a dish with a single
image. The image is represented as a vector of 512 float numbers. There is also a JSON
payload attached to each point, which looks similar to this:
```json
{
"cafe": {
"address": "VGX7+6R2 Vecchia Napoli, Valletta",
"categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"],
"location": {"lat": 35.8980154, "lon": 14.5145106},
"menu_id": "610936a4ee8ea7a56f4a372a",
"name": "Vecchia Napoli Is-Suq Tal-Belt",
"rating": 9,
"slug": "vecchia-napoli-skyparks-suq-tal-belt"
},
"description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli",
"image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg",
"name": "L'Amatriciana"
}
```
The embeddings generated with clip-ViT-B-32 model have been generated using the following
code snippet:
```python
from PIL import Image
from sentence_transformers import SentenceTransformer
image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"
model = SentenceTransformer("clip-ViT-B-32")
embedding = model.encode(Image.open(image_path))
```
The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot).
#### Importing the dataset
The easiest way to use the provided dataset is to recover it via the API by passing the
URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
code snippet shows how to create a new collection and fill it with the snapshot data:
```http request
PUT /collections/{collection_name}/snapshots/recover
{
"location": "https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot"
}
```
| documentation/datasets.md |
---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "User Manual"
type: delimiter
weight: 10 # Change this weight to change order of sections
sitemapExclude: True
_build:
publishResources: false
render: never
--- | documentation/1-dl.md |
---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "Support"
type: delimiter
weight: 21 # Change this weight to change order of sections
sitemapExclude: True
_build:
publishResources: false
render: never
--- | documentation/5-dl.md |
---
title: Home
weight: 2
hideTOC: true
---
# Documentation
Qdrant is an AI-native vector dabatase and a semantic search engine. You can use it to extract meaningful information from unstructured data. Want to see how it works? [Clone this repo now](https://github.com/qdrant/qdrant_demo/) and build a search engine in five minutes.
|||
|-:|:-|
|[Cloud Quickstart](/documentation/quickstart-cloud/)|[Local Quickstart](/documentation/quick-start/)|
## Ready to start developing?
***<p style="text-align: center;">Qdrant is open-source and can be self-hosted. However, the quickest way to get started is with our [free tier](https://qdrant.to/cloud) on Qdrant Cloud. It scales easily and provides an UI where you can interact with data.</p>***
[![Hybrid Cloud](/docs/homepage/cloud-cta.png)](https://qdrant.to/cloud)
## Qdrant's most popular features:
||||
|:-|:-|:-|
|[Filtrable HNSW](/documentation/filtering/) </br> Single-stage payload filtering | [Recommendations & Context Search](/documentation/concepts/explore/#explore-the-data) </br> Exploratory advanced search| [Pure-Vector Hybrid Search](/documentation/hybrid-queries/)</br>Full text and semantic search in one|
|[Multitenancy](/documentation/guides/multiple-partitions/) </br> Payload-based partitioning|[Custom Sharding](/documentation/guides/distributed_deployment/#sharding) </br> For data isolation and distribution|[Role Based Access Control](/documentation/guides/security/?q=jwt#granular-access-control-with-jwt)</br>Secure JWT-based access |
|[Quantization](/documentation/guides/quantization/) </br> Compress data for drastic speedups|[Multivector Support](/documentation/concepts/vectors/?q=multivect#multivectors) </br> For ColBERT late interaction |[Built-in IDF](/documentation/concepts/indexing/?q=inverse+docu#idf-modifier) </br> Cutting-edge similarity calculation| | documentation/_index.md |
---
title: Contribution Guidelines
weight: 35
draft: true
---
# How to contribute
If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps, the best contribution would be the feedback on your experience with Qdrant.
Let us know whenever you have a problem, face an unexpected behavior, or see a lack of documentation.
You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop up a [message](https://discord.gg/tdtYvXjC4h).
If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community.
For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md).
If you have problems with code or architecture understanding - reach us at any time.
Feeling confident and want to contribute more? - Come to [work with us](https://qdrant.join.com/)! | documentation/contribution-guidelines.md |
---
title: Bubble
aliases: [ ../frameworks/bubble/ ]
---
# Bubble
[Bubble](https://bubble.io/) is a software development platform that enables anyone to build and launch fully functional web applications without writing code.
You can use the [Qdrant Bubble plugin](https://bubble.io/plugin/qdrant-1716804374179x344999530386685950) to interface with Qdrant in your workflows.
## Prerequisites
1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. An account at [Bubble.io](https://bubble.io/) and an app set up.
## Setting up the plugin
Navigate to your app's workflows. Select `"Install more plugins actions"`.
![Install New Plugin](/documentation/frameworks/bubble/install-bubble-plugin.png)
You can now search for the Qdrant plugin and install it. Ensure all the categories are selected to perform a full search.
![Qdrant Plugin Search](/documentation/frameworks/bubble/qdrant-plugin-search.png)
The Qdrant plugin can now be found in the installed plugins section of your workflow. Enter the API key of your Qdrant instance for authentication.
![Qdrant Plugin Home](/documentation/frameworks/bubble/qdrant-plugin-home.png)
The plugin provides actions for upserting, searching, updating and deleting points from your Qdrant collection with dynamic and static values from your Bubble workflow.
## Further Reading
- [Bubble Academy](https://bubble.io/academy).
- [Bubble Manual](https://manual.bubble.io/)
| documentation/platforms/bubble.md |
---
title: Make.com
aliases: [ ../frameworks/make/ ]
---
# Make.com
[Make](https://www.make.com/) is a platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems without code.
Find the comprehensive list of available Make apps [here](https://www.make.com/en/integrations).
Qdrant is available as an [app](https://www.make.com/en/integrations/qdrant) within Make to add to your scenarios.
![Qdrant Make hero](/documentation/frameworks/make/hero-page.png)
## Prerequisites
Before you start, make sure you have the following:
1. A Qdrant instance to connect to. You can get free cloud instance [cloud.qdrant.io](https://cloud.qdrant.io/).
2. An account at Make.com. You can register yourself [here](https://www.make.com/en/register).
## Setting up a connection
Navigate to your scenario on the Make dashboard and select a Qdrant app module to start a connection.
![Qdrant Make connection](/documentation/frameworks/make/connection.png)
You can now establish a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/).
![Qdrant Make form](/documentation/frameworks/make/connection-form.png)
## Modules
Modules represent actions that Make performs with an app.
The Qdrant Make app enables you to trigger the following app modules.
![Qdrant Make modules](/documentation/frameworks/make/modules.png)
The modules support mapping to connect the data retrieved by one module to another module to perform the desired action. You can read more about the data processing options available for the modules in the [Make reference](https://www.make.com/en/help/modules).
## Next steps
- Find a list of Make workflow templates to connect with Qdrant [here](https://www.make.com/en/templates).
- Make scenario reference docs can be found [here](https://www.make.com/en/help/scenarios). | documentation/platforms/make.md |
---
title: Portable.io
aliases: [ ../frameworks/portable/ ]
---
# Portable
[Portable](https://portable.io/) is an ELT platform that builds connectors on-demand for data teams. It enables connecting applications to your data warehouse with no code.
You can avail the [Qdrant connector](https://portable.io/connectors/qdrant) to build data pipelines from your collections.
![Qdrant Connector](/documentation/frameworks/portable/home.png)
## Prerequisites
1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. A [Portable account](https://app.portable.io/).
## Setting up the connector
Navigate to the Portable dashboard. Search for `"Qdrant"` in the sources section.
![Install New Source](/documentation/frameworks/portable/install.png)
Configure the connector with your Qdrant instance credentials.
![Configure connector](/documentation/frameworks/portable/configure.png)
You can now build your flows using data from Qdrant by selecting a [destination](https://app.portable.io/destinations) and scheduling it.
## Further Reading
- [Portable API Reference](https://developer.portable.io/api-reference/introduction).
- [Portable Academy](https://portable.io/learn)
| documentation/platforms/portable.md |
---
title: BuildShip
aliases: [ ../frameworks/buildship/ ]
---
# BuildShip
[BuildShip](https://buildship.com/) is a low-code visual builder to create APIs, scheduled jobs, and backend workflows with AI assitance.
You can use the [Qdrant integration](https://buildship.com/integrations/qdrant) to development workflows with semantic-search capabilites.
## Prerequisites
1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. A [BuildsShip](https://buildship.app/) for developing workflows.
## Nodes
Nodes are are fundamental building blocks of BuildShip. Each responsible for an operation in your workflow.
The Qdrant integration includes the following nodes with extensibility if required.
### Add Point
![Add Point](/documentation/frameworks/buildship/add.png)
### Retrieve Points
![Retrieve Points](/documentation/frameworks/buildship/get.png)
### Delete Points
![Delete Points](/documentation/frameworks/buildship/delete.png)
### Search Points
![Search Points](/documentation/frameworks/buildship/search.png)
## Further Reading
- [BuildShip Docs](https://docs.buildship.com/basics/node).
- [BuildShip Integrations](https://buildship.com/integrations)
| documentation/platforms/buildship.md |
---
title: Apify
aliases: [ ../frameworks/apify/ ]
---
# Apify
[Apify](https://apify.com/) is a web scraping and browser automation platform featuring an [app store](https://apify.com/store) with over 1,500 pre-built micro-apps known as Actors. These serverless cloud programs, which are essentially dockers under the hood, are designed for various web automation applications, including data collection.
One such Actor, built especially for AI and RAG applications, is [Website Content Crawler](https://apify.com/apify/website-content-crawler).
It's ideal for this purpose because it has built-in HTML processing and data-cleaning functions. That means you can easily remove fluff, duplicates, and other things on a web page that aren't relevant, and provide only the necessary data to the language model.
The Markdown can then be used to feed Qdrant to train AI models or supply them with fresh web content.
Qdrant is available as an [official integration](https://apify.com/apify/qdrant-integration) to load Apify datasets into a collection.
You can refer to the [Apify documentation](https://docs.apify.com/platform/integrations/qdrant) to set up the integration via the Apify UI.
## Programmatic Usage
Apify also supports programmatic access to integrations via the [Apify Python SDK](https://docs.apify.com/sdk/python/).
1. Install the Apify Python SDK by running the following command:
```sh
pip install apify-client
```
2. Create a Python script and import all the necessary modules:
```python
from apify_client import ApifyClient
APIFY_API_TOKEN = "YOUR-APIFY-TOKEN"
OPENAI_API_KEY = "YOUR-OPENAI-API-KEY"
# COHERE_API_KEY = "YOUR-COHERE-API-KEY"
QDRANT_URL = "YOUR-QDRANT-URL"
QDRANT_API_KEY = "YOUR-QDRANT-API-KEY"
client = ApifyClient(APIFY_API_TOKEN)
```
3. Call the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor to crawl the Qdrant documentation and extract text content from the web pages:
```python
actor_call = client.actor("apify/website-content-crawler").call(
run_input={"startUrls": [{"url": "https://qdrant.tech/documentation/"}]}
)
```
4. Call the Qdrant integration and store all data in the Qdrant Vector Database:
```python
qdrant_integration_inputs = {
"qdrantUrl": QDRANT_URL,
"qdrantApiKey": QDRANT_API_KEY,
"qdrantCollectionName": "apify",
"qdrantAutoCreateCollection": True,
"datasetId": actor_call["defaultDatasetId"],
"datasetFields": ["text"],
"enableDeltaUpdates": True,
"deltaUpdatesPrimaryDatasetFields": ["url"],
"expiredObjectDeletionPeriodDays": 30,
"embeddingsProvider": "OpenAI", # "Cohere"
"embeddingsApiKey": OPENAI_API_KEY,
"performChunking": True,
"chunkSize": 1000,
"chunkOverlap": 0,
}
actor_call = client.actor("apify/qdrant-integration").call(run_input=qdrant_integration_inputs)
```
Upon running the script, the data from <https://qdrant.tech/documentation/> will be scraped, transformed into vector embeddings and stored in the Qdrant collection.
## Further Reading
- Apify [Documentation](https://docs.apify.com/)
- Apify [Templates](https://apify.com/templates)
- Integration [Source Code](https://github.com/apify/actor-vector-database-integrations)
| documentation/platforms/apify.md |
---
title: PrivateGPT
aliases: [ ../integrations/privategpt/, ../frameworks/privategpt/ ]
---
# PrivateGPT
[PrivateGPT](https://docs.privategpt.dev/) is a production-ready AI project that allows you to inquire about your documents using Large Language Models (LLMs) with offline support.
PrivateGPT uses Qdrant as the default vectorstore for ingesting and retrieving documents.
## Configuration
Qdrant settings can be configured by setting values to the qdrant property in the `settings.yaml` file. By default, Qdrant tries to connect to an instance at http://localhost:3000.
Example:
```yaml
qdrant:
url: "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333"
api_key: "<your-api-key>"
```
The available [configuration options](https://docs.privategpt.dev/manual/storage/vector-stores#qdrant-configuration) are:
| Field | Description |
|--------------|-------------|
| location | If `:memory:` - use in-memory Qdrant instance.<br>If `str` - use it as a `url` parameter.|
| url | Either host or str of `Optional[scheme], host, Optional[port], Optional[prefix]`.<br> Eg. `http://localhost:6333` |
| port | Port of the REST API interface. Default: `6333` |
| grpc_port | Port of the gRPC interface. Default: `6334` |
| prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. |
| https | If `true` - use HTTPS(SSL) protocol.|
| api_key | API key for authentication in Qdrant Cloud.|
| prefix | If set, add `prefix` to the REST URL path.<br>Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.|
| timeout | Timeout for REST and gRPC API requests.<br>Default: 5.0 seconds for REST and unlimited for gRPC |
| host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.|
| path | Persistence path for QdrantLocal. Eg. `local_data/private_gpt/qdrant`|
| force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection.|
## Next steps
Find the PrivateGPT docs [here](https://docs.privategpt.dev/).
| documentation/platforms/privategpt.md |
---
title: Pipedream
aliases: [ ../frameworks/pipedream/ ]
---
# Pipedream
[Pipedream](https://pipedream.com/) is a development platform that allows developers to connect many different applications, data sources, and APIs in order to build automated cross-platform workflows. It also offers code-level control with Node.js, Python, Go, or Bash if required.
You can use the [Qdrant app](https://pipedream.com/apps/qdrant) in Pipedream to add vector search capabilities to your workflows.
## Prerequisites
1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. A [Pipedream project](https://pipedream.com/) to develop your workflows.
## Setting Up
Search for the Qdrant app in your workflow apps.
![Qdrant Pipedream App](/documentation/frameworks/pipedream/qdrant-app.png)
The Qdrant app offers an extensible API interface and pre-built actions.
![Qdrant App Features](/documentation/frameworks/pipedream/app-features.png)
Select any of the actions of the app to set up a connection.
![Qdrant Connect Account](/documentation/frameworks/pipedream/app-upsert-action.png)
Configure connection with the credentials of your Qdrant instance.
![Qdrant Connection Credentials](/documentation/frameworks/pipedream/app-connection.png)
You can verify your credentials using the "Test Connection" button.
Once a connection is set up, you can use the app to build workflows with the [2000+ apps supported by Pipedream](https://pipedream.com/apps/).
## Further Reading
- [Pipedream Documentation](https://pipedream.com/docs).
- [Qdrant Cloud Authentication](https://qdrant.tech/documentation/cloud/authentication/).
- [Source Code](https://github.com/PipedreamHQ/pipedream/tree/master/components/qdrant)
| documentation/platforms/pipedream.md |
---
title: Ironclad Rivet
aliases: [ ../frameworks/rivet/ ]
---
# Ironclad Rivet
[Rivet](https://rivet.ironcladapp.com/) is an Integrated Development Environment (IDE) and library designed for creating AI agents using a visual, graph-based interface.
Qdrant is available as a [plugin](https://github.com/qdrant/rivet-plugin-qdrant) for building vector-search powered workflows in Rivet.
## Installation
- Open the plugins overlay at the top of the screen.
- Search for the official Qdrant plugin.
- Click the "Add" button to install it in your current project.
![Rivet plugin installation](/documentation/frameworks/rivet/installation.png)
## Setting up the connection
You can configure your Qdrant instance credentials in the Rivet settings after installing the plugin.
![Rivet plugin connection](/documentation/frameworks/rivet/connection.png)
Once you've configured your credentials, you can right-click on your workspace to add nodes from the plugin and get building!
![Rivet plugin nodes](/documentation/frameworks/rivet/node.png)
## Further Reading
- Rivet [Tutorial](https://rivet.ironcladapp.com/docs/tutorial).
- Rivet [Documentation](https://rivet.ironcladapp.com/docs).
- Plugin [Source Code](https://github.com/qdrant/rivet-plugin-qdrant)
| documentation/platforms/rivet.md |
---
title: DocsGPT
aliases: [ ../frameworks/docsgpt/ ]
---
# DocsGPT
[DocsGPT](https://docsgpt.arc53.com/) is an open-source documentation assistant that enables you to build conversational user experiences on top of your data.
Qdrant is supported as a vectorstore in DocsGPT to ingest and semantically retrieve documents.
## Configuration
Learn how to set up DocsGPT in their [Quickstart guide](https://docs.docsgpt.co.uk/Deploying/Quickstart).
You can configure DocsGPT with environment variables in a `.env` file.
To configure DocsGPT to use Qdrant as the vector store, set `VECTOR_STORE` to `"qdrant"`.
```bash
echo "VECTOR_STORE=qdrant" >> .env
```
DocsGPT includes a list of the Qdrant configuration options that you can set as environment variables [here](https://github.com/arc53/DocsGPT/blob/00dfb07b15602319bddb95089e3dab05fac56240/application/core/settings.py#L46-L59).
## Further reading
- [DocsGPT Reference](https://github.com/arc53/DocsGPT)
| documentation/platforms/docsgpt.md |
---
title: Platforms
weight: 15
---
## Platform Integrations
| Platform | Description |
| ------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [Apify](./apify/) | Platform to build web scrapers and automate web browser tasks. |
| [Bubble](./bubble) | Development platform for application development with a no-code interface |
| [BuildShip](./buildship) | Low-code visual builder to create APIs, scheduled jobs, and backend workflows. |
| [DocsGPT](./docsgpt/) | Tool for ingesting documentation sources and enabling conversations and queries. |
| [Make](./make/) | Cloud platform to build low-code workflows by integrating various software applications. |
| [N8N](./n8n/) | Platform for node-based, low-code workflow automation. |
| [Pipedream](./pipedream/) | Platform for connecting apps and developing event-driven automation. |
| [Portable.io](./portable/) | Cloud platform for developing and deploying ELT transformations. |
| [PrivateGPT](./privategpt/) | Tool to ask questions about your documents using local LLMs emphasising privacy. |
| [Rivet](./rivet/) | A visual programming environment for building AI agents with LLMs. |
| documentation/platforms/_index.md |
---
title: N8N
aliases: [ ../frameworks/n8n/ ]
---
# N8N
[N8N](https://n8n.io/) is an automation platform that allows you to build flexible workflows focused on deep data integration.
Qdrant is available as a vectorstore node in N8N for building AI-powered functionality within your workflows.
## Prerequisites
1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. A running N8N instance. You can learn more about using the N8N cloud or self-hosting [here](https://docs.n8n.io/choose-n8n/).
## Setting up the vectorstore
Select the Qdrant vectorstore from the list of nodes in your workflow editor.
![Qdrant n8n node](/documentation/frameworks/n8n/node.png)
You can now configure the vectorstore node according to your workflow requirements. The configuration options reference can be found [here](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#node-parameters).
![Qdrant Config](/documentation/frameworks/n8n/config.png)
Create a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/).
![Qdrant Credentials](/documentation/frameworks/n8n/credentials.png)
The vectorstore supports the following operations:
- Get Many - Get the top-ranked documents for a query.
- Insert documents - Add documents to the vectorstore.
- Retrieve documents - Retrieve documents for use with AI nodes.
## Further Reading
- N8N vectorstore [reference](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/).
- N8N AI-based workflows [reference](https://n8n.io/integrations/basic-llm-chain/).
- [Source Code](https://github.com/n8n-io/n8n/tree/master/packages/@n8n/nodes-langchain/nodes/vector_store/VectorStoreQdrant) | documentation/platforms/n8n.md |
---
title: Semantic Querying with Airflow and Astronomer
weight: 36
aliases:
- /documentation/examples/qdrant-airflow-astronomer/
---
# Semantic Querying with Airflow and Astronomer
| Time: 45 min | Level: Intermediate | | |
| ------------ | ------------------- | --- | --- |
In this tutorial, you will use Qdrant as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in [Apache Airflow](https://airflow.apache.org/), an open-source tool that lets you set up data-engineering workflows.
You will write the pipeline as a DAG (Directed Acyclic Graph) in Python. With this, you can leverage the powerful suite of Python's capabilities and libraries to achieve almost anything your data pipeline needs.
[Astronomer](https://www.astronomer.io/) is a managed platform that simplifies the process of developing and deploying Airflow projects via its easy-to-use CLI and extensive automation capabilities.
Airflow is useful when running operations in Qdrant based on data events or building parallel tasks for generating vector embeddings. By using Airflow, you can set up monitoring and alerts for your pipelines for full observability.
## Prerequisites
Please make sure you have the following ready:
- A running Qdrant instance. We'll be using a free instance from <https://cloud.qdrant.io>
- The Astronomer CLI. Find the installation instructions [here](https://docs.astronomer.io/astro/cli/install-cli).
- A [HuggingFace token](https://huggingface.co/docs/hub/en/security-tokens) to generate embeddings.
## Implementation
We'll be building a DAG that generates embeddings in parallel for our data corpus and performs semantic retrieval based on user input.
### Set up the project
The Astronomer CLI makes it very straightforward to set up the Airflow project:
```console
mkdir qdrant-airflow-tutorial && cd qdrant-airflow-tutorial
astro dev init
```
This command generates all of the project files you need to run Airflow locally. You can find a directory called `dags`, which is where we can place our Python DAG files.
To use Qdrant within Airflow, install the Qdrant Airflow provider by adding the following to the `requirements.txt` file:
```text
apache-airflow-providers-qdrant
```
### Configure credentials
We can set up provider connections using the Airflow UI, environment variables, or the `airflow_settings.yml` file.
Add the following to the `.env` file in the project. Replace the values as per your credentials.
```env
HUGGINGFACE_TOKEN="<YOUR_HUGGINGFACE_ACCESS_TOKEN>"
AIRFLOW_CONN_QDRANT_DEFAULT='{
"conn_type": "qdrant",
"host": "xyz-example.eu-central.aws.cloud.qdrant.io:6333",
"password": "<YOUR_QDRANT_API_KEY>"
}'
```
### Add the data corpus
Let's add some sample data to work with. Paste the following content into a file called `books.txt` within the `include` directory.
```text
1 | To Kill a Mockingbird (1960) | fiction | Harper Lee's Pulitzer Prize-winning novel explores racial injustice and moral growth through the eyes of young Scout Finch in the Deep South.
2 | Harry Potter and the Sorcerer's Stone (1997) | fantasy | J.K. Rowling's magical tale follows Harry Potter as he discovers his wizarding heritage and attends Hogwarts School of Witchcraft and Wizardry.
3 | The Great Gatsby (1925) | fiction | F. Scott Fitzgerald's classic novel delves into the glitz, glamour, and moral decay of the Jazz Age through the eyes of narrator Nick Carraway and his enigmatic neighbour, Jay Gatsby.
4 | 1984 (1949) | dystopian | George Orwell's dystopian masterpiece paints a chilling picture of a totalitarian society where individuality is suppressed and the truth is manipulated by a powerful regime.
5 | The Catcher in the Rye (1951) | fiction | J.D. Salinger's iconic novel follows disillusioned teenager Holden Caulfield as he navigates the complexities of adulthood and society's expectations in post-World War II America.
6 | Pride and Prejudice (1813) | romance | Jane Austen's beloved novel revolves around the lively and independent Elizabeth Bennet as she navigates love, class, and societal expectations in Regency-era England.
7 | The Hobbit (1937) | fantasy | J.R.R. Tolkien's adventure follows Bilbo Baggins, a hobbit who embarks on a quest with a group of dwarves to reclaim their homeland from the dragon Smaug.
8 | The Lord of the Rings (1954-1955) | fantasy | J.R.R. Tolkien's epic fantasy trilogy follows the journey of Frodo Baggins to destroy the One Ring and defeat the Dark Lord Sauron in the land of Middle-earth.
9 | The Alchemist (1988) | fiction | Paulo Coelho's philosophical novel follows Santiago, an Andalusian shepherd boy, on a journey of self-discovery and spiritual awakening as he searches for a hidden treasure.
10 | The Da Vinci Code (2003) | mystery/thriller | Dan Brown's gripping thriller follows symbologist Robert Langdon as he unravels clues hidden in art and history while trying to solve a murder mystery with far-reaching implications.
```
Now, the hacking part - writing our Airflow DAG!
### Write the DAG
We'll add the following content to a `books_recommend.py` file within the `dags` directory. Let's go over what each task does.
```python
import os
import requests
from airflow.decorators import dag, task
from airflow.models.baseoperator import chain
from airflow.models.param import Param
from airflow.providers.qdrant.hooks.qdrant import QdrantHook
from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator
from pendulum import datetime
from qdrant_client import models
QDRANT_CONNECTION_ID = "qdrant_default"
DATA_FILE_PATH = "include/books.txt"
COLLECTION_NAME = "airflow_tutorial_collection"
EMBEDDING_MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"
EMBEDDING_DIMENSION = 384
SIMILARITY_METRIC = models.Distance.COSINE
def embed(text: str) -> list:
    HUGGINGFACE_URL = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{EMBEDDING_MODEL_ID}"
    response = requests.post(
        HUGGINGFACE_URL,
headers={"Authorization": f"Bearer {os.getenv('HUGGINGFACE_TOKEN')}"},
json={"inputs": [text], "options": {"wait_for_model": True}},
)
return response.json()[0]
@dag(
dag_id="books_recommend",
start_date=datetime(2023, 10, 18),
schedule=None,
catchup=False,
params={"preference": Param("Something suspenseful and thrilling.", type="string")},
)
def recommend_book():
@task
def import_books(text_file_path: str) -> list:
data = []
with open(text_file_path, "r") as f:
for line in f:
_, title, genre, description = line.split("|")
data.append(
{
"title": title.strip(),
"genre": genre.strip(),
"description": description.strip(),
}
)
return data
@task
def init_collection():
hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID)
        if not hook.conn.collection_exists(COLLECTION_NAME):
hook.conn.create_collection(
COLLECTION_NAME,
vectors_config=models.VectorParams(
size=EMBEDDING_DIMENSION, distance=SIMILARITY_METRIC
),
)
@task
def embed_description(data: dict) -> list:
return embed(data["description"])
books = import_books(text_file_path=DATA_FILE_PATH)
embeddings = embed_description.expand(data=books)
qdrant_vector_ingest = QdrantIngestOperator(
conn_id=QDRANT_CONNECTION_ID,
task_id="qdrant_vector_ingest",
collection_name=COLLECTION_NAME,
payload=books,
vectors=embeddings,
)
@task
def embed_preference(**context) -> list:
user_mood = context["params"]["preference"]
response = embed(text=user_mood)
return response
@task
def search_qdrant(
preference_embedding: list,
) -> None:
hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID)
result = hook.conn.query_points(
collection_name=COLLECTION_NAME,
query=preference_embedding,
limit=1,
with_payload=True,
).points
print("Book recommendation: " + result[0].payload["title"])
print("Description: " + result[0].payload["description"])
chain(
init_collection(),
qdrant_vector_ingest,
search_qdrant(embed_preference()),
)
recommend_book()
```
`import_books`: This task reads a text file containing information about the books (like title, genre, and description), and then returns the data as a list of dictionaries.
`init_collection`: This task initializes a collection in the Qdrant database, where we will store the vector representations of the book descriptions.
`embed_description`: This is a dynamic task that creates one mapped task instance for each book in the list. The task uses the `embed` function to generate vector embeddings for each description. To use a different embedding model, you can adjust the `EMBEDDING_MODEL_ID` and `EMBEDDING_DIMENSION` values.
`embed_preference`: Here, we take a user's input and convert it into a vector using the same pre-trained model used for the book descriptions.
`qdrant_vector_ingest`: This task ingests the book data into the Qdrant collection using the [QdrantIngestOperator](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/1.0.0/), associating each book description with its corresponding vector embeddings.
`search_qdrant`: Finally, this task performs a search in the Qdrant database using the vectorized user preference. It finds the most relevant book in the collection based on vector similarity.
### Run the DAG
Head over to your terminal and run:

```console
astro dev start
```
A local Airflow container should spawn. You can now access the Airflow UI at <http://localhost:8080>. Visit our DAG by clicking on `books_recommend`.
![DAG](/documentation/examples/airflow/demo-dag.png)
Hit the PLAY button on the right to run the DAG. You'll be asked for input about your preference, with the default value already filled in.
![Preference](/documentation/examples/airflow/preference-input.png)
After your DAG run completes, you should be able to see the output of your search in the logs of the `search_qdrant` task.
![Output](/documentation/examples/airflow/output.png)
There you have it, an Airflow pipeline that interfaces with Qdrant! Feel free to fiddle around and explore Airflow. There are references below that might come in handy.
## Further reading
- [Introduction to Airflow](https://docs.astronomer.io/learn/intro-to-airflow)
- [Airflow Concepts](https://docs.astronomer.io/learn/category/airflow-concepts)
- [Airflow Reference](https://airflow.apache.org/docs/)
- [Astronomer Documentation](https://docs.astronomer.io/)
| documentation/send-data/qdrant-airflow-astronomer.md |
---
title: Qdrant on Databricks
weight: 36
aliases:
- /documentation/examples/databricks/
---
# Qdrant on Databricks
| Time: 30 min | Level: Intermediate | [Complete Notebook](https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/4750876096379825/93425612168199/6949977306828869/latest.html) |
| ------------ | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
[Databricks](https://www.databricks.com/) is a unified analytics platform for working with big data and AI. It's built around Apache Spark, a powerful open-source distributed computing system well-suited for processing large-scale datasets and performing complex analytics tasks.
Apache Spark is designed to scale horizontally, meaning it can handle expensive operations like generating vector embeddings by distributing computation across a cluster of machines. This scalability is crucial when dealing with large datasets.
In this example, we will demonstrate how to vectorize a dataset with dense and sparse embeddings using Qdrant's [FastEmbed](https://qdrant.github.io/fastembed/) library. We will then load this vectorized data into a Qdrant cluster using the [Qdrant Spark connector](/documentation/frameworks/spark/) on Databricks.
### Setting up a Databricks project
- Set up a **[Databricks cluster](https://docs.databricks.com/en/compute/configure.html)** following the official documentation guidelines.
- Install the **[Qdrant Spark connector](/documentation/frameworks/spark/)** as a library:
- Navigate to the `Libraries` section in your cluster dashboard.
- Click on `Install New` at the top-right to open the library installation modal.
- Search for `io.qdrant:spark:VERSION` in the Maven packages and click on `Install`.
![Install the library](/documentation/examples/databricks/library-install.png)
- Create a new **[Databricks notebook](https://docs.databricks.com/en/notebooks/index.html)** on your cluster to begin working with your data and libraries.
### Download a dataset
- **Install the required dependencies:**
```python
%pip install fastembed datasets
```
- **Download the dataset:**
```python
from datasets import load_dataset
dataset_name = "tasksource/med"
dataset = load_dataset(dataset_name, split="train")
# We'll use the first 100 entries from this dataset and exclude some unused columns.
dataset = dataset.select(range(100)).remove_columns(["gold_label", "genre"])
```
- **Convert the dataset into a Spark dataframe:**
```python
dataset.to_parquet("/dbfs/pq.pq")
dataset_df = spark.read.parquet("file:/dbfs/pq.pq")
```
### Vectorizing the data
In this section, we'll be generating both dense and sparse vectors for our rows using [FastEmbed](https://qdrant.github.io/fastembed/). We'll create a user-defined function (UDF) to handle this step.
#### Creating the vectorization function
```python
from fastembed import TextEmbedding, SparseTextEmbedding
def vectorize(partition_data):
# Initialize dense and sparse models
dense_model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
sparse_model = SparseTextEmbedding(model_name="Qdrant/bm25")
for row in partition_data:
# Generate dense and sparse vectors
dense_vector = next(dense_model.embed(row.sentence1))
sparse_vector = next(sparse_model.embed(row.sentence2))
yield [
row.sentence1, # 1st column: original text
row.sentence2, # 2nd column: original text
dense_vector.tolist(), # 3rd column: dense vector
sparse_vector.indices.tolist(), # 4th column: sparse vector indices
sparse_vector.values.tolist(), # 5th column: sparse vector values
]
```
We're using the [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) model for dense embeddings and [BM25](https://huggingface.co/Qdrant/bm25) for sparse embeddings.
#### Applying the UDF on our dataframe
Next, let's apply our `vectorize` UDF on our Spark dataframe to generate embeddings.
```python
embeddings = dataset_df.rdd.mapPartitions(vectorize)
```
The `mapPartitions()` method returns a [Resilient Distributed Dataset (RDD)](https://www.databricks.com/glossary/what-is-rdd) which should then be converted back to a Spark dataframe.
#### Building the new Spark dataframe with the vectorized data
We'll now create a new Spark dataframe (`embeddings_df`) with the vectorized data using the specified schema.
```python
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, FloatType, IntegerType
# Define the schema for the new dataframe
schema = StructType([
StructField("sentence1", StringType()),
StructField("sentence2", StringType()),
StructField("dense_vector", ArrayType(FloatType())),
StructField("sparse_vector_indices", ArrayType(IntegerType())),
StructField("sparse_vector_values", ArrayType(FloatType()))
])
# Create the new dataframe with the vectorized data
embeddings_df = spark.createDataFrame(data=embeddings, schema=schema)
```
### Uploading the data to Qdrant
- **Create a Qdrant collection:**
- [Follow the documentation](/documentation/concepts/collections/#create-a-collection) to create a collection with the appropriate configurations. Here's an example request to support both dense and sparse vectors:
```json
PUT /collections/{collection_name}
{
"vectors": {
"dense": {
"size": 384,
"distance": "Cosine"
}
},
"sparse_vectors": {
"sparse": {}
}
}
```
- **Upload the dataframe to Qdrant:**
```python
options = {
"qdrant_url": "<QDRANT_GRPC_URL>",
"api_key": "<QDRANT_API_KEY>",
"collection_name": "<QDRANT_COLLECTION_NAME>",
"vector_fields": "dense_vector",
"vector_names": "dense",
"sparse_vector_value_fields": "sparse_vector_values",
"sparse_vector_index_fields": "sparse_vector_indices",
"sparse_vector_names": "sparse",
"schema": embeddings_df.schema.json(),
}
embeddings_df.write.format("io.qdrant.spark.Qdrant").options(**options).mode(
"append"
).save()
```
<aside role="status">
<p>You can find the list of the Spark connector configuration options <a href="/documentation/frameworks/spark/#configuration-options" target="_blank">here</a>.</p>
</aside>
Make sure to replace the placeholder values (`<QDRANT_GRPC_URL>`, `<QDRANT_API_KEY>`, `<QDRANT_COLLECTION_NAME>`) with your actual values. If the `id_field` option is not specified, the Qdrant Spark connector generates random UUIDs for each point.
The command output you should see is similar to:
```console
Command took 40.37 seconds -- by xxxxx90@xxxxxx.com at 4/17/2024, 12:13:28 PM on fastembed
```
### Conclusion
That wraps up our tutorial! Feel free to explore more functionalities and experiments with different models, parameters, and features available in Databricks, Spark, and Qdrant.
Happy data engineering!
| documentation/send-data/databricks.md |
---
title: How to Setup Seamless Data Streaming with Kafka and Qdrant
weight: 49
aliases:
- /examples/data-streaming-kafka-qdrant/
---
# Setup Data Streaming with Kafka via Confluent
**Author:** [M K Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/), research scholar at [IIITDM, Kurnool](https://iiitk.ac.in). Specialist in hallucination mitigation techniques and RAG methodologies.
• [GitHub](https://github.com/pavanjava) • [Medium](https://medium.com/@manthapavankumar11)
## Introduction
This guide will walk you through the detailed steps of installing and setting up the [Qdrant Sink Connector](https://github.com/qdrant/qdrant-kafka), building the necessary infrastructure, and creating a practical playground application. By the end of this article, you will have a deep understanding of how to leverage this powerful integration to streamline your data workflows, ultimately enhancing the performance and capabilities of your data-driven real-time semantic search and RAG applications.
In this example, original data will be sourced from Azure Blob Storage and MongoDB.
![1.webp](/documentation/examples/data-streaming-kafka-qdrant/1.webp)
*Figure 1: [Real-time Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/) with Kafka and Qdrant.*
## The Architecture

### Source Systems
The architecture begins with the **source systems**, represented by MongoDB and Azure Blob Storage. These systems are vital for storing and managing raw data. MongoDB, a popular NoSQL database, is known for its flexibility in handling various data formats and its capability to scale horizontally. It is widely used for applications that require high performance and scalability. Azure Blob Storage, on the other hand, is Microsoft’s object storage solution for the cloud. It is designed for storing massive amounts of unstructured data, such as text or binary data. The data from these sources is extracted using **source connectors**, which are responsible for capturing changes in real-time and streaming them into Kafka.
### Kafka
At the heart of this architecture lies **Kafka**, a distributed event streaming platform capable of handling trillions of events a day. Kafka acts as a central hub where data from various sources can be ingested, processed, and distributed to various downstream systems. Its fault-tolerant and scalable design ensures that data can be reliably transmitted and processed in real-time. Kafka’s capability to handle high-throughput, low-latency data streams makes it an ideal choice for real-time data processing and analytics. The use of **Confluent** enhances Kafka’s functionalities, providing additional tools and services for managing Kafka clusters and stream processing.
### Qdrant
The processed data is then routed to **Qdrant**, a highly scalable vector search engine designed for similarity searches. Qdrant excels at managing and searching through high-dimensional vector data, which is essential for applications involving machine learning and AI, such as recommendation systems, image recognition, and natural language processing. The **Qdrant Sink Connector** for Kafka plays a pivotal role here, enabling seamless integration between Kafka and Qdrant. This connector allows for the real-time ingestion of vector data into Qdrant, ensuring that the data is always up-to-date and ready for high-performance similarity searches.
### Integration and Pipeline Importance
The integration of these components forms a powerful and efficient data streaming pipeline. The **Qdrant Sink Connector** ensures that the data flowing through Kafka is continuously ingested into Qdrant without any manual intervention. This real-time integration is crucial for applications that rely on the most current data for decision-making and analysis. By combining the strengths of MongoDB and Azure Blob Storage for data storage, Kafka for data streaming, and Qdrant for vector search, this pipeline provides a robust solution for managing and processing large volumes of data in real-time. The architecture’s scalability, fault-tolerance, and real-time processing capabilities are key to its effectiveness, making it a versatile solution for modern data-driven applications.
## Installation of Confluent Kafka Platform
To install the Confluent Kafka Platform (self-managed locally), follow these 3 simple steps:
**Download and Extract the Distribution Files:**
- Visit [Confluent Installation Page](https://www.confluent.io/installation/).
- Download the distribution files (tar, zip, etc.).
- Extract the downloaded file using:
```bash
tar -xvf confluent-<version>.tar.gz
```
or
```bash
unzip confluent-<version>.zip
```
**Configure Environment Variables:**
```bash
# Set CONFLUENT_HOME to the installation directory:
export CONFLUENT_HOME=/path/to/confluent-<version>
# Add Confluent binaries to your PATH
export PATH=$CONFLUENT_HOME/bin:$PATH
```
**Run Confluent Platform Locally:**
```bash
# Start the Confluent Platform services:
confluent local start
# Stop the Confluent Platform services:
confluent local stop
```
## Installation of Qdrant
To install and run Qdrant (self-managed locally), you can use Docker, which simplifies the process. First, ensure you have Docker installed on your system. Then, you can pull the Qdrant image from Docker Hub and run it with the following commands:
```bash
docker pull qdrant/qdrant
docker run -p 6334:6334 -p 6333:6333 qdrant/qdrant
```
This will download the Qdrant image and start a Qdrant instance accessible at `http://localhost:6333`. For more detailed instructions and alternative installation methods, refer to the [Qdrant installation documentation](https://qdrant.tech/documentation/quick-start/).
## Installation of the Qdrant-Kafka Sink Connector
To install the Qdrant Kafka connector using [Confluent Hub](https://www.confluent.io/hub/), you can utilize the straightforward `confluent-hub install` command. This command simplifies the process by eliminating the need for manual configuration file manipulations. To install the Qdrant Kafka connector version 1.1.0, execute the following command in your terminal:
```bash
confluent-hub install qdrant/qdrant-kafka:1.1.0
```
This command downloads and installs the specified connector directly from Confluent Hub into your Confluent Platform or Kafka Connect environment. The installation process ensures that all necessary dependencies are handled automatically, allowing for a seamless integration of the Qdrant Kafka connector with your existing setup. Once installed, the connector can be configured and managed using the Confluent Control Center or the Kafka Connect REST API, enabling efficient data streaming between Kafka and Qdrant without the need for intricate manual setup.
![2.webp](/documentation/examples/data-streaming-kafka-qdrant/2.webp)
*Figure 2: Local Confluent platform showing the Source and Sink connectors after installation.*
Once the connector is installed, configure it as shown below. Keep in mind that `key.converter` and `value.converter` are critical for Kafka to safely deliver messages from the topic to Qdrant.
```json
{
"name": "QdrantSinkConnectorConnector_0",
"config": {
"value.converter.schemas.enable": "false",
"name": "QdrantSinkConnectorConnector_0",
"connector.class": "io.qdrant.kafka.QdrantSinkConnector",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"topics": "topic_62,qdrant_kafka.docs",
"errors.deadletterqueue.topic.name": "dead_queue",
"errors.deadletterqueue.topic.replication.factor": "1",
"qdrant.grpc.url": "http://localhost:6334",
"qdrant.api.key": "************"
}
}
```
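Besides the Control Center UI, the same configuration can be registered programmatically through the Kafka Connect REST API (by default on port 8083). The sketch below assumes a local Connect worker; the payload mirrors the JSON above:

```python
import json

import requests

CONNECT_URL = "http://localhost:8083/connectors"  # default Kafka Connect REST endpoint

def build_connector_payload() -> dict:
    # Mirrors the sink connector configuration shown above.
    return {
        "name": "QdrantSinkConnectorConnector_0",
        "config": {
            "connector.class": "io.qdrant.kafka.QdrantSinkConnector",
            "key.converter": "org.apache.kafka.connect.storage.StringConverter",
            "value.converter": "org.apache.kafka.connect.json.JsonConverter",
            "value.converter.schemas.enable": "false",
            "topics": "topic_62,qdrant_kafka.docs",
            "qdrant.grpc.url": "http://localhost:6334",
            "qdrant.api.key": "************",
        },
    }

def register_connector() -> None:
    # POST the payload to a running Kafka Connect worker.
    response = requests.post(
        CONNECT_URL,
        headers={"Content-Type": "application/json"},
        data=json.dumps(build_connector_payload()),
    )
    response.raise_for_status()
```

Calling `register_connector()` against a running worker creates the connector; a `409` response means a connector with that name already exists.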
## Installation of MongoDB
For Kafka to connect to MongoDB as a source, your MongoDB instance must run in `replicaSet` mode. Below is a `docker compose` file that spins up a single-node `replicaSet` instance of MongoDB.
```yaml
version: "3.8"
services:
mongo1:
image: mongo:7.0
command: ["--replSet", "rs0", "--bind_ip_all", "--port", "27017"]
ports:
- 27017:27017
healthcheck:
test: echo "try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'host.docker.internal:27017'}]}) }" | mongosh --port 27017 --quiet
interval: 5s
timeout: 30s
start_period: 0s
start_interval: 1s
retries: 30
volumes:
- "mongo1_data:/data/db"
- "mongo1_config:/data/configdb"
volumes:
mongo1_data:
mongo1_config:
```
Similarly, install and configure the source connector:
```bash
confluent-hub install mongodb/kafka-connect-mongodb:latest
```
After installing the `MongoDB` connector, the connector configuration should look like this:
```json
{
"name": "MongoSourceConnectorConnector_0",
"config": {
"connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"connection.uri": "mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true",
"database": "qdrant_kafka",
"collection": "docs",
"publish.full.document.only": "true",
"topic.namespace.map": "{\"*\":\"qdrant_kafka.docs\"}",
"copy.existing": "true"
}
}
```
## Playground Application
With the infrastructure fully set up, it's time to create a simple application and verify our setup. The objective of the application is straightforward: data is inserted into MongoDB and eventually gets ingested into Qdrant via [Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/).
`requirements.txt`
```bash
fastembed==0.3.1
pymongo==4.8.0
qdrant_client==1.10.1
```
`project_root_folder/main.py`
This is just sample code; nevertheless, it can be extended to millions of operations depending on your use case.
```python
from pymongo import MongoClient
from utils.app_utils import create_qdrant_collection
from fastembed import TextEmbedding
collection_name: str = 'test'
embed_model_name: str = 'snowflake/snowflake-arctic-embed-s'
```
```python
# Step 0: create qdrant_collection
create_qdrant_collection(collection_name=collection_name, embed_model=embed_model_name)
# Step 1: Connect to MongoDB
client = MongoClient('mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true')
# Step 2: Select Database
db = client['qdrant_kafka']
# Step 3: Select Collection
collection = db['docs']
# Step 4: Create a Document to Insert
description = "qdrant is a highly available vector search engine"
embedding_model = TextEmbedding(model_name=embed_model_name)
vector = next(embedding_model.embed(documents=description)).tolist()
document = {
"collection_name": collection_name,
"id": 1,
"vector": vector,
"payload": {
"name": "qdrant",
"description": description,
"url": "https://qdrant.tech/documentation"
}
}
# Step 5: Insert the Document into the Collection
result = collection.insert_one(document)
# Step 6: Print the Inserted Document's ID
print("Inserted document ID:", result.inserted_id)
```
`project_root_folder/utils/app_utils.py`
```python
from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333", api_key="<YOUR_KEY>")
dimension_dict = {"snowflake/snowflake-arctic-embed-s": 384}
def create_qdrant_collection(collection_name: str, embed_model: str):
if not client.collection_exists(collection_name=collection_name):
client.create_collection(
collection_name=collection_name,
vectors_config=models.VectorParams(size=dimension_dict.get(embed_model), distance=models.Distance.COSINE)
)
```
Before we run the application, below is the state of MongoDB and Qdrant databases.
![3.webp](/documentation/examples/data-streaming-kafka-qdrant/3.webp)
*Figure 3: Initial state: no collection named `test` and no data in the `docs` collection of MongoDB.*
Once you run the code, the data goes into MongoDB, the CDC pipeline is triggered, and Qdrant eventually receives the data.
![4.webp](/documentation/examples/data-streaming-kafka-qdrant/4.webp)
*Figure 4: The `test` Qdrant collection is created automatically.*
![5.webp](/documentation/examples/data-streaming-kafka-qdrant/5.webp)
*Figure 5: Data is inserted into both MongoDB and Qdrant.*
## Conclusion
In conclusion, the integration of **Kafka** with **Qdrant** using the **Qdrant Sink Connector** provides a seamless and efficient solution for real-time data streaming and processing. This setup not only enhances the capabilities of your data pipeline but also ensures that high-dimensional vector data is continuously indexed and readily available for similarity searches. By following the installation and setup guide, you can easily establish a robust data flow from your **source systems** like **MongoDB** and **Azure Blob Storage**, through **Kafka**, and into **Qdrant**. This architecture empowers modern applications to leverage real-time data insights and advanced search capabilities, paving the way for innovative data-driven solutions. | documentation/send-data/data-streaming-kafka-qdrant.md |
---
title: Send Data to Qdrant
weight: 18
---
## How to Send Your Data to a Qdrant Cluster
| Example | Description | Stack |
|---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------|
| [Pinecone to Qdrant Data Transfer](https://githubtocolab.com/qdrant/examples/blob/master/data-migration/from-pinecone-to-qdrant.ipynb) | Migrate your vector data from Pinecone to Qdrant. | Qdrant, Vector-io |
| [Stream Data to Qdrant with Kafka](../send-data/data-streaming-kafka-qdrant/) | Use Confluent to Stream Data to Qdrant via Managed Kafka. | Qdrant, Kafka |
| [Qdrant on Databricks](../send-data/databricks/) | Learn how to use Qdrant on Databricks using the Spark connector | Qdrant, Databricks, Apache Spark |
| [Qdrant with Airflow and Astronomer](../send-data/qdrant-airflow-astronomer/) | Build a semantic querying system using Airflow and Astronomer | Qdrant, Airflow, Astronomer | | documentation/send-data/_index.md |
---
title: Snowflake Models
weight: 2900
---
# Snowflake
Qdrant supports working with [Snowflake](https://www.snowflake.com/blog/introducing-snowflake-arctic-embed-snowflakes-state-of-the-art-text-embedding-family-of-models/) text embedding models. You can find all the available models on [HuggingFace](https://huggingface.co/Snowflake).
### Setting up the Qdrant and Snowflake models
```python
from qdrant_client import QdrantClient
from fastembed import TextEmbedding
qclient = QdrantClient(":memory:")
embedding_model = TextEmbedding("snowflake/snowflake-arctic-embed-s")
texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
```typescript
import {QdrantClient} from '@qdrant/js-client-rest';
import { pipeline } from '@xenova/transformers';
const client = new QdrantClient({ url: 'http://localhost:6333' });
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-s');
const texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
The following example shows how to embed documents with the [`snowflake-arctic-embed-s`](https://huggingface.co/Snowflake/snowflake-arctic-embed-s) model that generates sentence embeddings of size 384.
### Embedding documents
```python
embeddings = embedding_model.embed(texts)
```
```typescript
const embeddings = await extractor(texts, { normalize: true, pooling: 'cls' });
```
### Converting the model outputs to Qdrant points
```python
from qdrant_client.models import PointStruct
points = [
PointStruct(
id=idx,
vector=embedding,
payload={"text": text},
)
for idx, (embedding, text) in enumerate(zip(embeddings, texts))
]
```
```typescript
let points = embeddings.tolist().map((embedding, i) => {
return {
id: i,
vector: embedding,
payload: {
text: texts[i]
}
}
});
```
### Creating a collection to insert the documents
```python
from qdrant_client.models import VectorParams, Distance
COLLECTION_NAME = "example_collection"
qclient.create_collection(
COLLECTION_NAME,
vectors_config=VectorParams(
size=384,
distance=Distance.COSINE,
),
)
qclient.upsert(COLLECTION_NAME, points)
```
```typescript
const COLLECTION_NAME = "example_collection"
await client.createCollection(COLLECTION_NAME, {
vectors: {
size: 384,
distance: 'Cosine',
}
});
await client.upsert(COLLECTION_NAME, {
wait: true,
points
});
```
### Searching for documents with Qdrant
Once the documents are added, you can search for the most relevant documents.
```python
query_embedding = next(embedding_model.query_embed("What is the best to use for vector search scaling?"))
qclient.search(
collection_name=COLLECTION_NAME,
query_vector=query_embedding,
)
```
```typescript
const query_embedding = await extractor("What is the best to use for vector search scaling?", {
normalize: true,
pooling: 'cls'
});
await client.search(COLLECTION_NAME, {
vector: query_embedding.tolist()[0],
});
```
| documentation/embeddings/snowflake.md |
---
title: Watsonx
weight: 3000
aliases:
- /documentation/examples/watsonx-search/
- /documentation/tutorials/watsonx-search/
- /documentation/integrations/watsonx/
---
# Using Watsonx with Qdrant
Watsonx is IBM's platform for AI embeddings, focusing on enterprise-level text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant.
## Installation
You can install the required package using the following pip command:
```bash
pip install ibm-watsonx-ai
```
## Code Example
```python
import qdrant_client
from qdrant_client.models import Batch
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Embeddings

# Initialize a watsonx.ai embedding model; credentials, region URL,
# and project ID come from your IBM Cloud account
model = Embeddings(
    model_id="ibm/slate-30m-english-rtrvr",
    credentials=Credentials(
        api_key="<YOUR_API_KEY>",
        url="https://us-south.ml.cloud.ibm.com",
    ),
    project_id="<YOUR_PROJECT_ID>",
)

# Generate embeddings for enterprise data
text = "Watsonx provides enterprise-level NLP solutions."
embeddings = model.embed_query(text)
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="EnterpriseData",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/watsonx.md |
---
title: Instruct
weight: 1800
---
# Using Instruct with Qdrant
Instruct is a specialized provider offering detailed embeddings for instructional content, which can be effectively used with Qdrant. With INSTRUCTOR, every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training.
## Installation
```bash
pip install InstructorEmbedding sentence-transformers
```
Below is an example of how to obtain embeddings using Instruct's API and store them in a Qdrant collection:
```python
import qdrant_client
from qdrant_client.models import Batch
from InstructorEmbedding import INSTRUCTOR

# Initialize the INSTRUCTOR model
model = INSTRUCTOR("hkunlp/instructor-base")

# Generate an embedding; each input pairs an instruction with the text
text = "Instruct provides detailed embeddings for learning content."
instruction = "Represent the instructional content for retrieval:"
embeddings = model.encode([[instruction, text]])[0].tolist()
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="LearningContent",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/instruct.md |
---
title: GPT4All
weight: 1700
---
# Using GPT4All with Qdrant
GPT4All offers a range of large language models that can be fine-tuned for various applications. GPT4All runs large language models (LLMs) privately on everyday desktops & laptops.
No API calls or GPUs required - you can just download the application and get started. Use GPT4All in Python to program with LLMs implemented with the llama.cpp backend and Nomic's C backend.
## Installation
You can install the required package using the following pip command:
```bash
pip install gpt4all
```
Here is how you might connect to GPT4ALL using Qdrant:
```python
import qdrant_client
from qdrant_client.models import Batch
from gpt4all import Embed4All

# Initialize the GPT4All embedding model
embedder = Embed4All()

# Generate embeddings for a text
text = "GPT4All enables open-source AI applications."
embeddings = embedder.embed(text)
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="OpenSourceAI",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/gpt4all.md |
---
title: Voyage AI
weight: 3200
---
# Voyage AI
Qdrant supports working with [Voyage AI](https://voyageai.com/) embeddings. The supported models' list can be found [here](https://docs.voyageai.com/docs/embeddings).
You can generate an API key from the [Voyage AI dashboard](<https://dash.voyageai.com/>) to authenticate the requests.
### Setting up the Qdrant and Voyage clients
```python
from qdrant_client import QdrantClient
import voyageai
VOYAGE_API_KEY = "<YOUR_VOYAGEAI_API_KEY>"
qclient = QdrantClient(":memory:")
vclient = voyageai.Client(api_key=VOYAGE_API_KEY)
texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
```typescript
import {QdrantClient} from '@qdrant/js-client-rest';
const VOYAGEAI_BASE_URL = "https://api.voyageai.com/v1/embeddings"
const VOYAGEAI_API_KEY = "<YOUR_VOYAGEAI_API_KEY>"
const client = new QdrantClient({ url: 'http://localhost:6333' });
const headers = {
"Authorization": "Bearer " + VOYAGEAI_API_KEY,
"Content-Type": "application/json"
}
const texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
The following example shows how to embed documents with the [`voyage-large-2`](https://docs.voyageai.com/docs/embeddings#model-choices) model that generates sentence embeddings of size 1536.
### Embedding documents
```python
response = vclient.embed(texts, model="voyage-large-2", input_type="document")
```
```typescript
let body = {
"input": texts,
"model": "voyage-large-2",
"input_type": "document",
}
let response = await fetch(VOYAGEAI_BASE_URL, {
method: "POST",
body: JSON.stringify(body),
headers
});
let response_body = await response.json();
```
### Converting the model outputs to Qdrant points
```python
from qdrant_client.models import PointStruct
points = [
PointStruct(
id=idx,
vector=embedding,
payload={"text": text},
)
for idx, (embedding, text) in enumerate(zip(response.embeddings, texts))
]
```
```typescript
let points = response_body.data.map((data, i) => {
return {
id: i,
vector: data.embedding,
payload: {
text: texts[i]
}
}
});
```
### Creating a collection to insert the documents
```python
from qdrant_client.models import VectorParams, Distance
COLLECTION_NAME = "example_collection"
qclient.create_collection(
COLLECTION_NAME,
vectors_config=VectorParams(
size=1536,
distance=Distance.COSINE,
),
)
qclient.upsert(COLLECTION_NAME, points)
```
```typescript
const COLLECTION_NAME = "example_collection"
await client.createCollection(COLLECTION_NAME, {
vectors: {
size: 1536,
distance: 'Cosine',
}
});
await client.upsert(COLLECTION_NAME, {
wait: true,
points
});
```
### Searching for documents with Qdrant
Once the documents are added, you can search for the most relevant documents.
```python
response = vclient.embed(
["What is the best to use for vector search scaling?"],
model="voyage-large-2",
input_type="query",
)
qclient.search(
collection_name=COLLECTION_NAME,
query_vector=response.embeddings[0],
)
```
```typescript
body = {
"input": ["What is the best to use for vector search scaling?"],
"model": "voyage-large-2",
"input_type": "query",
};
response = await fetch(VOYAGEAI_BASE_URL, {
method: "POST",
body: JSON.stringify(body),
headers
});
response_body = await response.json();
await client.search(COLLECTION_NAME, {
vector: response_body.data[0].embedding,
});
```
| documentation/embeddings/voyage.md |
---
title: Together AI
weight: 3000
---
# Using Together AI with Qdrant
Together AI focuses on collaborative AI embeddings that enhance multi-user search scenarios when integrated with Qdrant.
## Installation
You can install the required package using the following pip command:
```bash
pip install together
```
## Integration Example
```python
import qdrant_client
from qdrant_client.models import Batch
from together import Together

# Initialize the Together AI client (assumes TOGETHER_API_KEY is set)
together_client = Together()

# Generate embeddings for collaborative content
text = "Together AI enhances collaborative content search."
response = together_client.embeddings.create(
    model="togethercomputer/m2-bert-80M-8k-retrieval", input=text
)
embeddings = response.data[0].embedding
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="CollaborativeContent",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/togetherai.md |
---
title: OpenAI
weight: 2700
aliases: [ ../integrations/openai/ ]
---
# OpenAI
Qdrant supports working with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings).
There is an official OpenAI Python package that simplifies obtaining them, and it can be installed with pip:
```bash
pip install openai
```
### Setting up the OpenAI and Qdrant clients
```python
import openai
import qdrant_client
openai_client = openai.Client(
api_key="<YOUR_API_KEY>"
)
client = qdrant_client.QdrantClient(":memory:")
texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
The following example shows how to embed a document with the `text-embedding-3-small` model that generates sentence embeddings of size 1536. You can find the list of all supported models [here](https://platform.openai.com/docs/models/embeddings).
### Embedding a document
```python
embedding_model = "text-embedding-3-small"
result = openai_client.embeddings.create(input=texts, model=embedding_model)
```
### Converting the model outputs to Qdrant points
```python
from qdrant_client.models import PointStruct
points = [
PointStruct(
id=idx,
vector=data.embedding,
payload={"text": text},
)
for idx, (data, text) in enumerate(zip(result.data, texts))
]
```
### Creating a collection to insert the documents
```python
from qdrant_client.models import VectorParams, Distance
collection_name = "example_collection"
client.create_collection(
collection_name,
vectors_config=VectorParams(
size=1536,
distance=Distance.COSINE,
),
)
client.upsert(collection_name, points)
```
## Searching for documents with Qdrant
Once the documents are indexed, you can search for the most relevant documents using the same model.
```python
client.search(
collection_name=collection_name,
query_vector=openai_client.embeddings.create(
input=["What is the best to use for vector search scaling?"],
model=embedding_model,
)
.data[0]
.embedding,
)
```
## Using OpenAI Embedding Models with Qdrant's Binary Quantization
You can use OpenAI embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
|Method|Dimensionality|Test Dataset|Recall|Oversampling|
|-|-|-|-|-|
|OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
|OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x|
| documentation/embeddings/openai.md |
---
title: AWS Bedrock
weight: 1000
---
# Bedrock Embeddings
You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
You'll need the following information from your AWS account:
- Region
- Access key ID
- Secret key
To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key).
With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) which produces sentence embeddings of size 1536.
```python
# Install the required dependencies
# pip install boto3 qdrant_client
import json
import boto3
from qdrant_client import QdrantClient, models
session = boto3.Session()
bedrock_client = session.client(
"bedrock-runtime",
region_name="<YOUR_AWS_REGION>",
aws_access_key_id="<YOUR_AWS_ACCESS_KEY_ID>",
aws_secret_access_key="<YOUR_AWS_SECRET_KEY>",
)
qdrant_client = QdrantClient(url="http://localhost:6333")
qdrant_client.create_collection(
"{collection_name}",
vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
)
body = json.dumps({"inputText": "Some text to generate embeddings for"})
response = bedrock_client.invoke_model(
body=body,
modelId="amazon.titan-embed-text-v1",
accept="application/json",
contentType="application/json",
)
response_body = json.loads(response.get("body").read())
qdrant_client.upsert(
"{collection_name}",
points=[models.PointStruct(id=1, vector=response_body["embedding"])],
)
```
```javascript
// Install the required dependencies
// npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest
import {
BedrockRuntimeClient,
InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";
import { QdrantClient } from '@qdrant/js-client-rest';
const main = async () => {
const bedrockClient = new BedrockRuntimeClient({
region: "<YOUR_AWS_REGION>",
credentials: {
      accessKeyId: "<YOUR_AWS_ACCESS_KEY_ID>",
secretAccessKey: "<YOUR_AWS_SECRET_KEY>",
},
});
const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });
await qdrantClient.createCollection("{collection_name}", {
vectors: {
size: 1536,
distance: 'Cosine',
}
});
const response = await bedrockClient.send(
new InvokeModelCommand({
modelId: "amazon.titan-embed-text-v1",
body: JSON.stringify({
inputText: "Some text to generate embeddings for",
}),
contentType: "application/json",
accept: "application/json",
})
);
const body = new TextDecoder().decode(response.body);
await qdrantClient.upsert("{collection_name}", {
points: [
{
id: 1,
vector: JSON.parse(body).embedding,
},
],
});
}
main();
```
| documentation/embeddings/bedrock.md |
---
title: Aleph Alpha
weight: 900
aliases:
- /documentation/examples/aleph-alpha-search/
- /documentation/tutorials/aleph-alpha-search/
- /documentation/integrations/aleph-alpha/
---
# Using Aleph Alpha Embeddings with Qdrant
Aleph Alpha is a multimodal and multilingual embeddings provider. Their API allows creating embeddings for text and images, both
in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that can be
installed with pip:
```bash
pip install aleph-alpha-client
```
There is both synchronous and asynchronous client available. Obtaining the embeddings for an image and storing it into Qdrant might
be done in the following way:
```python
import qdrant_client
from qdrant_client.models import Batch
from aleph_alpha_client import (
Prompt,
AsyncClient,
SemanticEmbeddingRequest,
SemanticRepresentation,
ImagePrompt
)
aa_token = "<< your_token >>"
model = "luminous-base"
qdrant_client = qdrant_client.QdrantClient()
async with AsyncClient(token=aa_token) as client:
prompt = ImagePrompt.from_file("./path/to/the/image.jpg")
prompt = Prompt.from_image(prompt)
query_params = {
"prompt": prompt,
"representation": SemanticRepresentation.Symmetric,
"compress_to_size": 128,
}
query_request = SemanticEmbeddingRequest(**query_params)
query_response = await client.semantic_embed(
request=query_request, model=model
)
qdrant_client.upsert(
collection_name="MyCollection",
points=Batch(
ids=[1],
vectors=[query_response.embedding],
)
)
```
If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply provide the input
text into the `Prompt.from_text` method.
| documentation/embeddings/aleph-alpha.md |
---
title: Ollama
weight: 2600
---
# Using Ollama with Qdrant
Ollama provides specialized embeddings for niche applications. Ollama supports a variety of embedding models, making it possible to build retrieval augmented generation (RAG) applications that combine text prompts with existing documents or other data in specialized areas.
## Installation
You can install the required package using the following pip command:
```bash
pip install ollama
```
## Integration Example
```python
import qdrant_client
from qdrant_client.models import Batch
import ollama

# Generate embeddings with a locally served embedding model
# (assumes the Ollama server is running and the model has been pulled)
text = "Ollama excels in niche applications with specific embeddings."
embeddings = ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="NicheApplications",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/ollama.md |
---
title: OpenCLIP
weight: 2750
---
# Using OpenCLIP with Qdrant
OpenCLIP is an open-source implementation of the CLIP model, allowing for open source generation of multimodal embeddings that link text and images.
```python
import torch
import qdrant_client
from qdrant_client.models import Batch
import open_clip
# Load the OpenCLIP model and tokenizer
model, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='openai')
tokenizer = open_clip.get_tokenizer('ViT-B-32')
# Generate embeddings for a text
text = "A photo of a cat"
text_inputs = tokenizer([text])
with torch.no_grad():
text_features = model.encode_text(text_inputs)
# Convert tensor to a list
embeddings = text_features[0].cpu().numpy().tolist()
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="OpenCLIPEmbeddings",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/openclip.md |
---
title: Databricks Embeddings
weight: 1500
---
# Using Databricks Embeddings with Qdrant
Databricks offers an advanced platform for generating embeddings, especially within large-scale data environments. You can use the following Python code to integrate Databricks-generated embeddings with Qdrant.
```python
import qdrant_client
from qdrant_client.models import Batch
from databricks import sql
# Connect to Databricks SQL endpoint
connection = sql.connect(server_hostname='your_hostname',
http_path='your_http_path',
access_token='your_access_token')
# Execute a query to get embeddings
query = "SELECT embedding FROM your_table WHERE id = 1"
cursor = connection.cursor()
cursor.execute(query)
embedding = cursor.fetchone()[0]
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="DatabricksEmbeddings",
points=Batch(
ids=[1], # Unique ID for the data point
vectors=[embedding], # Embedding fetched from Databricks
)
)
```
| documentation/embeddings/databricks.md |
---
title: Cohere
weight: 1400
aliases: [ ../integrations/cohere/ ]
---
# Cohere
Qdrant is compatible with Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK that
might be installed as any other package:
```bash
pip install cohere
```
The embeddings returned by co.embed API might be used directly in the Qdrant client's calls:
```python
import cohere
import qdrant_client
from qdrant_client.models import Batch
cohere_client = cohere.Client("<< your_api_key >>")
qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
collection_name="MyCollection",
points=Batch(
ids=[1],
vectors=cohere_client.embed(
model="large",
texts=["The best vector database"],
).embeddings,
),
)
```
If you are interested in seeing an end-to-end project created with co.embed API and Qdrant, please check out the
"[Question Answering as a Service with Cohere and Qdrant](/articles/qa-with-cohere-and-qdrant/)" article.
## Embed v3
Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional
parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for.
- `input_type="search_document"` - for documents to store in Qdrant
- `input_type="search_query"` - for search queries to find the most relevant documents
- `input_type="classification"` - for classification tasks
- `input_type="clustering"` - for text clustering
While implementing semantic search applications, such as RAG, you should use `input_type="search_document"` for the
indexed documents and `input_type="search_query"` for the search queries. The following example shows how to index
documents with the Embed v3 model:
```python
import cohere
import qdrant_client
from qdrant_client.models import Batch
cohere_client = cohere.Client("<< your_api_key >>")
client = qdrant_client.QdrantClient()
client.upsert(
collection_name="MyCollection",
points=Batch(
ids=[1],
vectors=cohere_client.embed(
model="embed-english-v3.0", # New Embed v3 model
input_type="search_document", # Input type for documents
            texts=["Qdrant is a vector database written in Rust"],
).embeddings,
),
)
```
Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model:
```python
client.search(
collection_name="MyCollection",
query_vector=cohere_client.embed(
model="embed-english-v3.0", # New Embed v3 model
input_type="search_query", # Input type for search queries
texts=["The best vector database"],
).embeddings[0],
)
```
<aside role="status">
According to Cohere's documentation, all v3 models can use dot product, cosine similarity,
and Euclidean distance as the similarity metric, as all metrics return identical rankings.
</aside>
| documentation/embeddings/cohere.md |
---
title: Clip
weight: 1300
---
# Using Clip with Qdrant
CLIP (Contrastive Language-Image Pre-Training) provides advanced AI capabilities including natural language processing and computer vision. CLIP is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.
## Installation
You can install the required package using the following pip command:
```bash
pip install transformers torch pillow
```
## Integration Example
```python
import torch
import qdrant_client
from qdrant_client.models import Batch
from transformers import CLIPProcessor, CLIPModel
from PIL import Image
# Load the CLIP model and processor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
# Load and process the image
image = Image.open("path/to/image.jpg")
inputs = processor(images=image, return_tensors="pt")
# Generate embeddings
with torch.no_grad():
embeddings = model.get_image_features(**inputs).numpy().tolist()
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="ImageEmbeddings",
points=Batch(
ids=[1],
vectors=embeddings,
)
)
```
| documentation/embeddings/clip.md |
---
title: Clarifai
weight: 1200
---
# Using Clarifai Embeddings with Qdrant
Clarifai is a leading provider of visual embeddings, which are particularly strong in image and video analysis. Clarifai offers an API that allows you to create embeddings for various media types, which can be integrated into Qdrant for efficient vector search and retrieval.
You can install the Clarifai Python client with pip:
```bash
pip install clarifai
```
## Integration Example
```python
import qdrant_client
from qdrant_client.models import Batch
from clarifai.rest import ClarifaiApp
# Initialize Clarifai client
clarifai_app = ClarifaiApp(api_key="<< your_api_key >>")
# Choose the model for embeddings
model = clarifai_app.public_models.general_embedding_model
# Upload and get embeddings for an image
image_path = "./path/to/the/image.jpg"
response = model.predict_by_filename(image_path)
# Extract the embedding from the response
embedding = response['outputs'][0]['data']['embeddings'][0]['vector']
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient()
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="MyCollection",
points=Batch(
ids=[1],
vectors=[embedding],
)
)
```
| documentation/embeddings/clarifai.md |
---
title: Mistral
weight: 2100
---
| Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/mistral-getting-started/mistral-embed-getting-started/mistral_qdrant_getting_started.ipynb) |
| --- | ----------- | ----------- |
# Mistral
Qdrant is compatible with the newly released Mistral Embed model and its official Python SDK, which can be installed like any other package:
## Setup
### Install the client
```bash
pip install mistralai
```
And then we set this up:
```python
from mistralai.client import MistralClient
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, VectorParams, Distance
collection_name = "example_collection"
MISTRAL_API_KEY = "your_mistral_api_key"
client = QdrantClient(":memory:")
mistral_client = MistralClient(api_key=MISTRAL_API_KEY)
texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
Let's see how to use the Embedding Model API to embed a document for retrieval.
The following example shows how to embed a document with the `mistral-embed` model:
## Embedding a document
```python
result = mistral_client.embeddings(
model="mistral-embed",
input=texts,
)
```
The returned result contains a `data` field holding a list of embedding objects. Each object has an `embedding` key whose value is a list of floats representing the embedding of the corresponding document.
### Converting this into Qdrant Points
```python
points = [
PointStruct(
id=idx,
vector=response.embedding,
payload={"text": text},
)
for idx, (response, text) in enumerate(zip(result.data, texts))
]
```
## Create a collection and Insert the documents
```python
client.create_collection(collection_name, vectors_config=VectorParams(
size=1024,
distance=Distance.COSINE,
)
)
client.upsert(collection_name, points)
```
## Searching for documents with Qdrant
Once the documents are indexed, you can search for the most relevant documents using the same model:
```python
client.search(
collection_name=collection_name,
query_vector=mistral_client.embeddings(
model="mistral-embed", input=["What is the best to use for vector search scaling?"]
).data[0].embedding,
)
```
## Using Mistral Embedding Models with Binary Quantization
You can use Mistral Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
At an oversampling of 3 and a limit of 100, we achieve approximately 95% recall against the exact nearest neighbors with rescore enabled.
| Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
|--------------|---------|----------|----------|----------|----------|----------|--------------|
| | **Rescore** | False | True | False | True | False | True |
| **Limit** | | | | | | | |
| 10 | | 0.53444 | 0.857778 | 0.534444 | 0.918889 | 0.533333 | 0.941111 |
| 20 | | 0.508333 | 0.837778 | 0.508333 | 0.903889 | 0.508333 | 0.927778 |
| 50 | | 0.492222 | 0.834444 | 0.492222 | 0.903556 | 0.492889 | 0.940889 |
| 100 | | 0.499111 | 0.845444 | 0.498556 | 0.918333 | 0.497667 | **0.944556** |
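As a rough, self-contained illustration of where the 32x reduction comes from (using random stand-in vectors, not real Mistral embeddings): binary quantization keeps only the sign bit of each 32-bit float component, and Hamming distance over those bits approximates the original cosine ranking:

```python
import random

random.seed(0)
DIM = 1024  # dimensionality of mistral-embed vectors

def quantize(vector):
    """Keep only the sign of each component: 1 bit instead of a 32-bit float."""
    return [1 if x > 0 else 0 for x in vector]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cosine(a, b):
    d = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return d / (norm(a) * norm(b))

query = [random.gauss(0, 1) for _ in range(DIM)]
docs = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(20)]

q_bits = quantize(query)
# 1024 components * 32 bits -> 1024 bits: a 32x smaller representation
exact_rank = sorted(range(len(docs)), key=lambda i: -cosine(query, docs[i]))
binary_rank = sorted(range(len(docs)), key=lambda i: hamming(q_bits, quantize(docs[i])))

# The binary ranking only approximates the exact one, which is why Qdrant
# oversamples candidates and rescores them with the original vectors.
overlap = len(set(exact_rank[:5]) & set(binary_rank[:10])) / 5
print("recall@5 with 2x oversampling:", overlap)
```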
That's it! You can now use Mistral Embedding Models with Qdrant!
| documentation/embeddings/mistral.md |
---
title: "Nomic"
weight: 2300
---
# Nomic
The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder.
While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1),
you may find it easier to obtain the embeddings through the [Nomic Text Embeddings API](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
You can use the embeddings with the official Nomic Python client, FastEmbed, or through direct HTTP requests.
<aside role="status">Using Nomic Embeddings via the Nomic API/SDK requires configuring the <a href="https://atlas.nomic.ai/cli-login">Nomic API token</a>.</aside>
You can use Nomic embeddings directly in Qdrant client calls. There is a difference in the way the embeddings
are obtained for documents and queries.
#### Upsert using [Nomic SDK](https://github.com/nomic-ai/nomic)
The `task_type` parameter defines the embeddings that you get.
For documents, set the `task_type` to `search_document`:
```python
from qdrant_client import QdrantClient, models
from nomic import embed
output = embed.text(
texts=["Qdrant is the best vector database!"],
model="nomic-embed-text-v1",
task_type="search_document",
)
client = QdrantClient()
client.upsert(
collection_name="my-collection",
points=models.Batch(
ids=[1],
vectors=output["embeddings"],
),
)
```
#### Upsert using [FastEmbed](https://github.com/qdrant/fastembed)
```python
from fastembed import TextEmbedding
from qdrant_client import QdrantClient, models
model = TextEmbedding("nomic-ai/nomic-embed-text-v1")
output = model.embed(["Qdrant is the best vector database!"])
client = QdrantClient()
client.upsert(
collection_name="my-collection",
points=models.Batch(
ids=[1],
        vectors=[embedding.tolist() for embedding in output],
),
)
```
#### Search using [Nomic SDK](https://github.com/nomic-ai/nomic)
To query the collection, set the `task_type` to `search_query`:
```python
output = embed.text(
texts=["What is the best vector database?"],
model="nomic-embed-text-v1",
task_type="search_query",
)
client.search(
collection_name="my-collection",
query_vector=output["embeddings"][0],
)
```
#### Search using [FastEmbed](https://github.com/qdrant/fastembed)
```python
output = next(model.embed("What is the best vector database?"))
client.search(
collection_name="my-collection",
query_vector=output.tolist(),
)
```
For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
| documentation/embeddings/nomic.md |
---
title: Nvidia
weight: 2400
---
# Nvidia
Qdrant supports working with [Nvidia embeddings](https://build.nvidia.com/explore/retrieval).
You can generate an API key to authenticate the requests from the [Nvidia Playground](<https://build.nvidia.com/nvidia/embed-qa-4>).
### Setting up the Qdrant client and Nvidia session
```python
import requests
from qdrant_client import QdrantClient
NVIDIA_BASE_URL = "https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"
NVIDIA_API_KEY = "<YOUR_API_KEY>"
nvidia_session = requests.Session()
client = QdrantClient(":memory:")
headers = {
"Authorization": f"Bearer {NVIDIA_API_KEY}",
"Accept": "application/json",
}
texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
```typescript
import { QdrantClient } from '@qdrant/js-client-rest';
const NVIDIA_BASE_URL = "https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"
const NVIDIA_API_KEY = "<YOUR_API_KEY>"
const client = new QdrantClient({ url: 'http://localhost:6333' });
const headers = {
"Authorization": "Bearer " + NVIDIA_API_KEY,
"Accept": "application/json",
"Content-Type": "application/json"
}
const texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
The following example shows how to embed documents with the `NV-Embed-QA` (embed-qa-4) model, which generates sentence embeddings of size 1024.
### Embedding documents
```python
payload = {
"input": texts,
"input_type": "passage",
"model": "NV-Embed-QA",
}
response_body = nvidia_session.post(
NVIDIA_BASE_URL, headers=headers, json=payload
).json()
```
```typescript
let body = {
"input": texts,
"input_type": "passage",
"model": "NV-Embed-QA"
}
let response = await fetch(NVIDIA_BASE_URL, {
method: "POST",
body: JSON.stringify(body),
headers
});
let response_body = await response.json()
```
### Converting the model outputs to Qdrant points
```python
from qdrant_client.models import PointStruct
points = [
PointStruct(
id=idx,
vector=data["embedding"],
payload={"text": text},
)
for idx, (data, text) in enumerate(zip(response_body["data"], texts))
]
```
```typescript
let points = response_body.data.map((data, i) => {
return {
id: i,
vector: data.embedding,
payload: {
text: texts[i]
}
}
})
```
### Creating a collection to insert the documents
```python
from qdrant_client.models import VectorParams, Distance
collection_name = "example_collection"
client.create_collection(
collection_name,
vectors_config=VectorParams(
size=1024,
distance=Distance.COSINE,
),
)
client.upsert(collection_name, points)
```
```typescript
const COLLECTION_NAME = "example_collection"
await client.createCollection(COLLECTION_NAME, {
vectors: {
size: 1024,
distance: 'Cosine',
}
});
await client.upsert(COLLECTION_NAME, {
wait: true,
points
})
```
## Searching for documents with Qdrant
Once the documents are added, you can search for the most relevant documents.
```python
payload = {
"input": "What is the best to use for vector search scaling?",
"input_type": "query",
"model": "NV-Embed-QA",
}
response_body = nvidia_session.post(
NVIDIA_BASE_URL, headers=headers, json=payload
).json()
client.search(
collection_name=collection_name,
query_vector=response_body["data"][0]["embedding"],
)
```
```typescript
body = {
"input": "What is the best to use for vector search scaling?",
"input_type": "query",
"model": "NV-Embed-QA",
}
response = await fetch(NVIDIA_BASE_URL, {
method: "POST",
body: JSON.stringify(body),
headers
});
response_body = await response.json()
await client.search(COLLECTION_NAME, {
vector: response_body.data[0].embedding,
});
```
| documentation/embeddings/nvidia.md |
---
title: Prem AI
weight: 2800
---
# Prem AI
[PremAI](https://premai.io/) is a unified generative AI development platform for fine-tuning, deploying, and monitoring AI models.
Qdrant is compatible with PremAI APIs.
### Installing the SDKs
```bash
pip install premai qdrant-client
```
To install the npm package:
```bash
npm install @premai/prem-sdk @qdrant/js-client-rest
```
### Import all required packages
```python
from premai import Prem
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams
```
```typescript
import Prem from '@premai/prem-sdk';
import { QdrantClient } from '@qdrant/js-client-rest';
```
### Define all the constants
We need to define the project ID and the embedding model to use. You can learn more about obtaining these in the PremAI [docs](https://docs.premai.io/quick-start).
```python
PROJECT_ID = 123
EMBEDDING_MODEL = "text-embedding-3-large"
COLLECTION_NAME = "prem-collection-py"
QDRANT_SERVER_URL = "http://localhost:6333"
DOCUMENTS = [
"This is a sample python document",
"We will be using qdrant and premai python sdk"
]
```
```typescript
const PROJECT_ID = 123;
const EMBEDDING_MODEL = "text-embedding-3-large";
const COLLECTION_NAME = "prem-collection-js";
const SERVER_URL = "http://localhost:6333"
const DOCUMENTS = [
"This is a sample javascript document",
"We will be using qdrant and premai javascript sdk"
];
```
### Set up PremAI and Qdrant clients
```python
prem_client = Prem(api_key="xxxx-xxx-xxx")
qdrant_client = QdrantClient(url=QDRANT_SERVER_URL)
```
```typescript
const premaiClient = new Prem({
apiKey: "xxxx-xxx-xxx"
})
const qdrantClient = new QdrantClient({ url: SERVER_URL });
```
### Generating Embeddings
```python
from typing import Union, List
def get_embeddings(
project_id: int,
embedding_model: str,
documents: Union[str, List[str]]
) -> List[List[float]]:
"""
Helper function to get the embeddings from premai sdk
    Args:
project_id (int): The project id from prem saas platform.
embedding_model (str): The embedding model alias to choose
documents (Union[str, List[str]]): Single texts or list of texts to embed
Returns:
        List[List[float]]: A list of embeddings, each represented as a list of floats
"""
embeddings = []
documents = [documents] if isinstance(documents, str) else documents
for embedding in prem_client.embeddings.create(
project_id=project_id,
model=embedding_model,
input=documents
).data:
embeddings.append(embedding.embedding)
return embeddings
```
```typescript
async function getEmbeddings(projectID, embeddingModel, documents) {
const response = await premaiClient.embeddings.create({
project_id: projectID,
model: embeddingModel,
input: documents
});
return response;
}
```
### Converting Embeddings to Qdrant Points
```python
from qdrant_client.models import PointStruct
embeddings = get_embeddings(
project_id=PROJECT_ID,
embedding_model=EMBEDDING_MODEL,
documents=DOCUMENTS
)
points = [
PointStruct(
id=idx,
vector=embedding,
payload={"text": text},
) for idx, (embedding, text) in enumerate(zip(embeddings, DOCUMENTS))
]
```
```typescript
function convertToQdrantPoints(embeddings, texts) {
return embeddings.data.map((data, i) => {
return {
id: i,
vector: data.embedding,
payload: {
text: texts[i]
}
};
});
}
const embeddings = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, DOCUMENTS);
const points = convertToQdrantPoints(embeddings, DOCUMENTS);
```
### Set up a Qdrant Collection
```python
qdrant_client.create_collection(
collection_name=COLLECTION_NAME,
vectors_config=VectorParams(size=3072, distance=Distance.DOT)
)
```
```typescript
await qdrantClient.createCollection(COLLECTION_NAME, {
vectors: {
size: 3072,
        distance: 'Dot'
}
})
```
### Insert Documents into the Collection
```python
doc_ids = list(range(len(embeddings)))
qdrant_client.upsert(
collection_name=COLLECTION_NAME,
points=points
)
```
```typescript
await qdrantClient.upsert(COLLECTION_NAME, {
wait: true,
points
});
```
### Perform a Search
```python
query = "what is the extension of python document"
query_embedding = get_embeddings(
project_id=PROJECT_ID,
embedding_model=EMBEDDING_MODEL,
documents=query
)
qdrant_client.search(collection_name=COLLECTION_NAME, query_vector=query_embedding[0])
```
```typescript
const query = "what is the extension of javascript document"
const query_embedding_response = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, query)
await qdrantClient.search(COLLECTION_NAME, {
vector: query_embedding_response.data[0].embedding
});
```
| documentation/embeddings/premai.md |
---
title: GradientAI
weight: 1750
---
# Using GradientAI with Qdrant
GradientAI provides state-of-the-art models for generating embeddings, which are highly effective for vector search tasks in Qdrant.
## Installation
You can install the required packages using the following pip command:
```bash
pip install gradientai python-dotenv qdrant-client
```
## Code Example
```python
from dotenv import load_dotenv
import qdrant_client
from qdrant_client.models import Batch
from gradientai import Gradient
load_dotenv()
def main() -> None:
# Initialize GradientAI client
gradient = Gradient()
# Retrieve the embeddings model
embeddings_model = gradient.get_embeddings_model(slug="bge-large")
# Generate embeddings for your data
generate_embeddings_response = embeddings_model.generate_embeddings(
inputs=[
"Multimodal brain MRI is the preferred method to evaluate for acute ischemic infarct and ideally should be obtained within 24 hours of symptom onset, and in most centers will follow a NCCT",
"CTA has a higher sensitivity and positive predictive value than magnetic resonance angiography (MRA) for detection of intracranial stenosis and occlusion and is recommended over time-of-flight (without contrast) MRA",
"Echocardiographic strain imaging has the advantage of detecting early cardiac involvement, even before thickened walls or symptoms are apparent",
],
)
# Initialize Qdrant client
client = qdrant_client.QdrantClient(url="http://localhost:6333")
# Upsert the embeddings into Qdrant
for i, embedding in enumerate(generate_embeddings_response.embeddings):
client.upsert(
collection_name="MedicalRecords",
points=Batch(
ids=[i + 1], # Unique ID for each embedding
vectors=[embedding.embedding],
)
)
print("Embeddings successfully upserted into Qdrant.")
gradient.close()
if __name__ == "__main__":
main()
``` | documentation/embeddings/gradientai.md |
---
title: Gemini
weight: 1600
---
| Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/gemini-getting-started/gemini-getting-started/gemini-getting-started.ipynb) |
| --- | ----------- | ----------- |
# Gemini
Gemini is a new family of Google PaLM models, released in December 2023. The new embedding models succeed the previous Gecko Embedding Model. Qdrant is compatible with the Gemini Embedding Model API and its official Python SDK, which can be installed like any other package.
In the latest models, an additional parameter, `task_type`, can be passed to the API call. This parameter specifies the intended use of the embeddings.
The Embedding Model API supports various task types, outlined as follows:
1. `retrieval_query`: query in a search/retrieval setting
2. `retrieval_document`: document from the corpus being searched
3. `semantic_similarity`: semantic text similarity
4. `classification`: embeddings to be used for text classification
5. `clustering`: the generated embeddings will be used for clustering
6. `task_type_unspecified`: Unset value, which will default to one of the other values.
If you're building a semantic search application, such as RAG, you should use `task_type="retrieval_document"` for the indexed documents and `task_type="retrieval_query"` for the search queries.
The following example shows how to do this with Qdrant:
## Setup
```bash
pip install google-generativeai
```
Let's see how to use the Embedding Model API to embed a document for retrieval.
The following example shows how to embed a document with the `models/embedding-001` with the `retrieval_document` task type:
## Embedding a document
```python
import google.generativeai as gemini_client
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
collection_name = "example_collection"
GEMINI_API_KEY = "YOUR GEMINI API KEY" # add your key here
client = QdrantClient(url="http://localhost:6333")
gemini_client.configure(api_key=GEMINI_API_KEY)
texts = [
"Qdrant is a vector database that is compatible with Gemini.",
"Gemini is a new family of Google PaLM models, released in December 2023.",
]
results = [
gemini_client.embed_content(
model="models/embedding-001",
content=sentence,
task_type="retrieval_document",
title="Qdrant x Gemini",
)
for sentence in texts
]
```
## Creating Qdrant Points and Indexing documents with Qdrant
### Creating Qdrant Points
```python
points = [
PointStruct(
id=idx,
vector=response['embedding'],
payload={"text": text},
)
for idx, (response, text) in enumerate(zip(results, texts))
]
```
### Create Collection
```python
client.create_collection(collection_name, vectors_config=
VectorParams(
size=768,
distance=Distance.COSINE,
)
)
```
### Add these into the collection
```python
client.upsert(collection_name, points)
```
## Searching for documents with Qdrant
Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type:
```python
client.search(
collection_name=collection_name,
query_vector=gemini_client.embed_content(
model="models/embedding-001",
content="Is Qdrant compatible with Gemini?",
task_type="retrieval_query",
)["embedding"],
)
```
## Using Gemini Embedding Models with Binary Quantization
You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
In this table, you can see the results of the search with the `models/embedding-001` model with Binary Quantization in comparison with the original model:
At an oversampling of 3 and a limit of 100, we achieve approximately 95% recall against the exact nearest neighbors with rescore enabled.
| Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
|--------------|---------|----------|----------|----------|----------|----------|----------|
| | **Rescore** | False | True | False | True | False | True |
| **Limit** | | | | | | | |
| 10 | | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 |
| 20 | | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 |
| 50 | | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 |
| 100 | | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** |
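The oversampling and rescore columns in the table can be illustrated with a plain-Python sketch (random stand-in vectors of the same 768 dimensions, not real Gemini embeddings): the binary index first retrieves `limit * oversampling` candidates, and rescoring then re-orders those candidates with the original float vectors:

```python
import random

random.seed(42)
DIM, N_DOCS, LIMIT, OVERSAMPLING = 768, 200, 10, 3

def quantize(v):
    return [1 if x > 0 else 0 for x in v]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

query = [random.gauss(0, 1) for _ in range(DIM)]
docs = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_DOCS)]
q_bits = quantize(query)
doc_bits = [quantize(d) for d in docs]

# Ground truth: exact top-LIMIT neighbors on the original float vectors.
truth = set(sorted(range(N_DOCS), key=lambda i: -dot(query, docs[i]))[:LIMIT])

# Step 1: retrieve LIMIT * OVERSAMPLING candidates from the binary index.
by_hamming = sorted(range(N_DOCS), key=lambda i: hamming(q_bits, doc_bits[i]))
candidates = by_hamming[: LIMIT * OVERSAMPLING]

# Step 2: rescore the oversampled candidates with the float vectors.
rescored = sorted(candidates, key=lambda i: -dot(query, docs[i]))[:LIMIT]

recall_plain = len(truth & set(by_hamming[:LIMIT])) / LIMIT
recall_rescored = len(truth & set(rescored)) / LIMIT
print(recall_plain, recall_rescored)  # rescoring never hurts recall here
```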
That's it! You can now use Gemini Embedding Models with Qdrant!
| documentation/embeddings/gemini.md |
---
title: OCI (Oracle Cloud Infrastructure)
weight: 2500
---
# Using OCI (Oracle Cloud Infrastructure) with Qdrant
OCI provides robust cloud-based embeddings for various media types. The Generative AI Embedding Models convert textual input - ranging from phrases and sentences to entire paragraphs - into a structured format known as embeddings. Each piece of text input is transformed into a numerical array consisting of 1024 distinct numbers.
## Installation
You can install the required package using the following pip command:
```bash
pip install oci
```
## Code Example
Below is an example of how to obtain embeddings using OCI (Oracle Cloud Infrastructure)'s API and store them in a Qdrant collection:
```python
import qdrant_client
from qdrant_client.models import Batch
import oci
# Initialize the OCI Generative AI client.
# NOTE: the service, model, and field names below follow OCI's Generative AI
# Inference documentation; adjust the model ID and compartment OCID for your tenancy.
config = oci.config.from_file()
ai_client = oci.generative_ai_inference.GenerativeAiInferenceClient(config)
# Generate embeddings using an OCI Generative AI embedding model
text = "OCI provides cloud-based AI services."
response = ai_client.embed_text(
    oci.generative_ai_inference.models.EmbedTextDetails(
        inputs=[text],
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="cohere.embed-english-v3.0"
        ),
        compartment_id="<YOUR_COMPARTMENT_OCID>",
    )
)
embeddings = response.data.embeddings[0]
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embedding into Qdrant
qdrant_client.upsert(
collection_name="CloudAI",
points=Batch(
ids=[1],
vectors=[embeddings],
)
)
```
| documentation/embeddings/oci.md |
---
title: Jina Embeddings
weight: 1900
aliases:
- /documentation/embeddings/jina-emebddngs/
- ../integrations/jina-embeddings/
---
# Jina Embeddings
Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/) which allow for model input lengths of up to 8192 tokens.
To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a code (**QDRANT**) that will grant you a **10% discount** if you plan to use Jina Embeddings in production.
```python
import qdrant_client
import requests
from qdrant_client.models import Distance, VectorParams, Batch
# Provide Jina API key and choose one of the available models.
# You can get a free trial key here: https://jina.ai/embeddings/
JINA_API_KEY = "jina_xxxxxxxxxxx"
MODEL = "jina-embeddings-v2-base-en"  # or "jina-embeddings-v2-small-en"
EMBEDDING_SIZE = 768 # 512 for small variant
# Get embeddings from the API
url = "https://api.jina.ai/v1/embeddings"
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {JINA_API_KEY}",
}
data = {
"input": ["Your text string goes here", "You can send multiple texts"],
"model": MODEL,
}
response = requests.post(url, headers=headers, json=data)
embeddings = [d["embedding"] for d in response.json()["data"]]
# Index the embeddings into Qdrant
client = qdrant_client.QdrantClient(":memory:")
client.create_collection(
collection_name="MyCollection",
vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT),
)
client.upsert(
collection_name="MyCollection",
points=Batch(
ids=list(range(len(embeddings))),
vectors=embeddings,
),
)
```
| documentation/embeddings/jina-embeddings.md |
---
title: Upstage
weight: 3100
---
# Upstage
Qdrant supports working with the Solar Embeddings API from [Upstage](https://upstage.ai/).
[Solar Embeddings](https://developers.upstage.ai/docs/apis/embeddings) API features dual models for user queries and document embedding, within a unified vector space, designed for performant text processing.
You can generate an API key to authenticate the requests from the [Upstage Console](<https://console.upstage.ai/api-keys>).
### Setting up the Qdrant client and Upstage session
```python
import requests
from qdrant_client import QdrantClient
UPSTAGE_BASE_URL = "https://api.upstage.ai/v1/solar/embeddings"
UPSTAGE_API_KEY = "<YOUR_API_KEY>"
upstage_session = requests.Session()
client = QdrantClient(url="http://localhost:6333")
headers = {
"Authorization": f"Bearer {UPSTAGE_API_KEY}",
"Accept": "application/json",
}
texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
```typescript
import { QdrantClient } from '@qdrant/js-client-rest';
const UPSTAGE_BASE_URL = "https://api.upstage.ai/v1/solar/embeddings"
const UPSTAGE_API_KEY = "<YOUR_API_KEY>"
const client = new QdrantClient({ url: 'http://localhost:6333' });
const headers = {
"Authorization": "Bearer " + UPSTAGE_API_KEY,
"Accept": "application/json",
"Content-Type": "application/json"
}
const texts = [
"Qdrant is the best vector search engine!",
"Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```
The following example shows how to embed documents with the recommended `solar-embedding-1-large-passage` and `solar-embedding-1-large-query` models, which generate sentence embeddings of size 4096.
### Embedding documents
```python
body = {
"input": texts,
"model": "solar-embedding-1-large-passage",
}
response_body = upstage_session.post(
UPSTAGE_BASE_URL, headers=headers, json=body
).json()
```
```typescript
let body = {
"input": texts,
"model": "solar-embedding-1-large-passage",
}
let response = await fetch(UPSTAGE_BASE_URL, {
method: "POST",
body: JSON.stringify(body),
headers
});
let response_body = await response.json()
```
### Converting the model outputs to Qdrant points
```python
from qdrant_client.models import PointStruct
points = [
PointStruct(
id=idx,
vector=data["embedding"],
payload={"text": text},
)
for idx, (data, text) in enumerate(zip(response_body["data"], texts))
]
```
```typescript
let points = response_body.data.map((data, i) => {
return {
id: i,
vector: data.embedding,
payload: {
text: texts[i]
}
}
})
```
### Creating a collection to insert the documents
```python
from qdrant_client.models import VectorParams, Distance
collection_name = "example_collection"
client.create_collection(
collection_name,
vectors_config=VectorParams(
size=4096,
distance=Distance.COSINE,
),
)
client.upsert(collection_name, points)
```
```typescript
const COLLECTION_NAME = "example_collection"
await client.createCollection(COLLECTION_NAME, {
vectors: {
size: 4096,
distance: 'Cosine',
}
});
await client.upsert(COLLECTION_NAME, {
wait: true,
points
})
```
## Searching for documents with Qdrant
Once all the documents are added, you can search for the most relevant documents.
```python
body = {
"input": "What is the best to use for vector search scaling?",
"model": "solar-embedding-1-large-query",
}
response_body = upstage_session.post(
UPSTAGE_BASE_URL, headers=headers, json=body
).json()
client.search(
collection_name=collection_name,
query_vector=response_body["data"][0]["embedding"],
)
```
```typescript
body = {
"input": "What is the best to use for vector search scaling?",
"model": "solar-embedding-1-large-query",
}
response = await fetch(UPSTAGE_BASE_URL, {
method: "POST",
body: JSON.stringify(body),
headers
});
response_body = await response.json()
await client.search(COLLECTION_NAME, {
vector: response_body.data[0].embedding,
});
```
| documentation/embeddings/upstage.md |
---
title: John Snow Labs
weight: 2000
---
# Using John Snow Labs with Qdrant
John Snow Labs offers a variety of models, particularly in the healthcare domain. They have pre-trained models that can generate embeddings for medical text data.
## Installation
You can install the required package using the following pip command:
```bash
pip install johnsnowlabs
```
Here is an example of how you might obtain embeddings using John Snow Labs's API and store them in a Qdrant collection:
```python
import qdrant_client
from qdrant_client.models import Batch
from johnsnowlabs import nlp
# Load the pre-trained model, for example, a named entity recognition (NER) model
model = nlp.load_model("ner_jsl")
# Sample text to generate embeddings
text = "John Snow Labs provides state-of-the-art healthcare NLP solutions."
# Generate embeddings for the text
document = nlp.DocumentAssembler().setInput(text)
embeddings = model.transform(document).collectEmbeddings()
# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)
# Upsert the embeddings into Qdrant
qdrant_client.upsert(
collection_name="HealthcareNLP",
points=Batch(
ids=[1], # This would be your unique ID for the data point
vectors=[embeddings],
)
)
```
| documentation/embeddings/johnsnow.md |
---
title: Embeddings
weight: 15
---
# Supported Embedding Providers & Models
Qdrant supports all available text and multimodal dense vector embedding models as well as vector embedding services without any limitations.
## Some of the Embeddings you can use with Qdrant:
SentenceTransformers, BERT, SBERT, CLIP, OpenCLIP, OpenAI, Vertex AI, Azure AI, AWS Bedrock, Jina AI, Upstage AI, Mistral AI, Cohere AI, Voyage AI, Aleph Alpha, Baidu Qianfan, BGE, Instruct, Watsonx Embeddings, Snowflake Embeddings, NVIDIA NeMo, Nomic, OCI Embeddings, Ollama Embeddings, MixedBread, Together AI, Clarifai, Databricks Embeddings, GPT4All Embeddings, John Snow Labs Embeddings.
Additionally, [any open-source embeddings from HuggingFace](https://huggingface.co/spaces/mteb/leaderboard) can be used with Qdrant.
## Code samples:
| Embeddings Providers | Description |
| ----------------------------- | ----------- |
| [Aleph Alpha](./aleph-alpha/) | Multilingual embeddings focused on European languages. |
| [Azure](./azure/) | Microsoft's embedding model selection. |
| [Bedrock](./bedrock/) | AWS managed service for foundation models and embeddings. |
| [Clarifai](./clarifai/) | Embeddings for image and video recognition. |
| [Clip](./clip/) | Aligns images and text, created by OpenAI. |
| [Cohere](./cohere/) | Language model embeddings for NLP tasks. |
| [Databricks](./databricks/) | Scalable embeddings integrated with Apache Spark. |
| [Gemini](./gemini/) | Google’s multimodal embeddings for text and vision. |
| [GPT4All](./gpt4all/) | Open-source, local embeddings for privacy-focused use. |
| [GradientAI](./gradient/) | AI Models for custom enterprise tasks.|
| [Instruct](./instruct/) | Embeddings tuned for following instructions. |
| [Jina AI](./jina-embeddings/) | Customizable embeddings for neural search. |
| [John Snow Labs](./johnsnow/) | Medical and clinical embeddings. |
| [Mistral](./mistral/) | Open-source, efficient language model embeddings. |
| [MixedBread](./mixedbread/) | Lightweight embeddings for constrained environments. |
| [Nomic](./nomic/) | Open-source text embeddings, also used in Atlas for data visualization. |
| [Nvidia](./nvidia/) | GPU-optimized embeddings from Nvidia. |
| [OCI](./oci/) | Oracle Cloud’s AI service with embeddings. |
| [Ollama](./ollama/) | Embeddings from open models run locally via Ollama. |
| [OpenAI](./openai/) | Industry-leading embeddings for NLP. |
| [OpenCLIP](./openclip/) | Open-source implementation of CLIP for image and text. |
| [Prem AI](./premai/) | Precise language embeddings. |
| [Snowflake](./snowflake/) | Scalable embeddings for big data. |
| [Together AI](./togetherai/) | Community-driven, open-source embeddings. |
| [Upstage](./upstage/) | Embeddings for speech and language tasks. |
| [Voyage AI](./voyage/) | Retrieval-optimized text embeddings for search and RAG. |
| [Watsonx](./watsonx/) | IBM's enterprise-grade embeddings. |
| documentation/embeddings/_index.md |
---
title: MixedBread
weight: 2200
---
# Using MixedBread with Qdrant
MixedBread is a provider of embedding models that pair well with Qdrant across a range of search tasks. They build state-of-the-art models and tools that make search smarter, faster, and more relevant. Whether you are building a next-gen search engine, a RAG (Retrieval-Augmented Generation) system, or enhancing an existing search solution, their models are designed to fit in.
## Installation
You can install the required package using the following pip command:
```bash
pip install mixedbread
```
## Integration Example
Below is an example of how to obtain embeddings using MixedBread's API and store them in a Qdrant collection:
```python
from qdrant_client import QdrantClient
from qdrant_client.models import Batch

# NOTE: `MixedBreadModel` and the model name below are illustrative --
# consult the MixedBread SDK documentation for the exact client API.
from mixedbread import MixedBreadModel

# Initialize the MixedBread model
model = MixedBreadModel("mixedbread-variant")

# Generate an embedding for a single text
text = "MixedBread provides versatile embeddings for various domains."
embeddings = model.embed(text)

# Initialize the Qdrant client (assumes a server on localhost:6333
# and an existing "VersatileEmbeddings" collection)
client = QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="VersatileEmbeddings",
    points=Batch(
        ids=[1],
        vectors=[embeddings],
    ),
)
```
| documentation/embeddings/mixedbread.md |
---
title: Azure OpenAI
weight: 950
---
# Using Azure OpenAI with Qdrant
Azure OpenAI is Microsoft's managed service for OpenAI models, including text embedding models. These embeddings are suitable for high-precision vector search in Qdrant.
## Installation
You can install the required packages using the following pip command:
```bash
pip install openai azure-identity python-dotenv qdrant-client
```
## Code Example
```python
import os
import openai
import dotenv
import qdrant_client
from qdrant_client.models import Batch
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
dotenv.load_dotenv()
# Set to True if using Azure Active Directory for authentication
use_azure_active_directory = False
# Qdrant client setup
qdrant_client = qdrant_client.QdrantClient(url="http://localhost:6333")
# Azure OpenAI Authentication
if not use_azure_active_directory:
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
api_key = os.environ["AZURE_OPENAI_API_KEY"]
client = openai.AzureOpenAI(
azure_endpoint=endpoint,
api_key=api_key,
api_version="2023-09-01-preview"
)
else:
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
client = openai.AzureOpenAI(
azure_endpoint=endpoint,
azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"),
api_version="2023-09-01-preview"
)
# Deployment name of the model in Azure OpenAI Studio
deployment = "your-deployment-name" # Replace with your deployment name
# Generate embeddings using the Azure OpenAI client
text_input = "The food was delicious and the waiter..."
embeddings_response = client.embeddings.create(
model=deployment,
input=text_input
)
# Extract the embedding vector from the response
embedding_vector = embeddings_response.data[0].embedding
# Insert the embedding into Qdrant (the "MyCollection" collection must already exist)
qdrant_client.upsert(
collection_name="MyCollection",
points=Batch(
ids=[1], # This ID can be dynamically assigned or managed
vectors=[embedding_vector],
)
)
print("Embedding successfully upserted into Qdrant.")
``` | documentation/embeddings/azure.md |