---
inference: false
language: sv
license:
- cc-by-sa-3.0
- gfdl
library_name: txtai
tags:
- sentence-similarity
datasets:
- burgerbee/wikipedia-sv-20240220
---
# Wikipedia txtai embeddings index
This is a txtai embeddings index for the Swedish edition of Wikipedia.

The index is built from the February 2024 Wikipedia dataset. Only the first two paragraphs of each article are included.

It also uses Wikipedia page views data to add a percentile field, which can be used to match only commonly visited pages.
txtai must be installed to use this model.
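It can be installed from PyPI:

```shell
pip install txtai
```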
## Example
```python
from txtai.embeddings import Embeddings
import json

# Load the index from the HF Hub
embeddings = Embeddings()
embeddings.load(provider="huggingface-hub", container="burgerbee/txtai-sv-wikipedia")

# Run a search
for x in embeddings.search("I vilken stad ligger Liseberg?", 1):
    print(json.dumps(x, indent=2))

# Run a search and filter on popular results (page views)
for x in embeddings.search("SELECT id, text, score, percentile FROM txtai WHERE similar('I vilken stad ligger Liseberg?') AND percentile >= 0.99", 1):
    print(json.dumps(x, indent=2))
```
## Use Cases
An embeddings index generated by txtai is a fully encapsulated index format. It doesn't require a database server or dependencies outside of the Python install.
The Wikipedia index works well as a fact-based context source for retrieval augmented generation (RAG). In other words, search results from this model can be passed to LLM prompts as the context in which to answer questions.
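As a sketch of that pattern, the snippet below formats search results into a prompt context. The `results` list is a hypothetical stand-in for the dicts returned by `embeddings.search()`, and the prompt wording is an illustrative assumption, not part of this model.

```python
# Sketch: turn txtai search results into a RAG prompt context.
# The `results` list is a hypothetical stand-in for the dicts
# returned by embeddings.search().
def build_prompt(question, results):
    # Concatenate the retrieved passages into one context block
    context = "\n".join(r["text"] for r in results)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

results = [
    {"id": "Liseberg", "text": "Liseberg is an amusement park in Gothenburg, Sweden.", "score": 0.92},
]
print(build_prompt("I vilken stad ligger Liseberg?", results))
```

The prompt string would then be passed to an LLM of your choice in place of a plain question.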
See this article for additional examples on how to use this model.
## Source
- https://dumps.wikimedia.org/svwiki/20240220/dumpstatus.json
- https://dumps.wikimedia.org/other/pageview_complete/monthly/2024/2024-02/pageviews-202402-user.bz2