
This is a FAISS vector database built from a Norwegian Wikipedia dump from 2023-09, embedded with NbAiLab/nb-sbert-base.

It can be used to augment a chatbot with retrieval-augmented generation (RAG) in Norwegian Bokmål.

Only the article abstracts are processed, but they seemed detailed enough for retrieval. The 'url' field in each document's metadata points to the original article. Each abstract is embedded as a 768-dimensional vector with the model NbAiLab/nb-sbert-base.
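
As a quick sanity check, the embedding dimension can be inspected directly. A minimal sketch (the test sentence is arbitrary; the model is downloaded on first use):

from langchain_community.embeddings import HuggingFaceEmbeddings

embedder = HuggingFaceEmbeddings(model_name='NbAiLab/nb-sbert-base')

# embed_query returns a plain list of floats
vec = embedder.embed_query('Hei, verden!')
print(len(vec))  # 768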

Example usage (Python):


from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
import time

# Load the same sentence embedder that was used to build the index
embedder = HuggingFaceEmbeddings(model_name='NbAiLab/nb-sbert-base')

# Example questions: 'Who is Beyonce?', 'What happened in 2012?',
# 'Which music festivals can you recommend?'
qs = [
  'Kven er Beyonce?',
  'Hva skjedde i 2012?',
  'Hvilke musikkfestivalar kan du anbefale?',
]

# Load the local FAISS index; allow_dangerous_deserialization is required
# because the index metadata is stored with pickle
db = FAISS.load_local('nowiki_faiss_sbert_all', embedder, allow_dangerous_deserialization=True)

starttime = time.time()

for q in qs:
  print('----\n', q)
  r = db.similarity_search_with_score(q)
  print(r)

print('questions took', time.time() - starttime, 's.')
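
The metadata described above can be read from the returned documents, and the index can be wrapped as a LangChain retriever for RAG. A minimal sketch reusing the db object from the example; the 'url' key comes from the dataset description, while the query and the k value are arbitrary:

# Each result is a (Document, score) tuple; the 'url' metadata
# points back to the original Wikipedia article
for doc, score in db.similarity_search_with_score('Hvem er Beyonce?', k=3):  # 'Who is Beyonce?'
  print(score, doc.metadata.get('url'))

# For RAG, expose the index as a retriever and pass the returned
# documents to a chat model as context
retriever = db.as_retriever(search_kwargs={'k': 3})
docs = retriever.invoke('Hvem er Beyonce?')
context = '\n\n'.join(d.page_content for d in docs)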

More info about the Wikipedia source:

https://dumps.wikimedia.org/

https://dumps.wikimedia.org/other/enterprise_html/

License and guidelines:

https://dumps.wikimedia.org/legal.html

https://foundation.wikimedia.org/wiki/Legal:Developer_app_guidelines

Embedder model:

https://huggingface.co/NbAiLab/nb-sbert-base

FAISS vectordb:

https://python.langchain.com/docs/integrations/vectorstores/faiss


license: other
license_name: wikimedia
license_link: https://dumps.wikimedia.org/legal.html
