
The following dataset was vectorized with the intfloat/multilingual-e5-base model, and a faiss index file was built from the resulting embeddings.

oshizo/japanese-wikipedia-paragraphs
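
The build script is not part of this repository, but the index file name encodes the faiss factory string (IVF2048,PQ192), so the index was presumably constructed roughly as follows. This is a minimal sketch, not the actual build code: the "text" column name and the inner-product metric over normalized embeddings are assumptions; the "passage: " prefix is the standard e5 convention for document embeddings.

import faiss
import datasets
from sentence_transformers import SentenceTransformer

ds = datasets.load_dataset("oshizo/japanese-wikipedia-paragraphs", split="train")
model = SentenceTransformer("intfloat/multilingual-e5-base")

# e5 models expect a "passage: " prefix on documents ("text" column name assumed).
embs = model.encode(
    ["passage: " + t for t in ds["text"]],
    normalize_embeddings=True,  # assumption: inner-product (cosine) search
    show_progress_bar=True,
)

d = embs.shape[1]  # 768 for multilingual-e5-base
# "IVF2048,PQ192": 2048 inverted lists, product quantization with 192 sub-vectors.
index = faiss.index_factory(d, "IVF2048,PQ192", faiss.METRIC_INNER_PRODUCT)
index.train(embs)
index.add(embs)
faiss.write_index(index, "index_me5-base_IVF2048_PQ192.faiss")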

Usage

First, download index_me5-base_IVF2048_PQ192.faiss from this repository.
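
For example, with huggingface_hub (a sketch; downloading the file manually from the repository's Files tab works just as well):

from huggingface_hub import hf_hub_download

index_path = hf_hub_download(
    repo_id="oshizo/japanese-wikipedia-paragraphs-embeddings",
    filename="index_me5-base_IVF2048_PQ192.faiss",
    repo_type="dataset",
)
# Pass index_path to faiss.read_index below, or copy the file next to your script.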

import faiss
import datasets
from sentence_transformers import SentenceTransformer

# Load the paragraphs the index was built over.
ds = datasets.load_dataset("oshizo/japanese-wikipedia-paragraphs", split="train")

# Load the prebuilt faiss index downloaded from this repository.
index = faiss.read_index("./index_me5-base_IVF2048_PQ192.faiss")

model = SentenceTransformer("intfloat/multilingual-e5-base")

# e5 models expect a "query: " prefix on search queries.
question = "日本で二番目に高い山は?"  # "What is the second highest mountain in Japan?"
emb = model.encode(["query: " + question])

# Retrieve the 10 nearest paragraphs; search returns one row per query.
scores, indexes = index.search(emb, 10)
scores = scores[0]
indexes = indexes[0]

results = []
for idx, score in zip(indexes, scores):
    idx = int(idx)  # faiss returns int64 ids; datasets expects a plain int
    passage = ds[idx]
    passage["score"] = float(score)
    results.append(passage)
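
To inspect the hits, print each result. The title and text field names below are assumptions about the oshizo/japanese-wikipedia-paragraphs schema; check ds.features for the actual column names. If recall looks low, note that IVF indexes probe only one inverted list per query by default; raising index.nprobe before calling search trades speed for recall.

for r in results:
    # Truncate the paragraph body for display; field names are assumed.
    print(f'{r["score"]:.3f}', r["title"], r["text"][:80])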