---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- nl
---
## Information
Pretrained Word2vec embeddings in Dutch (300 dimensions), trained with Wikipedia2Vec on the Dutch Wikipedia dump of 2018-04-20. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# Download the vectors from the Hugging Face Hub and load them with gensim
model = KeyedVectors.load_word2vec_format(
    hf_hub_download(
        repo_id="Word2vec/wikipedia2vec_nlwiki_20180420_300d",
        filename="nlwiki_20180420_300d.txt",
    )
)

# Query the nearest neighbours of a word (replace "your_word" with a Dutch word in the vocabulary)
model.most_similar("your_word")
```
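
Continuing from the `model` loaded above, the sketch below shows a typical query; the Dutch word "koning" and the `topn` value are illustrative and assume the token is in the vocabulary:

```python
# Illustrative query; "koning" ("king") is assumed to be in the vocabulary
vector = model["koning"]      # 300-dimensional numpy array for the word
print(vector.shape)           # (300,)

# Nearest neighbours ranked by cosine similarity
for word, score in model.most_similar("koning", topn=5):
    print(f"{word}\t{score:.3f}")
```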
## Citation
```bibtex
@inproceedings{yamada2020wikipedia2vec,
  title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
  author = {Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  year = {2020},
  publisher = {Association for Computational Linguistics},
  pages = {23--30}
}
```