---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- ro
---

## Sentiment Analysis Data for the Romanian Language

**Dataset Description:**
This is LaRoSeDa, a large-scale sentiment analysis dataset for Romanian introduced by Tache et al. (2021). It consists of 15,000 positive and negative reviews collected from the largest Romanian e-commerce platform.

**Use:**
The data were used in the project on [improving word embeddings with graph knowledge for low-resource languages](https://github.com/pyRis/retrofitting-embeddings-lrls?tab=readme-ov-file).
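For illustration, here is a minimal sketch of how a binary sentiment record can be represented and mapped to integer labels for text classification. The field names (`text`, `label`) and the label mapping are assumptions for the sketch, not confirmed by this card; inspect the actual data files for the real schema.

```python
# Hypothetical schema sketch for a binary sentiment dataset.
# The "text"/"label" field names and the label mapping are assumptions;
# check the actual dataset files to confirm the real schema.

LABEL2ID = {"negative": 0, "positive": 1}

def encode_example(text: str, sentiment: str) -> dict:
    """Turn a raw review and its sentiment string into a model-ready record."""
    return {"text": text, "label": LABEL2ID[sentiment]}

# Example Romanian review (translation: "A very good product, I recommend it.")
record = encode_example("Un produs foarte bun, il recomand.", "positive")
print(record)  # {'text': 'Un produs foarte bun, il recomand.', 'label': 1}
```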

**Citation:**
```bibtex
@inproceedings{tache-etal-2021-clustering,
    title = "Clustering Word Embeddings with Self-Organizing Maps. Application on {L}a{R}o{S}e{D}a - A Large {R}omanian Sentiment Data Set",
    author = "Tache, Anca  and
      Gaman, Mihaela  and
      Ionescu, Radu Tudor",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.eacl-main.81",
    pages = "949--956",
    abstract = "Romanian is one of the understudied languages in computational linguistics, with few resources available for the development of natural language processing tools. In this paper, we introduce LaRoSeDa, a Large Romanian Sentiment Data Set, which is composed of 15,000 positive and negative reviews collected from the largest Romanian e-commerce platform. We employ two sentiment classification methods as baselines for our new data set, one based on low-level features (character n-grams) and one based on high-level features (bag-of-word-embeddings generated by clustering word embeddings with k-means). As an additional contribution, we replace the k-means clustering algorithm with self-organizing maps (SOMs), obtaining better results because the generated clusters of word embeddings are closer to the Zipf{'}s law distribution, which is known to govern natural language. We also demonstrate the generalization capacity of using SOMs for the clustering of word embeddings on another recently-introduced Romanian data set, for text categorization by topic.",
}
```