holylovenia committed on
Commit 19adf7c
1 Parent(s): 99f66e2

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
- word-list
---
 
WEATHub is a dataset covering 24 languages. It contains words organized into groups of (target1, target2, attribute1, attribute2) to measure the association target1:target2 :: attribute1:attribute2. For example, target1 could be insects and target2 flowers, and the test measures whether insects or flowers are associated with pleasant or unpleasant attribute words. Word associations are quantified with the WEAT metric described in the accompanying paper, which reports an effect size (Cohen's d) together with a p-value (to assess the statistical significance of the result). In the paper, the authors run these tests on word embeddings from language models to study biased associations in language models across different languages.
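As a rough illustration of what the metric computes, the effect size can be sketched as below (a minimal sketch, not the authors' code; the helper functions are placeholders, and it assumes you already have an embedding vector for every word in the four lists):

```
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): how much more strongly w associates with attribute set A than with B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # X, Y: target word vectors (e.g. insects, flowers)
    # A, B: attribute word vectors (e.g. pleasant, unpleasant)
    # Returns Cohen's d for the association test
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```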
## Languages
@@ -24,25 +24,25 @@ tha, tgl, vie, cmn, eng
## Supported Tasks

Word List

## Dataset Usage
### Using `datasets` library
```
from datasets import load_dataset

dset = load_dataset("SEACrowd/weathub", trust_remote_code=True)
```
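To check which subsets (config names) the Hub dataset script exposes before loading one, the `datasets` library's `get_dataset_config_names` can be used; a minimal sketch (whether `trust_remote_code` is needed here depends on your `datasets` version):

```
from datasets import get_dataset_config_names

# List all configs defined by the SEACrowd/weathub dataset script
print(get_dataset_config_names("SEACrowd/weathub", trust_remote_code=True))
```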
### Using `seacrowd` library
```
import seacrowd as sc
# Load the dataset using the default config
dset = sc.load_dataset("weathub", schema="seacrowd")
# Check all available subsets (config names) of the dataset
print(sc.available_config_names("weathub"))
# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage