imvladikon committed
Commit 046bfaf · 1 Parent(s): 3ed6f6b

Update README.md

Files changed (1)
  1. README.md +3 -9
README.md CHANGED
@@ -121,16 +121,10 @@ size_categories:
 ---
 # Large Weak Labelled NER corpus
 
-## Dataset Description
-
-- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 657.1 MB
-- **Size of the generated dataset:** 4 GB
-- **Total amount of disk used:** 4 GB
 
 ### Dataset Summary
 
-The dataset is generated through weak labelling of the scraped news corpus (bloomberg's news). so, only to research purpose.
+The dataset is generated through weak labelling of the scraped and preprocessed news corpus (bloomberg's news). so, only to research purpose.
 In order of the tokenization, news were splitted into sentences using `nltk.PunktSentenceTokenizer` (so, sometimes, tokenization might be not perfect)
 
 ### Usage
@@ -138,8 +132,8 @@ In order of the tokenization, news were splitted into sentences using `nltk.Punk
 ```python
 from datasets import load_dataset
 
-articles_ds = load_dataset("imvladikon/bloomberg_news_weak_ner", "articles") # just articles with metadata
-entities_ds = load_dataset("imvladikon/bloomberg_news_weak_ner", "entities")
+articles_ds = load_dataset("imvladikon/english_news_weak_ner", "articles") # just articles with metadata
+entities_ds = load_dataset("imvladikon/english_news_weak_ner", "entities")
 ```
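For context on the Dataset Summary text above, here is a minimal sketch of the kind of sentence splitting it describes, using NLTK's `PunktSentenceTokenizer`. The sample text is invented for illustration; the card does not publish the preprocessing script, so this is an assumption about the general approach, not the authors' exact pipeline.

```python
# Minimal sketch (assumption, not the dataset authors' exact pipeline):
# split raw article text into sentences with NLTK's Punkt tokenizer,
# as mentioned in the Dataset Summary above.
from nltk.tokenize import PunktSentenceTokenizer

# Hypothetical article text; the real corpus consists of scraped news articles.
article_text = (
    "Shares rose sharply on Monday. Analysts at the bank expect further gains. "
    "The company declined to comment."
)

tokenizer = PunktSentenceTokenizer()
sentences = tokenizer.tokenize(article_text)
for sentence in sentences:
    print(sentence)
```

As the card itself notes, Punkt splitting is heuristic, so abbreviation-heavy text may occasionally be split imperfectly.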
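And a hedged follow-up to the usage snippet in the diff: only the repository id (`imvladikon/english_news_weak_ner`) and the two config names (`articles`, `entities`) come from the card, so the sketch inspects splits and columns at run time rather than assuming a schema.

```python
# Load both configurations named in the card and inspect what they contain.
from datasets import load_dataset

articles_ds = load_dataset("imvladikon/english_news_weak_ner", "articles")  # articles with metadata
entities_ds = load_dataset("imvladikon/english_news_weak_ner", "entities")  # weakly labelled entities

# Print available splits, row counts, and column names instead of assuming them.
for name, ds in (("articles", articles_ds), ("entities", entities_ds)):
    for split_name, split in ds.items():
        print(name, split_name, split.num_rows, split.column_names)
```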