  - split: train
    path: data/train-*
---

# Dataset Card for 20-Newsgroups Embedded

This dataset provides a subset of 20-Newsgroups posts, along with sentence embeddings and a dimension-reduced 2D data map. It offers a basic setup for experimentation with various neural topic modelling approaches.

## Dataset Details

### Dataset Description

This dataset contains posts from the classic 20-Newsgroups dataset, along with sentence embeddings and a dimension-reduced 2D data map.

Per the [source](http://qwone.com/~jason/20Newsgroups/):
> The 20-Newsgroups dataset is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.
> As far as is known, it was originally collected by Ken Lang, probably for his Newsweeder: Learning to filter netnews paper.
> The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques,
> such as text classification and text clustering.

The posts have then been enriched with sentence embeddings computed via sentence-transformers using the `all-mpnet-base-v2` model. Further enrichment is provided in the form of a 2D representation of the sentence embeddings, generated using UMAP. A sketch of this enrichment pipeline appears below, after the collection code.

- **Curated by:** Leland McInnes
- **Language(s) (NLP):** English
- **License:** Public Domain

### Dataset Sources

The post and newsgroup data were collected using the `scikit-learn` function `fetch_20newsgroups` and then filtered to exclude very short and excessively long posts, in the following manner:

```python
import numpy as np
import sklearn.datasets

# Fetch all splits, stripping headers, footers, and quoted reply text.
newsgroups = sklearn.datasets.fetch_20newsgroups(
    subset="all", remove=("headers", "footers", "quotes")
)

# Keep only posts that are neither trivially short nor excessively long.
useable_content = np.asarray([8 < len(x) < 16384 for x in newsgroups.data])
documents = [
    doc
    for doc, worth_keeping in zip(newsgroups.data, useable_content)
    if worth_keeping
]
newsgroup_categories = [
    newsgroups.target_names[newsgroup_id]
    for newsgroup_id, worth_keeping in zip(newsgroups.target, useable_content)
    if worth_keeping
]
```
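
Given the filtered `documents`, the enrichment described above might look roughly like the following sketch. The model name matches this card, but the UMAP parameters (`n_neighbors`, `min_dist`, `metric`) are illustrative assumptions rather than the recorded settings.

```python
from sentence_transformers import SentenceTransformer
import umap

# Embed each post with the sentence-transformers model named in this card.
model = SentenceTransformer("all-mpnet-base-v2")
embeddings = model.encode(documents, show_progress_bar=True)

# Reduce the embeddings to a 2D data map. These UMAP parameters are
# assumptions for illustration, not necessarily those used for this dataset.
data_map = umap.UMAP(n_neighbors=15, min_dist=0.1, metric="cosine").fit_transform(embeddings)
```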

- **Repository:** The original source datasets can be found at [http://qwone.com/~jason/20Newsgroups/](http://qwone.com/~jason/20Newsgroups/)

## Uses

This dataset is intended to be used for simple experiments and demonstrations of topic modelling and related tasks.

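As one example, the following sketch loads the dataset and clusters the 2D data map to form candidate topics. The dataset id, the `data_map` column name, and the clustering parameters are assumptions for illustration; check the repository's actual schema before relying on them.

```python
import numpy as np
from datasets import load_dataset
from sklearn.cluster import HDBSCAN  # requires scikit-learn >= 1.3

# NOTE: the dataset id and the "data_map" column name are assumptions.
dataset = load_dataset("lmcinnes/20_newsgroups_embedded", split="train")
map_coords = np.asarray(dataset["data_map"])

# Cluster the 2D map; each cluster is a candidate topic, and -1 marks noise.
labels = HDBSCAN(min_cluster_size=50).fit_predict(map_coords)
```
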
### Personal and Sensitive Information

This data may contain personal information that was posted publicly to NNTP servers in the mid-1990s. It is not believed to contain any sensitive information.

## Bias, Risks, and Limitations

This dataset is a product of public discussion forums in the 1990s. As such, it contains debate, potentially inflammatory and/or derogatory language, and similar content. It does not provide a representative sampling of opinion from the era, and it should only be used for experimentation, demonstration, and educational purposes.