---
dataset_info:
  features:
    - name: post
      dtype: string
    - name: newsgroup
      dtype: string
    - name: embedding
      sequence: float64
    - name: map
      sequence: float64
  splits:
    - name: train
      num_bytes: 129296327
      num_examples: 18170
  download_size: 102808058
  dataset_size: 129296327
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for 20-Newsgroups Embedded

This dataset provides a subset of 20-Newsgroups posts, along with sentence embeddings and a dimension-reduced 2D data map. It offers a basic setup for experimenting with various neural topic modelling approaches.

## Dataset Details

### Dataset Description

This is a dataset containing posts from the classic 20-Newsgroups dataset, along with sentence embeddings and a dimension-reduced 2D data map.

Per the source:

> The 20-Newsgroups dataset is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. As far as is known it was originally collected by Ken Lang, probably for his *Newsweeder: Learning to filter netnews* paper. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.

This has then been enriched with sentence embeddings via `sentence-transformers` using the `all-mpnet-base-v2` model. Further enrichment is provided in the form of a 2D representation of the sentence embeddings generated using UMAP.
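
In outline, the enrichment can be reproduced as follows. The model name and the use of UMAP come from the description above; the UMAP parameters shown are illustrative assumptions rather than the exact settings used (`documents` is the filtered post list built in the Dataset Sources section below).

```python
from sentence_transformers import SentenceTransformer
import umap

# Model per the description above; other settings are assumptions.
model = SentenceTransformer("all-mpnet-base-v2")
embeddings = model.encode(documents, show_progress_bar=True)

# Reduce the 768-dimensional embeddings to a 2D data map.
# metric="cosine" is a common choice for sentence embeddings, but it is
# an assumption here, not a documented setting.
data_map = umap.UMAP(n_components=2, metric="cosine").fit_transform(embeddings)
```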

- **Curated by:** Leland McInnes
- **Language(s) (NLP):** English
- **License:** Public Domain

### Dataset Sources

The post and newsgroup data was collected using the scikit-learn function `fetch_20newsgroups` and then processed to exclude very short and excessively long posts in the following manner:

```python
import numpy as np
import sklearn.datasets

# Fetch all posts with headers, footers, and quoted reply text stripped.
newsgroups = sklearn.datasets.fetch_20newsgroups(
    subset="all", remove=("headers", "footers", "quotes")
)
# Keep only posts between 9 and 16,383 characters long.
useable_content = np.asarray([len(x) > 8 and len(x) < 16384 for x in newsgroups.data])
documents = [
    doc
    for doc, worth_keeping in zip(newsgroups.data, useable_content)
    if worth_keeping
]
newsgroup_categories = [
    newsgroups.target_names[newsgroup_id]
    for newsgroup_id, worth_keeping in zip(newsgroups.target, useable_content)
    if worth_keeping
]
```
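
This filtering leaves 18,170 posts, matching the `num_examples` value reported in the metadata above.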

## Uses

This dataset is intended to be used for simple experiments and demonstrations of topic modelling and related tasks.
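
If the dataset is hosted on the Hugging Face Hub, it can be loaded directly with the `datasets` library. The repository id below is assumed from this card's title and may differ:

```python
from datasets import load_dataset

# Repository id assumed from the card title; adjust as needed.
dataset = load_dataset("lmcinnes/20-newsgroups-embedded", split="train")
print(dataset.features)  # post, newsgroup, embedding, map
```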

## Personal and Sensitive Information

This data may contain personal information that was posted publicly to NNTP servers in the mid-1990s. It is not believed to contain any sensitive information.

## Bias, Risks, and Limitations

This dataset is a product of public discussion forums in the 1990s. As such, it contains debate, potentially inflammatory and/or derogatory language, and similar content. It does not provide a representative sampling of opinion from the era. This data should only be used for experimental, demonstration, and educational purposes.