---
extra_gated_prompt: "Access to this dataset is automatically granted upon accepting the [**AI2 ImpACT License – Low Risk Artifacts (“LR Agreement”)**](https://allenai.org/licenses/impact-lr) and completing all fields below. All data subsets in this dataset are licensed under the LR Agreement, except for those as listed in the 'License' section of the Dataset Card."
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I AGREE to the terms and conditions of the LR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
dataset_info:
- config_name: 4chan_meta_sep
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: source
dtype: string
- name: metadata
struct:
- name: original_ids
sequence: int64
- name: original_times
sequence: int64
- name: semantic_url
dtype: string
- name: truncated_portion
dtype: string
- config_name: c4_100_domains
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: source
dtype: string
- name: subdomain
dtype: string
- config_name: c4_en
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: source
dtype: string
- name: metadata
struct:
- name: url
dtype: string
- name: date
dtype: string
- name: truncated_portion
dtype: string
- config_name: dolma-v1_5
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: source
dtype: string
- name: subdomain
dtype: string
- name: metadata
dtype: struct
- config_name: dolma_100_programming_languages_no_attributes
features:
- name: text
dtype: string
- name: id
dtype: string
- name: added
dtype: string
- name: source
dtype: string
- name: subdomain
dtype: string
- name: metadata
dtype: struct
- name: timestamp
dtype: timestamp[s]
configs:
- config_name: 4chan_meta_sep
data_files:
- split: val
path: "4chan_meta_sep/val/*"
- split: test
path: "4chan_meta_sep/test/*"
- config_name: c4_100_domains
data_files:
- split: val
path: "c4_100_domains/val/*"
- split: test
path: "c4_100_domains/test/*"
- config_name: c4_en
data_files:
- split: val
path: "c4_en/val/*"
- split: test
path: "c4_en/test/*"
- config_name: dolma-v1_5
data_files:
- split: val
path: "dolma-v1_5/val/*"
- split: test
path: "dolma-v1_5/test/*"
- config_name: dolma_100_programming_languages_no_attributes
data_files:
- split: val
path: "dolma_100_programming_languages_no_attributes/val/*"
- split: test
path: "dolma_100_programming_languages_no_attributes/test/*"
- config_name: dolma_100_subreddits
data_files:
- split: val
path: "dolma_100_subreddits/val/*"
- split: test
path: "dolma_100_subreddits/test/*"
- config_name: falcon-refinedweb
data_files:
- split: val
path: "falcon-refinedweb/val/*"
- split: test
path: "falcon-refinedweb/test/*"
- config_name: gab
data_files:
- split: val
path: "gab/val/*"
- split: test
path: "gab/test/*"
- config_name: m2d2_s2orc_unsplit
data_files:
- split: val
path: "m2d2_s2orc_unsplit/val/*"
- split: test
path: "m2d2_s2orc_unsplit/test/*"
- config_name: m2d2_wikipedia_unsplit
data_files:
- split: val
path: "m2d2_wikipedia_unsplit/val/*"
- split: test
path: "m2d2_wikipedia_unsplit/test/*"
- config_name: manosphere_meta_sep
data_files:
- split: val
path: "manosphere_meta_sep/val/*"
- split: test
path: "manosphere_meta_sep/test/*"
- config_name: mc4
data_files:
- split: val
path: "mc4/val/*"
- split: test
path: "mc4/test/*"
- config_name: ptb
data_files:
- split: val
path: "ptb/val/*"
- split: test
path: "ptb/test/*"
- config_name: redpajama
data_files:
- split: val
path: "redpajama/val/*"
- split: test
path: "redpajama/test/*"
- config_name: twitterAAE_HELM_fixed
data_files:
- split: val
path: "twitterAAE_HELM_fixed/val/*"
- split: test
path: "twitterAAE_HELM_fixed/test/*"
- config_name: wikitext_103
data_files:
- split: val
path: "wikitext_103/val/*"
- split: test
path: "wikitext_103/test/*"
---
# Dataset Card for Paloma
<!-- Provide a quick summary of the dataset. -->
Language models (LMs) commonly report perplexity on monolithic data held out from the training distribution.
Implicitly or explicitly, this data is composed of domains—variations in the distribution of language. Rather than assuming perplexity on one distribution extrapolates to others, Perplexity Analysis for Language Model Assessment (Paloma) measures LM fit to 585 text domains, ranging from NY Times to r/depression on Reddit.
## Dataset Details
### Benchmark Inference and Submissions
We invite submissions to our benchmark and organize results by comparability based on compliance with guidelines such as the removal of benchmark contamination from pretraining. Standardized inference code for running comparable evaluations and details about making submissions to the Paloma benchmark can be found at the following link.
[How to evaluate and how to submit](https://github.com/allenai/ai2-olmo-eval/blob/main/paloma/README.md)
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
Paloma is for examining relative differences in LM fit on domains. We take these relative differences as a proxy of model fit to the shared knowledge, values, and social context that position the humans producing language in a domain. While we expect contemporary LMs to have a limited fit to the most complex of these latent factors of domains, improving fit to all factors is necessary both to improve perplexity and for any actual use of the LM. For example, better perplexity on a particular dialect of English suggests that the model will make a better chatbot for people who speak that dialect.
The sources of evaluation data in Paloma were selected based on the following desiderata: 1) including known resources, 2) including fine-grained domains, 3) including domains representing specific communities of interest. Different lines of research will require different selections of domains; Paloma aims to enable research on differences in LM fit over the hundreds of domains that are readily available in existing metadata.
Note that we are not able to re-host 2 of the 18 sources in Paloma comprising 39 domains. These are The Pile and ICE. The ICE corpus is available on request to the original authors following the instructions [here](https://www.ice-corpora.uzh.ch/en/access.html).
**Curated by:** Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, and Jesse Dodge
**Languages:** We elect to focus just on the language modeling of English and code data.
**License:** The data subsets are licensed under the AI2 ImpACT License - Low Risk Artifacts, except as listed below.
- Wikitext-103 - CC BY-SA
- TwitterAAE - for research purposes only
- Red Pajama - see license details
- M2D2 - CC BY-NC
**Paper:** https://arxiv.org/abs/2312.10523
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
<!-- - [Paper]() -- (TODO update when paper is preprinted) -->
<!-- - [Website](paloma.allen.ai) -->
- [Code](https://github.com/allenai/ai2-olmo-eval/blob/main/paloma/README.md)
- Paloma 1B Baseline Models: [Dolma](https://huggingface.co/allenai/paloma-1b-baseline-dolma), [Pile](https://huggingface.co/allenai/paloma-1b-baseline-pile), [RedPajama](https://huggingface.co/allenai/paloma-1b-baseline-redpajama), [C4](https://huggingface.co/allenai/paloma-1b-baseline-c4), [mC4-en](https://huggingface.co/allenai/paloma-1b-baseline-mc4), [Falcon-RefinedWeb](https://huggingface.co/allenai/paloma-1b-baseline-falcon-refinedweb)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
This benchmark is intended for use in evaluating language model fit to fine-grained domains.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset should be used for evaluating the likelihood that a language model assigns to text from a given domain.
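For instance, perplexity on one domain can be estimated by averaging per-token negative log-likelihood over its documents. The sketch below is only illustrative and is not the standardized inference code linked above; the `gpt2` checkpoint is a placeholder, and the simple truncation here differs from the controls required for comparable benchmark submissions.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; use the standardized inference code linked above for comparable submissions.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def domain_perplexity(texts, max_length=1024):
    """Exponentiated average per-token negative log-likelihood over a list of documents."""
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
        n_tokens = enc["input_ids"].shape[1] - 1  # labels are shifted inside the model
        if n_tokens <= 0:
            continue
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        total_nll += out.loss.item() * n_tokens
        total_tokens += n_tokens
    return math.exp(total_nll / total_tokens)
```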
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
Note that the sources contained in this benchmark carry varying licenses with differing restrictions (see [License](#dataset-description)), which may limit some downstream uses of the data.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The sources in this dataset are each organized into their own subcorpus, which consists of a `val` and a `test` split. Data within each split is organized as files of line-separated JSON, where each line represents a document and its associated metadata. The metadata available varies from source to source, but each line contains at least a `'text'` field with the text of the document.
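For example, a single subcorpus and split can be loaded with the `datasets` library after accepting the gating agreement; the repository id `allenai/paloma` below is an assumption based on where this card is hosted, and the config names follow the YAML header above.

```python
from datasets import load_dataset

# Load the validation split of one subcorpus; config names match those in the YAML header.
# Access must first be granted via the gating form, and you must be authenticated
# (e.g., with `huggingface-cli login`).
c4_en_val = load_dataset("allenai/paloma", "c4_en", split="val")

# Every record has at least a 'text' field; other metadata varies by source.
print(c4_en_val[0]["text"][:200])
```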
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
Perplexity is conventionally reported on held out data from a model's training distribution or a small number of traditional test sets. Such monolithic evaluation ignores potential variation of model fit across different domains that LMs implicitly learn to model. We curate sources of fine-grained textual domains in Paloma to enable evaluation of language model fit to specific domains of text. Paloma is inspired by and incorporates previous work that curates corpora with marked domains (The Pile, M2D2, C4 100 Domains, ICE, TwitterAAE). We conduct stratified subsampling over domains, setting a minimum subsample size based on empirical estimation of the variance over subsamples.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Standard language modeling benchmarks
Though it is common practice to evaluate on held out data from the pretraining corpus of a given model, we evaluate *across* several major pretraining corpora and standard language modeling benchmarks. We also break down performance per domain within the datasets that have multiple domains. Note that although the Paloma benchmark analysis in our paper describes results on the Pile, we are not able to re-host this data.
| Source | Citation | Description |
|-------------------|-----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| c4-en | Raffel et al (2019) via Dodge et al (2021) | Standard contemporary LM pretraining corpus automatically filtered from the April 2019 Common Crawl scrape |
| mc4-en | Xue et al (2021) | The English language portion of a pretraining corpus automatically filtered from 71 Common Crawl scrapes |
| Pile | Gao et al (2020) | Standard contemporary LM benchmark from curated multi-source data including large scale non-webscraped sources |
| Wikitext-103 | Merity et al (2016) | A standard collection of verified “Good” and “Featured” articles on Wikipedia |
| Penn Tree Bank | Marcus et al (1999) via Nunes, Davide. (2020) | Classic Wall Street Journal benchmark with linguistic structure annotations omitted |
| RedPajama | Together Computer (2023) | A publicly available reproduction of the LLaMA (Touvron et al., 2023) pretraining source mixture, combining large amounts of webscraped text with smaller curated sources |
| Falcon-RefinedWeb | Penedo et al. (2023) | A corpus of English sampled from all Common Crawl scrapes until June 2023, more aggressively filtered and deduplicated than c4 and mc4-en |
| Dolma v1.5 | Soldaini et al. (2023) | A three trillion token corpus that samples sources commonly used to train LMs in order to enable open research on pretraining data |
#### Fine-grained domain benchmarks
Where typical pretraining corpora offer at most tens of labeled domains usually based on where the data is sourced, we examine datasets with up to an order of magnitude more domains. Existing datasets (M2D2 and c4 100 Domains) and datasets we curate from Dolma v1.5 use metadata to define hundreds of domains over Wikipedia, Semantic Scholar, Common Crawl, Reddit, and Github data. These include diverse domains from *Culture and the arts: Performing arts*, a topic on Wikipedia, to *r/depression*, a forum on Reddit for mental health support.
| Source | Citation | Description |
|---------------------------------|--------------------------------------------------|-----------------------------------------------------------------------------------|
| M2D2 S2ORC | Reid et al (2022) | Papers from Semantic Scholar grouped by hierarchical academic field categories |
| M2D2 Wiki | Reid et al (2022) | Wikipedia articles grouped by hierarchical categories in the Wikipedia ontology |
| c4 100 Domains | Chronopoulou et al (2021) | Balanced samples of the top 100 URL domains in C4 |
| Dolma 100 Subreddits | Soldaini et al. (2023) | Balanced samples of the top 100 Subreddits from the Dolma Reddit subset |
| Dolma 100 Programming Languages | Kocetkov et al. (2022) via Soldaini et al. (2023) | Balanced samples of the top 100 programming languages from the Dolma Stack subset |
#### Disparities between speech communities
Some communities are known to be underserved by existing models. Following HELM, we measure disparities in performance on corpora of African American English and White aligned English from TwitterAAE, as well as nine corpora of English from different countries in the ICE dataset. Note that although the Paloma benchmark analysis in our paper describes results on ICE, we are not able to re-host this data.
| Source | Citation | Description |
|------------|----------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ICE | Greenbaum and Nelson (1996) via Liang et al (2022) | English from around the world curated by local experts, with subsets for Canada, East Africa, Hong Kong, India, Ireland, Jamaica, Philippines, Singapore, and the USA |
| TwitterAAE | Blodgett et al. (2016) via Liang et al (2022) | Balanced sets of tweets classified as African American or White aligned English |
#### Fringe sources previously studied for problematic discourse
Text from some fringe online communities has been shown to contain larger proportions of hate speech and toxicity than more mainstream sources. [Longpre et al. (2023)](https://arxiv.org/abs/2305.13169) have shown that varying the amount of toxic content in pretraining data exposes a tradeoff between non-toxic generation and the ability to classify toxicity, indicating that model fit to discourse with toxicity is worth measuring. Measuring perplexity on Manosphere, Gab, and 4chan characterizes model familiarity with the distinct social contexts in which toxic language arises.
| Source | Citation | Description |
|-------------------|------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| Manosphere Corpus | Ribeiro et al (2020) | 9 forums where a set of related masculinist ideologies developed over the 2000s and 2010s |
| Gab Corpus | Zannettou et al (2018) | Data from 2016-18 from an alt-right, free-speech-oriented social media platform shown to contain more hate speech than mainstream platforms |
| 4chan Corpus | Papasavva et al (2020) | Data from 2016-19 from a politics subforum of an anonymity-focused forum found to contain among the highest rates of toxic content |
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The data in Paloma are sampled from existing sources. Most often, perplexity evaluation data is subsampled uniformly over the original distribution of domains in a source, resulting in more or fewer tokens from each domain in the evaluation data based on how well represented they are in the corpus. We instead employ stratified sampling, in which all sources with marked domains are partitioned by domain and a uniform sample of the same size is taken from each partition. Specifically, documents are sampled from each domain until a target number of tokens is reached. This helps ensure that no domains are lost or left very small after subsampling.
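A minimal sketch of this stratified subsampling is given below; the token budget, shuffling, and whitespace token count are illustrative assumptions rather than the exact procedure used to build Paloma.

```python
import random

def stratified_subsample(docs_by_domain, target_tokens_per_domain, count_tokens, seed=0):
    """Sample documents from each domain until a per-domain token budget is reached."""
    rng = random.Random(seed)
    sample = {}
    for domain, docs in docs_by_domain.items():
        shuffled = list(docs)
        rng.shuffle(shuffled)
        chosen, tokens = [], 0
        for doc in shuffled:
            if tokens >= target_tokens_per_domain:
                break
            chosen.append(doc)
            tokens += count_tokens(doc["text"])
        sample[domain] = chosen
    return sample

# Example usage with a whitespace token count standing in for a real tokenizer.
# subsample = stratified_subsample(docs_by_domain, 100_000, lambda text: len(text.split()))
```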
In social media domains with additional metadata that is typically displayed along with posts, we format metadata such as timestamps into the document `'text'` field. Where information is available about how threads of posts are connected, documents in that domain contain all posts in a given thread.
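As a purely hypothetical illustration of this formatting (the field names and templates below are assumptions; source-specific details are in the paper):

```python
def format_post(post):
    """Hypothetical: prepend display metadata such as a timestamp to a post's text."""
    return f"{post['timestamp']} {post['text']}"

def format_thread(posts):
    """Hypothetical: concatenate all posts in a thread into a single document."""
    return "\n".join(format_post(post) for post in posts)
```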
Additional details on source specific processing are available in our paper.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Text data from each of the sources curated in Paloma is created by varying sets of original authors. Some sources are collected from users of specific internet fora, such as particular subreddits. Other data is collected on the basis of expert or automated classification of demographic groups. Other data is collected from authors of archival material including scientific preprints, Wikipedia, and code repositories. Lastly, data sampled from standard pretraining corpora comes from authors whose text was collected through automatic web scraping and large-scale sampling of archival sources, making it difficult to recover much specific information about these authors.
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
No annotation is done on this data.
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
No annotation is done on this data.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
Sources in Paloma may contain personally identifiable information (PII). No attempt is made to measure or remove this information for the following reason: Paloma provides a small subsample of already publicly available data. The small size of this subsample renders this data less useful for aggregation of PII information than the already available public sources which we subsample.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
It is beyond the scope of any one group of researchers to prescribe an exhaustive set of domains that should be examined for an LM. Rather, Paloma brings together a substantial selection of domains that are identifiable from already available metadata to demonstrate the kinds of analyses possible with hundreds of domains and rigorous experimental controls.
Different research goals will motivate different definitions and selections of domains, but other researchers can apply the guidelines we detail in our paper to novel fine-grained domains suitable for their research questions. One of the key advantages of evaluating a model by its fit to a collection of text representing a domain is that such domains can be identified not just by researchers who study LMs. We hope future work will identify many more domains that no one discipline would think to look at.
In Paloma, we distinguish sources from domains, although not all cases permit such easy distinction. We use *source* to refer to a selection of data that is characterized by the decisions of the people who curated that data, whether that curation is automatic as in scraping C4 or manual as in selecting the subcorpora of The Pile. By contrast, we use *domain* to refer to a set of documents that belong together because they are originally produced by a group of humans that share a distinct social context. Considered as such, domains may overlap; a document's author may belong to the set of English speakers in Jamaica and the set of AI researchers. Further note that domains are often latent categorizations which we only approximate because complete metadata does not exist.
Also, some domains in Paloma appear in multiple sources, such as academic papers. Though The Pile and RedPajama process academic papers differently, the subcorpora on academic papers in each source represent different approximations of the same or very similar domains. However, for the sake of simplicity, we make the reductive assumption of counting all 585 domains in Paloma as fully distinct.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
In our paper we outline guidelines for evaluating language model fit. We encourage users of Paloma to adopt these experimental controls, which address metric variance from subsampling, benchmark contamination, differing tokenization, training data order, and evaluation data format.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{paloma,
  title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
  author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Harsh Jha, Ananya and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
  journal={technical report},
  year={2023},
  url={https://paloma.allen.ai/}
}
```
<!-- [More Information Needed] -->
## Dataset Card Contact
{ianm,jessed}@allenai.org