---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: alignment-research-dataset
dataset_info:
  features:
  - name: id
    dtype: string
  - name: source
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: large_string
  - name: url
    dtype: string
  - name: date_published
    dtype: string
  - name: authors
    sequence: string
  - name: summary
    sequence: string
  - name: source_type
    dtype: string
  - name: book_title
    dtype: string
  - name: karma
    dtype: int32
  - name: votes
    dtype: int32
  - name: words
    dtype: int32
  - name: comment_count
    dtype: int32
  - name: tags
    sequence: string
  - name: modified_at
    dtype: string
  - name: alias
    dtype: string
  - name: data_last_modified
    dtype: string
  - name: abstract
    dtype: string
  - name: author_comment
    dtype: string
  - name: journal_ref
    dtype: string
  - name: doi
    dtype: string
  - name: primary_category
    dtype: string
  - name: categories
    sequence: string
  - name: initial_source
    dtype: string
  - name: bibliography_bib
    sequence:
    - name: title
      dtype: string
  config_name: all
  splits:
  - name: train
    num_bytes: 471644446
    num_examples: 14271
  download_size: 484827959
  dataset_size: 471644446
---
# AI Alignment Research Dataset

The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety, drawn from books, research papers, and alignment-related blog posts. This is a work in progress: components are still undergoing a cleaning process and will be updated more regularly.
## Sources

Here is the list of sources along with sample contents:
- agisf - recommended readings from AGI Safety Fundamentals
- aisafety.info - Stampy's FAQ
- arxiv - relevant research papers
- blogs - entire websites automatically scraped
  - AI Impacts
  - AI Safety Camp
  - carado.moe
  - Cold Takes
  - DeepMind technical blogs
  - DeepMind AI Safety Research
  - EleutherAI
  - generative.ink
  - Gwern Branwen's blog
  - Jack Clark's Import AI
  - MIRI
  - Jacob Steinhardt's blog
  - ML Safety Newsletter
  - Transformer Circuits Thread
  - OpenAI Research
  - Victoria Krakovna's blog
  - Eliezer Yudkowsky's blog
- eaforum - selected posts
- lesswrong - selected posts
- special_docs - individual documents curated from various resources
  - Make a suggestion for sources not already in the dataset
- youtube - playlists & channels
## Keys

All entries contain the following keys:
- `id` - string of unique identifier
- `source` - string of data source listed above
- `title` - string of document title
- `authors` - list of strings
- `text` - full text of document content
- `url` - string of valid link to text content
- `date_published` - date of publication in UTC format
Additional keys may be available depending on the source document.
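In Python terms, a single entry can be pictured as a dictionary along the following lines (a sketch with made-up placeholder values, not an actual record from the dataset):

```python
# Illustrative shape of one entry; all values below are placeholders.
entry = {
    'id': 'a1b2c3d4',                          # unique identifier
    'source': 'arxiv',                         # one of the sources listed above
    'title': 'An Example Alignment Paper',
    'authors': ['Jane Doe', 'John Smith'],     # list of strings
    'text': 'Full text of the document...',
    'url': 'https://example.org/paper',
    'date_published': '2022-01-01T00:00:00Z',  # publication date in UTC
}
```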
## Usage

Execute the following code to download and parse the files:
```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset')
```

To only get the data for a specific source, pass it in as the second argument, e.g.:

```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
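As a quick sanity check, the snippet below (a sketch, assuming the `train` split indicated in the metadata above) lists the available source configurations and prints a few of the common keys for the first LessWrong entry:

```python
from datasets import load_dataset, get_dataset_config_names

# List the per-source configurations available for this dataset.
print(get_dataset_config_names('StampyAI/alignment-research-dataset'))

# Load a single source and inspect the common keys of the first entry.
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
example = data['train'][0]
for key in ['id', 'source', 'title', 'authors', 'url', 'date_published']:
    print(f'{key}: {example[key]}')
```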
## Limitations and Bias

LessWrong posts are overweighted toward content on doom and existential risk, so please be aware of this bias when training or fine-tuning generative language models on the dataset.
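If that skew matters for your use case, one possible mitigation (a sketch using the standard `Dataset.filter` method; the dataset itself does not provide any rebalancing) is to drop or downsample those entries before training:

```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset')

# Drop LessWrong posts entirely; adapt the predicate to downsample instead.
filtered = data['train'].filter(lambda row: row['source'] != 'lesswrong')
print(f"{data['train'].num_rows} -> {filtered.num_rows} rows after filtering")
```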
## Contributing

The scraper used to generate this dataset is open-sourced on GitHub and currently maintained by volunteers at StampyAI / AI Safety Info. Learn more or join us on Discord.
## Rebuilding info

This README contains information about the number of rows and the features of the dataset, which should be regenerated each time the underlying data changes. To do so, run:

```bash
datasets-cli test ./alignment-research-dataset --save_info --all_configs
```
## Citing the Dataset

For more information, see the accompanying paper and LessWrong post. Please use the following citation when using the dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2022.4338861 (2022).