---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: alignment-research-dataset
dataset_info:
  features:
  - name: id
    dtype: string
  - name: source
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: large_string
  - name: url
    dtype: string
  - name: date_published
    dtype: string
  - name: authors
    sequence: string
  - name: summary
    sequence: string
  - name: source_type
    dtype: string
  - name: book_title
    dtype: string
  - name: karma
    dtype: int32
  - name: votes
    dtype: int32
  - name: words
    dtype: int32
  - name: comment_count
    dtype: int32
  - name: tags
    sequence: string
  - name: modified_at
    dtype: string
  - name: alias
    dtype: string
  - name: data_last_modified
    dtype: string
  - name: abstract
    dtype: string
  - name: author_comment
    dtype: string
  - name: journal_ref
    dtype: string
  - name: doi
    dtype: string
  - name: primary_category
    dtype: string
  - name: categories
    sequence: string
  - name: initial_source
    dtype: string
  - name: bibliography_bib
    sequence:
    - name: title
      dtype: string
  config_name: all
  splits:
  - name: train
    num_bytes: 471644446
    num_examples: 14271
  download_size: 484827959
  dataset_size: 471644446
---
# AI Alignment Research Dataset
The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety, drawn from various books, research papers, and alignment-related blog posts. This is a work in progress: components are still being cleaned up so that they can be updated more regularly.
## Sources
Here is the list of sources, along with sample contents:
- [agentmodel](https://agentmodels.org/)
- [agisf](https://course.aisafetyfundamentals.com/) - recommended readings from AGI Safety Fundamentals
- [aisafety.info](https://aisafety.info/) - Stampy's FAQ
- [alignmentforum](https://www.alignmentforum.org)
- [alignment_newsletter](https://rohinshah.com/alignment-newsletter/)
- [arbital](https://arbital.com/)
- [arxiv](https://arxiv.org/) - relevant research papers
- blogs - entire websites automatically scraped
  - [AI Impacts](https://aiimpacts.org/)
  - [AI Safety Camp](https://aisafety.camp/)
  - [carado.moe](https://carado.moe/)
  - [Cold Takes](https://www.cold-takes.com/)
  - [DeepMind technical blogs](https://www.deepmind.com/blog-categories/technical-blogs)
  - [DeepMind AI Safety Research](https://deepmindsafetyresearch.medium.com/)
  - [EleutherAI](https://blog.eleuther.ai/)
  - [generative.ink](https://generative.ink/posts/)
  - [Gwern Branwen's blog](https://gwern.net/)
  - [Jack Clark's Import AI](https://importai.substack.com/)
  - [MIRI](https://intelligence.org/)
  - [Jacob Steinhardt's blog](https://jsteinhardt.wordpress.com/)
  - [ML Safety Newsletter](https://newsletter.mlsafety.org/)
  - [Transformer Circuits Thread](https://transformer-circuits.pub/)
  - [OpenAI Research](https://openai.com/research/)
  - [Victoria Krakovna's blog](https://vkrakovna.wordpress.com/)
  - [Eliezer Yudkowsky's blog](https://www.yudkowsky.net/)
- [distill](https://distill.pub/)
- [eaforum](https://forum.effectivealtruism.org/) - selected posts
- [lesswrong](https://www.lesswrong.com/) - selected posts
- special_docs - individual documents curated from various resources
  - [Make a suggestion](https://bit.ly/ard-suggestion) for sources not already in the dataset
- youtube - playlists & channels
  - [AI Alignment playlist](https://www.youtube.com/playlist?list=PLCRVRLd2RhZTpdUdEzJjo3qhmX3y3skWA) and other lists
  - [AI Explained](https://www.youtube.com/@aiexplained-official)
  - [Evan Hubinger's AI Safety Talks](https://www.youtube.com/@aisafetytalks)
  - [AI Safety Reading Group](https://www.youtube.com/@aisafetyreadinggroup/videos)
  - [AiTech - TU Delft](https://www.youtube.com/@AiTechTUDelft/)
  - [Rob Miles AI](https://www.youtube.com/@RobertMilesAI)
## Keys
All entries contain the following keys:
- `id` - unique string identifier
- `source` - string name of the data source (one of the sources listed above)
- `title` - string title of the document
- `authors` - list of author names, as strings
- `text` - full text of the document content
- `url` - string with a valid link to the text content
- `date_published` - publication date, in UTC format
Additional keys may be available depending on the source document.
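As a quick way to see which keys a particular source actually provides, you can load one configuration and inspect a single record (a minimal sketch; the column names follow the schema above, but which ones are populated varies by source):
```python
from datasets import load_dataset

# Load a single source configuration as an example; any source name listed above works.
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')

record = data['train'][0]
print(sorted(record.keys()))           # all columns available in this config
print(record['title'], record['url'])  # core keys shared by every entry
```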
## Usage
Execute the following code to download and parse the files:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')
```
To load only the data for a specific source, pass its name as the second argument, e.g.:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
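If you only want to skim a source without downloading the full archive, the standard `datasets` streaming mode should also work here (a sketch, assuming the hosted files support streaming):
```python
from datasets import load_dataset

# Stream records one at a time instead of downloading everything up front.
stream = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong', streaming=True)

for row in stream['train'].take(3):
    print(row['source'], '-', row['title'])
```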
## Limitations and Bias
LessWrong posts are overweighted toward content on doom and existential risk, so bear this bias in mind when training or fine-tuning generative language models on the dataset.
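One simple way to gauge this skew before training is to tally how many documents each source contributes (a minimal sketch using the default configuration):
```python
from collections import Counter
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset')

# Count documents per source to see how dominant the forum posts are.
counts = Counter(data['train']['source'])
for source, n in counts.most_common():
    print(f'{source}: {n}')
```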
## Contributing
The scraper to generate this dataset is open-sourced on [GitHub](https://github.com/StampyAI/alignment-research-dataset) and currently maintained by volunteers at StampyAI / AI Safety Info. [Learn more](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr) or join us on [Discord](https://discord.gg/vjFSCDyMCy).
## Rebuilding info
This README contains information about the number of rows and the features of the dataset, which should be regenerated whenever the data changes. To do so, run:
`datasets-cli test ./alignment-research-dataset --save_info --all_configs`
## Citing the Dataset
For more information, see the [paper](https://arxiv.org/abs/2206.02841) and the accompanying [LessWrong post](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai). Please use the following citation when using the dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).