---
language:
- en
size_categories:
- 1B<n<10B
---
A large dataset of LA Times articles spanning over a century (1914-2024). In total, there are 3.6M full-text articles comprising 12B characters. Using the Llama-3 tokenizer, this comes out to 2.6B tokens.
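
For reference, a minimal sketch of how a token count like this could be reproduced with the `datasets` and `transformers` libraries. The dataset repo ID, the `text` column name, and the tokenizer repo ID below are assumptions, not part of this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Hypothetical repo ID -- substitute the actual dataset path.
ds = load_dataset("your-username/la-times-articles", split="train")

# The Llama-3 tokenizer is gated on the Hub; this repo ID is an assumption.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

total_tokens = 0
for batch in ds.iter(batch_size=1_000):
    # "text" is assumed to be the column holding the full article body.
    encodings = tokenizer(batch["text"], add_special_tokens=False)
    total_tokens += sum(len(ids) for ids in encodings["input_ids"])

print(f"Total tokens: {total_tokens:,}")
```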
Note: 164,116 articles (4.5%) have `None` as their `dateTime`. This should not pose much of an issue, since articles with no text were already filtered out.
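
If the missing timestamps matter for your use case, those rows can be dropped at load time. A minimal sketch, assuming the field is stored under the column name `dateTime` and using a placeholder repo ID:

```python
from datasets import load_dataset

# Hypothetical repo ID -- substitute the actual dataset path.
ds = load_dataset("your-username/la-times-articles", split="train")

# Keep only the articles that carry a publication timestamp.
dated = ds.filter(lambda row: row["dateTime"] is not None)
print(f"Kept {len(dated):,} of {len(ds):,} articles")
```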