---
license: mit
task_categories:
- text-classification
- feature-extraction
language:
- en
size_categories:
- 10K<n<100K
---

# A cleaned dataset from [paperswithcode.com](https://paperswithcode.com/)
*Last dataset update: July 2023*

This is a cleaned-up dataset obtained from [paperswithcode.com](https://paperswithcode.com/) through their [API](https://paperswithcode.com/api/v1/docs/) service. It consists of around 56K papers, carefully categorized into 3K tasks and 16 areas. The papers include arXiv and NIPS IDs as well as the title, abstract, and other meta information.
It can be used for training text classifiers that concentrate on the use of specific AI and ML methods and frameworks.

### Contents
It contains the following tables:

- papers.csv (around 56K)
- papers_train.csv (80% from 56K)
- papers_test.csv (20% from 56K)
- tasks.csv
- areas.csv
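The train/test files appear to be a simple 80/20 split of `papers.csv`. A minimal sketch of how such a split can be reproduced with the standard library (the column names here are illustrative assumptions, not taken from the actual files):

```python
import random

# Toy stand-in for the rows of papers.csv; real column names may differ.
rows = [{"uuid": f"id-{i}", "title": f"Paper {i}"} for i in range(100)]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(rows)

cut = int(len(rows) * 0.8)          # 80% train, 20% test
train, test = rows[:cut], rows[cut:]

print(len(train), len(test))  # 80 20
```

Shuffling before cutting avoids any ordering bias (e.g. papers sorted by date) leaking into the split.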

### Specials
UUIDs were added to the dataset since the PapersWithCode IDs (pwc_ids) are not distinct enough. These UUIDs may change in the future with new versions of the dataset.
Also, embeddings were calculated for all of the 56K papers using the brilliant [SciNCL](https://huggingface.co/malteos/scincl) model, along with dimensionality-reduced 2D coordinates computed with UMAP.
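The precomputed embeddings lend themselves to nearest-neighbor search over papers, e.g. finding related work by cosine similarity. A minimal stdlib-only sketch with toy vectors (the real SciNCL embeddings are 768-dimensional; the paper IDs and values below are purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 3-D stand-ins for the 768-D SciNCL embeddings.
embeddings = {
    "paper_a": [0.9, 0.1, 0.0],
    "paper_b": [0.8, 0.2, 0.1],
    "paper_c": [0.0, 0.1, 0.9],
}

# Rank all other papers by similarity to paper_a.
query = embeddings["paper_a"]
ranked = sorted(
    (pid for pid in embeddings if pid != "paper_a"),
    key=lambda pid: cosine(query, embeddings[pid]),
    reverse=True,
)
print(ranked[0])  # paper_b is closest to paper_a
```

The same ranking idea applies unchanged to the dataset's full-size embedding vectors.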

There is also a simple Python notebook that was used to obtain and refactor the dataset.