- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- You Only Look Once: Unified, Real-Time Object Detection
  Paper • 1506.02640 • Published • 1
- HEp-2 Cell Image Classification with Deep Convolutional Neural Networks
  Paper • 1504.02531 • Published
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
  Paper • 2401.05566 • Published • 26
Collections including paper arxiv:1409.0473
- Recurrent Neural Network Regularization
  Paper • 1409.2329 • Published
- Pointer Networks
  Paper • 1506.03134 • Published
- Order Matters: Sequence to sequence for sets
  Paper • 1511.06391 • Published
- GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism
  Paper • 1811.06965 • Published
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 108
- The Platonic Representation Hypothesis
  Paper • 2405.07987 • Published • 2
- The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning
  Paper • 2304.05366 • Published • 1
- Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
  Paper • 1512.02479 • Published • 1
- SMOTE: Synthetic Minority Over-sampling Technique
  Paper • 1106.1813 • Published • 1
- Scikit-learn: Machine Learning in Python
  Paper • 1201.0490 • Published • 1
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
  Paper • 1406.1078 • Published
- Distributed Representations of Sentences and Documents
  Paper • 1405.4053 • Published
- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- ImageNet Large Scale Visual Recognition Challenge
  Paper • 1409.0575 • Published • 8
- Sequence to Sequence Learning with Neural Networks
  Paper • 1409.3215 • Published • 3
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11