RedPajama: an Open Dataset for Training Large Language Models Paper • 2411.12372 • Published Nov 19, 2024 • 47
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees Paper • 2110.03313 • Published Oct 7, 2021 • 1
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient Paper • 2301.11913 • Published Jan 27, 2023 • 1
A critical look at the evaluation of GNNs under heterophily: Are we really making progress? Paper • 2302.11640 • Published Feb 22, 2023
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model Paper • 2211.05100 • Published Nov 9, 2022 • 27
Petals: Collaborative Inference and Fine-tuning of Large Models Paper • 2209.01188 • Published Sep 2, 2022 • 2
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding Paper • 2402.12374 • Published Feb 19, 2024 • 3
The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models Paper • 2404.05904 • Published Apr 8, 2024 • 8
Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements Paper • 2401.06766 • Published Jan 12, 2024 • 2
Distributed Inference and Fine-tuning of Large Language Models Over The Internet Paper • 2312.08361 • Published Dec 13, 2023 • 25
Hypernymy Understanding Evaluation of Text-to-Image Models via WordNet Hierarchy Paper • 2310.09247 • Published Oct 13, 2023 • 3
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU Paper • 2303.06865 • Published Mar 13, 2023 • 1