Skywork-Reward-Data-Collection • Collection • Open-source preference datasets used to train the Skywork reward model series • 17 items • Updated Oct 12 • 11 upvotes
Uni-SMART: Universal Science Multimodal Analysis and Research Transformer • Paper • 2403.10301 • Published Mar 15, 2024 • 52 upvotes
In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss • Paper • 2402.10790 • Published Feb 16, 2024 • 41 upvotes
Linear Transformers with Learnable Kernel Functions are Better In-Context Models • Paper • 2402.10644 • Published Feb 16, 2024 • 79 upvotes (sketched after this list)
Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time • Paper • 2310.17157 • Published Oct 26, 2023 • 12 upvotes
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models • Paper • 2308.13137 • Published Aug 25, 2023 • 17 upvotes
UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition • Paper • 2308.03279 • Published Aug 7, 2023 • 21 upvotes
TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents • Paper • 2308.03427 • Published Aug 7, 2023 • 14 upvotes
Seeing through the Brain: Image Reconstruction of Visual Perception from Human Brain Signals • Paper • 2308.02510 • Published Jul 27, 2023 • 21 upvotes
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models • Paper • 2308.01390 • Published Aug 2, 2023 • 32 upvotes
Multimodal Neurons in Pretrained Text-Only Transformers • Paper • 2308.01544 • Published Aug 3, 2023 • 15 upvotes
Retentive Network: A Successor to Transformer for Large Language Models • Paper • 2307.08621 • Published Jul 17, 2023 • 170 upvotes (sketched after this list)
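The "Linear Transformers with Learnable Kernel Functions" entry above (2402.10644) concerns linear attention in which the kernel feature map is learned rather than fixed. Below is a minimal PyTorch sketch of causal linear attention with a learnable feature map; the particular map phi(x) = (x * gamma + beta)^2 and the class name CausalLinearAttention are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class CausalLinearAttention(nn.Module):
    """Single-head causal linear attention with a learnable kernel.

    Illustrative sketch only: phi(x) = (x * gamma + beta)**2, a learnable
    affine map followed by squaring, stands in for the learnable kernel of
    arXiv:2402.10644 and may not match its exact parameterization.
    """
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(dim))   # learnable shift

    def phi(self, x):
        # Squaring keeps features non-negative, so the implied
        # attention weights stay non-negative without a softmax.
        return (x * self.gamma + self.beta) ** 2

    def forward(self, q, k, v):
        # q, k: (seq, dim); v: (seq, dim_v)
        q, k = self.phi(q), self.phi(k)
        S = torch.zeros(q.shape[1], v.shape[1])      # running sum of k^T v
        z = torch.zeros(q.shape[1])                  # running sum of k (normalizer)
        out = torch.zeros(v.shape)
        for n in range(q.shape[0]):                  # constant memory per step
            S = S + torch.outer(k[n], v[n])
            z = z + k[n]
            out[n] = (q[n] @ S) / (q[n] @ z + 1e-6)
        return out
```

Because the feature map factors the attention matrix, the running sums S and z replace the full quadratic attention computation, which is what makes linear attention attractive as an in-context model at long sequence lengths.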
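The Retentive Network entry above (2307.08621) replaces softmax attention with retention, which admits an equivalent recurrent form: the state decays by a factor gamma at each step and absorbs the current key-value outer product, so inference costs O(1) per token. A minimal single-head sketch under simplifying assumptions (scalar decay, no group normalization, no multi-scale per-head decay schedule):

```python
import torch

def recurrent_retention(q, k, v, gamma=0.9):
    """Single-head retention in its recurrent form (after arXiv:2307.08621).

    Simplified sketch: one head, a single scalar decay, and none of the
    paper's group normalization or per-head multi-scale decays.
    q, k: (seq, d_k); v: (seq, d_v); gamma: decay in (0, 1).
    """
    S = torch.zeros(q.shape[1], v.shape[1])    # recurrent key-value state
    out = torch.zeros(q.shape[0], v.shape[1])
    for n in range(q.shape[0]):
        # State decays by gamma, then absorbs the current key-value outer product.
        S = gamma * S + torch.outer(k[n], v[n])
        out[n] = q[n] @ S                      # query reads the state
    return out
```

In the paper this recurrence is equivalent to a parallel form in which Q K^T is masked by powers of the decay, which is what lets retention train in parallel like attention while decoding recurrently.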