DataComp: In search of the next generation of multimodal datasets Paper • 2304.14108 • Published Apr 27, 2023
Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias Paper • 2306.15895 • Published Jun 28, 2023
SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality Paper • 2306.14610 • Published Jun 26, 2023
Subclass-balancing Contrastive Learning for Long-tailed Recognition Paper • 2306.15925 • Published Jun 28, 2023
WRENCH: A Comprehensive Benchmark for Weak Supervision Paper • 2109.11377 • Published Sep 23, 2021
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework Paper • 2308.08155 • Published Aug 16, 2023
When to Learn What: Model-Adaptive Data Augmentation Curriculum Paper • 2309.04747 • Published Sep 9, 2023
Training Language Model Agents without Modifying Language Models Paper • 2402.11359 • Published Feb 17, 2024
m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks Paper • 2403.11085 • Published Mar 17, 2024
DataComp-LM: In search of the next generation of training sets for language models Paper • 2406.11794 • Published Jun 17, 2024
Adaptive In-conversation Team Building for Language Model Agents Paper • 2405.19425 • Published May 29, 2024
TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action Paper • 2412.05479 • Published Dec 2024
ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models Paper • 2412.07012 • Published Dec 2024