Large Language Monkeys: Scaling Inference Compute with Repeated Sampling Paper • 2407.21787 • Published Jul 31, 2024 • 12
Byte Latent Transformer: Patches Scale Better Than Tokens Paper • 2412.09871 • Published Dec 2024 • 71
Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters Paper • 2408.03314 • Published Aug 6, 2024 • 48
Euclid: Supercharging Multimodal LLMs with Synthetic High-Fidelity Visual Descriptions Paper • 2412.08737 • Published Dec 2024 • 51
Multimodal Latent Language Modeling with Next-Token Diffusion Paper • 2412.08635 • Published Dec 2024 • 39
Article GPU Poor Savior: Revolutionizing Low-Bit Open Source LLMs and Cost-Effective Edge Computing By NicoNico • May 25 • 10