- Attention Is All You Need
  Paper • 1706.03762 • Published • 54
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 14
- Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 20
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 22
Justin (jxtngx)
AI & ML interests: None yet
Recent Activity
- liked a dataset about 22 hours ago: yahma/alpaca-cleaned
- updated a collection 3 months ago: Meta papers
- updated a collection 3 months ago: Useful datasets
Organizations
Collections: 14
models: None public yet
datasets: None public yet