- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 28
- IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models
  Paper • 2308.06721 • Published • 29
- LEDITS++: Limitless Image Editing using Text-to-Image Models
  Paper • 2311.16711 • Published • 22

Collections including paper arxiv:2211.05100

- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11
- OPT: Open Pre-trained Transformer Language Models
  Paper • 2205.01068 • Published • 2

- Nemotron-4 15B Technical Report
  Paper • 2402.16819 • Published • 42
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 52
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 14
- Reformer: The Efficient Transformer
  Paper • 2001.04451 • Published

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 28
- Contrastive Language-Image Pre-training for the Italian Language
  Paper • 2108.08688 • Published • 2
- IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation
  Paper • 2203.03759 • Published • 5
- Spanish Pre-trained BERT Model and Evaluation Data
  Paper • 2308.02976 • Published • 3

- Mistral 7B
  Paper • 2310.06825 • Published • 47
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 20
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

- Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurements
  Paper • 2210.01970 • Published • 11
- Zephyr: Direct Distillation of LM Alignment
  Paper • 2310.16944 • Published • 122
- Datasets: A Community Library for Natural Language Processing
  Paper • 2109.02846 • Published • 10
- HuggingFace's Transformers: State-of-the-art Natural Language Processing
  Paper • 1910.03771 • Published • 16

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 28
- CsFEVER and CTKFacts: Acquiring Czech data for fact verification
  Paper • 2201.11115 • Published
- Training language models to follow instructions with human feedback
  Paper • 2203.02155 • Published • 16
- FinGPT: Large Generative Models for a Small Language
  Paper • 2311.05640 • Published • 27