Tackling the Generative Learning Trilemma with Denoising Diffusion GANs • Paper • 2112.07804 • Published Dec 15, 2021
ModernBERT • Collection • Bringing BERT into modernity via both architecture changes and scaling • 3 items • Updated 15 days ago
Continuous Autoregressive Models with Noise Augmentation Avoid Error Accumulation • Paper • 2411.18447 • Published Nov 27, 2024
Scaling Transformers for Low-Bitrate High-Quality Speech Coding • Paper • 2411.19842 • Published Nov 29, 2024
Cosmos Tokenizer • Collection • A suite of image and video tokenizers • 12 items • Updated 16 days ago
Molmo • Collection • Artifacts for open multimodal language models • 5 items • Updated Nov 27, 2024
Lina-Speech: Gated Linear Attention is a Fast and Parameter-Efficient Learner for text-to-speech synthesis • Paper • 2410.23320 • Published Oct 30, 2024
Jetfire: Efficient and Accurate Transformer Pretraining with INT8 Data Flow and Per-Block Quantization • Paper • 2403.12422 • Published Mar 19, 2024
Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models • Paper • 2410.11081 • Published Oct 14, 2024
BigVGAN: A Universal Neural Vocoder with Large-Scale Training • Paper • 2206.04658 • Published Jun 9, 2022
Moshi v0.1 Release • Collection • MLX, Candle & PyTorch model checkpoints released as part of the Moshi release from Kyutai. Run inference via https://github.com/kyutai-labs/moshi • 13 items • Updated Sep 18, 2024
Parallelizing Linear Transformers with the Delta Rule over Sequence Length • Paper • 2406.06484 • Published Jun 10, 2024
Gated Linear Attention Transformers with Hardware-Efficient Training • Paper • 2312.06635 • Published Dec 11, 2023
Gated Slot Attention for Efficient Linear-Time Sequence Modeling • Paper • 2409.07146 • Published Sep 11, 2024
Ultra-lightweight Neural Differential DSP Vocoder For High Quality Speech Synthesis • Paper • 2401.10460 • Published Jan 19, 2024
EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks • Paper • 2402.00892 • Published Jan 31, 2024
ByT5: Towards a token-free future with pre-trained byte-to-byte models • Paper • 2105.13626 • Published May 28, 2021
Parler-TTS: fully open-source high-quality TTS • Collection • If you want to find out more about how these models were trained, and even fine-tune them yourself, check out the Parler-TTS repository on GitHub • 8 items • Updated Dec 2, 2024
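For anyone who wants to try the Parler-TTS checkpoints from the collection above, here is a minimal inference sketch adapted from the repository's README. It assumes the parler_tts package is installed from https://github.com/huggingface/parler-tts and that "parler-tts/parler-tts-mini-v1" is a currently published checkpoint name; both details come from the repository, not from this list, so check the README if they have changed.

```python
# Minimal Parler-TTS inference sketch (assumption: parler_tts installed via
# `pip install git+https://github.com/huggingface/parler-tts.git`, and
# "parler-tts/parler-tts-mini-v1" is a published checkpoint).
import torch
import soundfile as sf
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

prompt = "Hey, how are you doing today?"  # the text to be spoken
# A free-text description conditions the voice (speaker style, speed, pitch).
description = "A female speaker delivers a slightly expressive speech with moderate speed and pitch."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# generate() returns a waveform tensor; write it out at the model's native sampling rate.
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio, model.config.sampling_rate)
```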