ZhifengKong
committed
Commit 9e8c151
1 Parent(s): 77da490
update
NVIDIA OneWay Noncommercial License (NSCL v1).docx
ADDED
Binary file (21.1 kB)
audio flamingo model card.md
ADDED
@@ -0,0 +1,115 @@
# Model Overview

## Description:
Audio Flamingo is a novel audio-understanding language model capable of

- understanding audio,
- quickly adapting to unseen tasks via in-context learning and retrieval, and
- understanding and responding to multi-turn dialogues.

We introduce a series of training techniques, architecture designs, and data strategies to enhance our model with these abilities. Extensive evaluations across various audio understanding tasks confirm the efficacy of our method, setting new state-of-the-art benchmarks.

<center><img src="https://github.com/NVIDIA/audio-flamingo/raw/main/assets/audio_flamingo_arch.png" width="800"></center>

**This model is for non-commercial, research-only use.**
<br>

## Reference(s):
* [Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities](https://arxiv.org/abs/2402.01831) <br>
* [Project Page](https://github.com/NVIDIA/audio-flamingo) <br>
* [Demo Website](https://audioflamingo.github.io/) <br>

## Model Architecture:
**Architecture Type:** Transformer <br>
**Network Architecture:** Audio Flamingo

Audio Flamingo is a Flamingo-style architecture with a frozen audio feature extractor, trainable transformation layers and gated cross-attention dense (xattn-dense) layers, and language model layers.
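In the Flamingo recipe that the xattn-dense layers follow, the language model's hidden states attend to the extracted audio features through cross-attention, and the block's contribution is scaled by tanh gates initialized at zero so the language model path is unchanged at the start of training. The snippet below is a minimal sketch of this layer type for illustration only; the dimensions, names, and feed-forward design are assumptions, and the actual implementation is in the [project repository](https://github.com/NVIDIA/audio-flamingo).

```python
# Minimal sketch of a Flamingo-style gated cross-attention (xattn-dense) block.
# Sizes and names are illustrative assumptions, not the released implementation.
import torch
import torch.nn as nn

class GatedXAttnDense(nn.Module):
    def __init__(self, d_model: int = 2048, n_heads: int = 8):
        super().__init__()
        # Cross-attention: text hidden states (queries) attend to audio features (keys/values).
        self.xattn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffw = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        # tanh gates start at zero, so the language model path is untouched at initialization.
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ffw_gate = nn.Parameter(torch.zeros(1))

    def forward(self, lm_hidden: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # lm_hidden: (batch, text_len, d_model); audio_feats: (batch, n_audio_tokens, d_model)
        attn_out, _ = self.xattn(lm_hidden, audio_feats, audio_feats, need_weights=False)
        x = lm_hidden + torch.tanh(self.attn_gate) * attn_out
        x = x + torch.tanh(self.ffw_gate) * self.ffw(x)
        return x
```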

## Input:
**Input Types:** Audio, Text <br>
**Input Format:** Wav/MP3/Flac, String <br>
**Input Parameters:** None <br>
**Maximum Audio Input Length:** 33.25 seconds <br>
**Maximum Text Input Length:** 512 tokens <br>

## Output:
**Output Type:** Text <br>
**Output Format:** String <br>
**Output Parameters:** None <br>
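As a concrete reading of the limits above, the sketch below loads an audio file, downmixes and resamples it, truncates it to the 33.25-second maximum, and caps a tokenized prompt at 512 tokens. It is illustrative only: the 16 kHz target sampling rate and the use of torchaudio as the loader are assumptions, and the model's own audio feature extractor defines the preprocessing actually required.

```python
# Illustrative input preparation consistent with the limits above.
# TARGET_SR is an assumption; the model's audio feature extractor defines the real rate.
import torch
import torchaudio

MAX_AUDIO_SECONDS = 33.25
MAX_TEXT_TOKENS = 512
TARGET_SR = 16_000  # assumption for illustration only

def load_audio(path: str) -> torch.Tensor:
    waveform, sr = torchaudio.load(path)           # wav / mp3 / flac (backend-dependent)
    waveform = waveform.mean(dim=0, keepdim=True)  # downmix to mono
    if sr != TARGET_SR:
        waveform = torchaudio.functional.resample(waveform, sr, TARGET_SR)
    max_samples = int(MAX_AUDIO_SECONDS * TARGET_SR)
    return waveform[:, :max_samples]               # enforce the 33.25 s limit

def truncate_prompt(token_ids: list[int]) -> list[int]:
    return token_ids[:MAX_TEXT_TOKENS]             # enforce the 512-token limit
```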

## Software Integration:
**Runtime Engine(s):** PyTorch

**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere <br>
* NVIDIA Hopper <br>

## Preferred/Supported Operating System(s):
* Linux

## Model Version(s):
* v1.0

## Training, Testing, and Evaluation Datasets:

### Training Dataset:
Audio Flamingo is trained with **publicly available** datasets under various licenses, with the most restrictive ones being non-commercial/research-only. The datasets contain diverse audio types including speech, environmental sounds, and music.

* [OpenAQA](https://github.com/YuanGongND/ltu?tab=readme-ov-file): Data collection method - [Human]; Labeling method - [Synthetic]
* [Laion630K](https://github.com/LAION-AI/audio-dataset/blob/main/laion-audio-630k/README.md)
* [LP-MusicCaps](https://github.com/seungheondoh/lp-music-caps)
* [SoundDescs](https://github.com/akoepke/audio-retrieval-benchmark)
* [WavCaps](https://github.com/XinhaoMei/WavCaps)
* [AudioSet](https://research.google.com/audioset/download.html)
* [AudioSet Strong Labeled](https://research.google.com/audioset/download_strong.html)
* [WavText5K](https://github.com/microsoft/WavText5K)
* [MSP-Podcast](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Podcast.html)
* [ClothoAQA](https://zenodo.org/records/6473207)
* [Clotho-v2](https://github.com/audio-captioning/clotho-dataset/tree/master)
* [MACS](https://zenodo.org/records/5114771)
* [FSD50k](https://zenodo.org/records/4060432)
* [CochlScene](https://github.com/cochlearai/cochlscene)
* [NonSpeech 7k](https://zenodo.org/records/6967442)
* [Chime-home](https://code.soundsoftware.ac.uk/projects/chime-home-dataset-annotation-and-baseline-evaluation-code)
* [Sonyc-UST](https://zenodo.org/records/3966543)
* [Emov-DB](https://github.com/numediart/EmoV-DB)
* [JL-Corpus](https://github.com/tli725/JL-Corpus)
* [Tess](https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess)
* [OMGEmotion](https://github.com/knowledgetechnologyuhh/OMGEmotionChallenge)
* [MELD](https://github.com/declare-lab/MELD)
* [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/)
* [MusicQA](https://github.com/shansongliu/MU-LLaMA?tab=readme-ov-file)
* [MusicCaps](https://www.kaggle.com/datasets/googleai/musiccaps)
* [NSynth](https://magenta.tensorflow.org/datasets/nsynth)
* [MTG-Jamendo](https://github.com/MTG/mtg-jamendo-dataset)
* [MusDB-HQ](https://zenodo.org/records/3338373)
* [FMA](https://github.com/mdeff/fma)

For all of these datasets, the data collection method is [Human]. For OpenAQA, Laion630K, LP-MusicCaps, WavCaps, and MusicQA, the data labeling method is [Synthetic]; for the rest, the data labeling method is [Human].

### Evaluation Dataset:
Audio Flamingo is evaluated on the test splits of the following datasets.

* [ClothoAQA](https://zenodo.org/records/6473207)
* [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/)
* [Clotho-v2](https://github.com/audio-captioning/clotho-dataset/tree/master)
* [FSD50k](https://zenodo.org/records/4060432)
* [CochlScene](https://github.com/cochlearai/cochlscene)
* [NonSpeech 7k](https://zenodo.org/records/6967442)
* [NSynth](https://magenta.tensorflow.org/datasets/nsynth)
* [AudioCaps](https://github.com/cdjkim/audiocaps)
* [CREMA-D](https://github.com/CheyneyComputerScience/CREMA-D)
* [Ravdess](https://zenodo.org/records/1188976)
* [US8K](https://urbansounddataset.weebly.com/urbansound8k.html)
* [GTZAN](https://www.tensorflow.org/datasets/catalog/gtzan)
* [Medley-solos-DB](https://zenodo.org/records/3464194)

For all of these datasets, the data collection method is [Human] and the data labeling method is [Human].

## Inference

**Engine:** HuggingFace Transformers <br>
**Test Hardware:** NVIDIA A100 80GB <br>
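As a quick sanity check of the environment described above (a PyTorch runtime on an Ampere or Hopper GPU such as the A100 80GB used for testing), the snippet below verifies the GPU's compute capability; Ampere parts report 8.x and Hopper parts report 9.x. It is a minimal sketch, not part of the released inference code.

```python
# Minimal environment check for the requirements above (PyTorch + Ampere/Hopper GPU).
import torch

assert torch.cuda.is_available(), "A CUDA-capable NVIDIA GPU is required."
major, minor = torch.cuda.get_device_capability()
# Ampere GPUs report compute capability 8.x; Hopper GPUs report 9.x.
assert major >= 8, f"Expected an Ampere or Hopper GPU, got compute capability {major}.{minor}"
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 2**30:.0f} GiB")  # e.g. A100 80GB
```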