Export ONNX version of model 'pyannote/embedding', on 2024-08-31 00:29:33 CST
- README.md +33 -0
- model.onnx +3 -0
README.md
ADDED
@@ -0,0 +1,33 @@
+---
+base_model: pyannote/embedding
+datasets:
+- voxceleb
+extra_gated_fields:
+  Company/university: text
+  I plan to use this model for (task, type of audio data, etc): text
+  Website: text
+extra_gated_prompt: The collected information will help acquire a better knowledge
+  of pyannote.audio userbase and help its maintainers apply for grants to improve
+  it further. If you are an academic researcher, please cite the relevant papers in
+  your own publications using the model. If you work for a company, please consider
+  contributing back to pyannote.audio development (e.g. through unrestricted gifts).
+  We also provide scientific consulting services around speaker diarization and machine
+  listening.
+inference: false
+license: mit
+tags:
+- pyannote
+- pyannote-audio
+- pyannote-audio-model
+- audio
+- voice
+- speech
+- speaker
+- speaker-recognition
+- speaker-verification
+- speaker-identification
+- speaker-embedding
+- onnx
+---
+This is the ONNX exported version of [pyannote/embedding](https://huggingface.co/pyannote/embedding).
+
model.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:278528694f907ed19a59a76220dde2060c0f61e0f3956ae8d52f4fe617a39ca9
+size 17631302
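The exported `model.onnx` can be run directly with `onnxruntime`. The sketch below is illustrative, not part of this commit: it assumes the file has been downloaded locally as `model.onnx`, that `onnxruntime` and `numpy` are installed, and that the model takes a mono waveform batch shaped `(batch, channel, samples)` as pyannote.audio models conventionally do; the actual input name and shape should be confirmed via `session.get_inputs()`.

```python
from pathlib import Path

import numpy as np

MODEL_PATH = Path("model.onnx")  # assumed local download of this repo's file


def embed(waveform: np.ndarray) -> np.ndarray:
    """Run the exported speaker-embedding model on a float32 waveform batch."""
    import onnxruntime as ort  # imported lazily so the guard below still runs

    session = ort.InferenceSession(str(MODEL_PATH))
    # Query the input name instead of hard-coding it; the export may differ.
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: waveform})[0]


if MODEL_PATH.exists():
    # e.g. 3 seconds of 16 kHz audio, shaped (batch, channel, samples)
    dummy = np.random.randn(1, 1, 48000).astype(np.float32)
    print(embed(dummy).shape)
else:
    print("model.onnx not found - download it from the repo first")
```

Compared to loading the original PyTorch checkpoint through pyannote.audio, this path needs only the ONNX runtime, which is the usual motivation for such an export.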