asahi417 committed
Commit 855b061
Parent: 8f9364d
experiment_cache/.DS_Store ADDED
Binary file (8.2 kB).
 
experiment_cache/figure/2d.latent_space.clap_general_se.expresso.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 6178345a9fb18e762034b083137f9810345ec7bbba7105e921f7cb1a425ee918 · 513 kB · pointer 131 B
  • after: SHA256 d6a475816f4a6939f3abf4ace81aa14c8602f0d7b3f124af3f0e93e3597789c3 · 503 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.clap_general_se.expresso.style.png CHANGED (Git LFS)
  • before: SHA256 1a0288551300c9552f5921f7471aeb60b1d22554f3e5b9ba89f91e49f470ca78 · 881 kB · pointer 131 B
  • after: SHA256 535da98b978df54ecd4e545c365c93aa1fa742db568e2d7a364b73ddd0689af0 · 851 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.clap_general_se.voxceleb1-test-split.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 5abafdce704e9e134f163a9e995f982b184a488e84513985e948355fd8d2918c · 1.23 MB · pointer 132 B
  • after: SHA256 a24ac08d1524f2b2883f603da00cf88158e04ee276eaa4b8ef407cd56c49e748 · 1.17 MB · pointer 132 B
experiment_cache/figure/2d.latent_space.clap_se.expresso.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 453fdc8b8c73af790e7f571860797fcf9fa15968565fe7e3248c927733a51471 · 505 kB · pointer 131 B
  • after: SHA256 74e8c753a68ea86d3e7ee567f8d0b829d059d96bd8a7a4cf0197f856494e1319 · 495 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.clap_se.expresso.style.png CHANGED (Git LFS)
  • before: SHA256 e67bee150ff59f3f58b696c01db703b37d1978d9f35de335dd865fadea99253f · 883 kB · pointer 131 B
  • after: SHA256 48c26f122f170e9793f0c95c7b7eae9d43a032f4a01ad29c1712124bcb14f6d2 · 853 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.clap_se.voxceleb1-test-split.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 350e5573d1ac7e8544aa7f48d8affaf0241a16b968ece127f4f5435c198dab8e · 1.25 MB · pointer 132 B
  • after: SHA256 a3db592359a02ef9c215ca93859359456d390e95d5910f2d5b0afa22e7113616 · 1.2 MB · pointer 132 B
experiment_cache/figure/2d.latent_space.meta_voice_se.expresso.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 00818fb0c1b7707256257b45e13955ca408a3269557d5f49b1cdbdbb1785452b · 524 kB · pointer 131 B
  • after: SHA256 e1fcb5a489ec41af8a9e3ab182107ae8249e77ce46ae2cd3b65bec7e1e4fd595 · 514 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.meta_voice_se.expresso.style.png CHANGED (Git LFS)
  • before: SHA256 49ebe845d9ccd2e7b4ee6d5647561fbe95f93719500ef62f98b21e2c5317f6bd · 870 kB · pointer 131 B
  • after: SHA256 3d2cbc0cdacb743401a11590af18fa36d9586468e25bd5301486cb5a095efc4d · 838 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.meta_voice_se.voxceleb1-test-split.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 8bba5d4daafdc226fd0f192c69b4dcaa42ecfee2c0825d22cf626b1b145f0a66 · 785 kB · pointer 131 B
  • after: SHA256 9d819a5e084561b041ff0c001f410a9a22cbfec5b70a5c0ce9192ddf0989e726 · 733 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.pyannote_se.expresso.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 9bd26e1ff6f83d44c94c0a5d42b058109d7c5f5c45b4fcad51f4e2bfc9728f0c · 484 kB · pointer 131 B
  • after: SHA256 7243c9e3c5422b86966194210ef2843c98088376cee677d3e8d5b8b377653452 · 473 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.pyannote_se.expresso.style.png CHANGED (Git LFS)
  • before: SHA256 beba6618ca2af9bd32dbf0f7e9cb0cb749aa9ebe8c78daedc000661fb92de05f · 878 kB · pointer 131 B
  • after: SHA256 725ddeabb0194cbe023839b4186c4cd3552af70f96b6441101e997ed3226b8db · 847 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.pyannote_se.voxceleb1-test-split.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 3f35a0521bdfa3d00e3b54b9ecbf6cf14446c36d4364b77e371a4a127f5bb112 · 615 kB · pointer 131 B
  • after: SHA256 4d86634a678e7d69eedc81b37b94f1ac839547dc5f8a117cf077d119928353b6 · 566 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.w2v_bert_se.expresso.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 96193369d1172b8724adb122adaa0ef249728efbe29ecec081a7a52d940b8854 · 896 kB · pointer 131 B
  • after: SHA256 bdeecba8d662eec6e355636c8866e88c00a26d10839fe4ef4d8a0c08b0836344 · 886 kB · pointer 131 B
experiment_cache/figure/2d.latent_space.w2v_bert_se.expresso.style.png CHANGED (Git LFS)
  • before: SHA256 a7621ebb862988a23fde45cab2111dda51faa7471b38021d639212565d7561fd · 1.51 MB · pointer 132 B
  • after: SHA256 5f83ea0b753f6acc579b7f70d9d49cc1b1d3dae62ee5f0a888a6114c83cac65c · 1.48 MB · pointer 132 B
experiment_cache/figure/2d.latent_space.w2v_bert_se.voxceleb1-test-split.speaker_id.png CHANGED (Git LFS)
  • before: SHA256 98d48979dff12bdf25581da4219c50d56d5dbae247a44c5d27845403aa435c06 · 1.59 MB · pointer 132 B
  • after: SHA256 9c33c1d7432a36e2796728e4eb573eb7db121cbbb76845cf3f1c6aabefc32b74 · 1.53 MB · pointer 132 B
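
The pointer sizes above refer to the small text stubs that git itself stores for these LFS-tracked images; the PNG bytes live on the LFS server. A pointer file has exactly three lines, e.g. (oid copied from the first figure above; the byte count on the size line is illustrative):

    version https://git-lfs.github.com/spec/v1
    oid sha256:d6a475816f4a6939f3abf4ace81aa14c8602f0d7b3f124af3f0e93e3597789c3
    size 503000

A six-digit size field makes the stub 131 bytes, which is why the sub-megabyte figures report 131 B pointers and the 1 MB+ figures report 132 B.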
experiment_speaker_verification.py CHANGED
@@ -18,6 +18,7 @@ from model_meta_voice import MetaVoiceSE
 from model_pyannote_embedding import PyannoteSE
 from model_w2v_bert import W2VBertSE
 from model_clap import ClapSE, ClapGeneralSE
+from model_xls import XLSRSE


 def get_embedding(model_class, model_name: str, dataset_name: str, data_split: str):
@@ -97,9 +98,9 @@ def cluster_embedding(model_name, dataset_name, label_name: str):
     plt.gca().set_aspect('equal', 'datalim')
     plt.legend(handles=scatter.legend_elements(num=len(label_type))[0],
                labels=label_type,
-               bbox_to_anchor=(1.05, 1),
+               bbox_to_anchor=(1.04, 1),
                borderaxespad=0,
-               loc='lower left',
+               loc='upper left',
                ncol=3 if len(label2id) > 12 else 1)
     plt.savefig(figure_path, bbox_inches='tight', dpi=600)

@@ -115,35 +116,40 @@ def analyze_embedding(model_name: str, dataset_name: str, n_shot: int = 5, n_cro


 if __name__ == '__main__':
-    get_embedding(MetaVoiceSE, "meta_voice_se", "asahi417/voxceleb1-test-split", "test")
-    get_embedding(PyannoteSE, "pyannote_se", "asahi417/voxceleb1-test-split", "test")
-    get_embedding(W2VBertSE, "w2v_bert_se", "asahi417/voxceleb1-test-split", "test")
-    get_embedding(ClapSE, "clap_se", "asahi417/voxceleb1-test-split", "test")
-    get_embedding(ClapGeneralSE, "clap_general_se", "asahi417/voxceleb1-test-split", "test")
-
-    get_embedding(MetaVoiceSE, "meta_voice_se", "ylacombe/expresso", "train")
-    get_embedding(PyannoteSE, "pyannote_se", "ylacombe/expresso", "train")
-    get_embedding(W2VBertSE, "w2v_bert_se", "ylacombe/expresso", "train")
-    get_embedding(ClapSE, "clap_se", "ylacombe/expresso", "train")
-    get_embedding(ClapGeneralSE, "clap_general_se", "ylacombe/expresso", "train")
-
-    cluster_embedding("meta_voice_se", "asahi417/voxceleb1-test-split", "speaker_id")
-    cluster_embedding("pyannote_se", "asahi417/voxceleb1-test-split", "speaker_id")
-    cluster_embedding("w2v_bert_se", "asahi417/voxceleb1-test-split", "speaker_id")
-    cluster_embedding("clap_se", "asahi417/voxceleb1-test-split", "speaker_id")
-    cluster_embedding("clap_general_se", "asahi417/voxceleb1-test-split", "speaker_id")
-
-    cluster_embedding("meta_voice_se", "ylacombe/expresso", "speaker_id")
-    cluster_embedding("pyannote_se", "ylacombe/expresso", "speaker_id")
-    cluster_embedding("w2v_bert_se", "ylacombe/expresso", "speaker_id")
-    cluster_embedding("clap_se", "ylacombe/expresso", "speaker_id")
-    cluster_embedding("clap_general_se", "ylacombe/expresso", "speaker_id")
-
-    cluster_embedding("meta_voice_se", "ylacombe/expresso", "style")
-    cluster_embedding("pyannote_se", "ylacombe/expresso", "style")
-    cluster_embedding("w2v_bert_se", "ylacombe/expresso", "style")
-    cluster_embedding("clap_se", "ylacombe/expresso", "style")
-    cluster_embedding("clap_general_se", "ylacombe/expresso", "style")
+    # get_embedding(MetaVoiceSE, "meta_voice_se", "asahi417/voxceleb1-test-split", "test")
+    # get_embedding(PyannoteSE, "pyannote_se", "asahi417/voxceleb1-test-split", "test")
+    # get_embedding(W2VBertSE, "w2v_bert_se", "asahi417/voxceleb1-test-split", "test")
+    # get_embedding(ClapSE, "clap_se", "asahi417/voxceleb1-test-split", "test")
+    # get_embedding(ClapGeneralSE, "clap_general_se", "asahi417/voxceleb1-test-split", "test")
+    get_embedding(XLSRSE, "xlsr_se", "asahi417/voxceleb1-test-split", "test")
+
+    # get_embedding(MetaVoiceSE, "meta_voice_se", "ylacombe/expresso", "train")
+    # get_embedding(PyannoteSE, "pyannote_se", "ylacombe/expresso", "train")
+    # get_embedding(W2VBertSE, "w2v_bert_se", "ylacombe/expresso", "train")
+    # get_embedding(ClapSE, "clap_se", "ylacombe/expresso", "train")
+    # get_embedding(ClapGeneralSE, "clap_general_se", "ylacombe/expresso", "train")
+    get_embedding(XLSRSE, "xlsr_se", "ylacombe/expresso", "train")
+
+    # cluster_embedding("meta_voice_se", "asahi417/voxceleb1-test-split", "speaker_id")
+    # cluster_embedding("pyannote_se", "asahi417/voxceleb1-test-split", "speaker_id")
+    # cluster_embedding("w2v_bert_se", "asahi417/voxceleb1-test-split", "speaker_id")
+    # cluster_embedding("clap_se", "asahi417/voxceleb1-test-split", "speaker_id")
+    # cluster_embedding("clap_general_se", "asahi417/voxceleb1-test-split", "speaker_id")
+    cluster_embedding("xlsr_se", "asahi417/voxceleb1-test-split", "speaker_id")
+    #
+    # cluster_embedding("meta_voice_se", "ylacombe/expresso", "speaker_id")
+    # cluster_embedding("pyannote_se", "ylacombe/expresso", "speaker_id")
+    # cluster_embedding("w2v_bert_se", "ylacombe/expresso", "speaker_id")
+    # cluster_embedding("clap_se", "ylacombe/expresso", "speaker_id")
+    # cluster_embedding("clap_general_se", "ylacombe/expresso", "speaker_id")
+    cluster_embedding("xlsr_se", "ylacombe/expresso", "speaker_id")
+    #
+    # cluster_embedding("meta_voice_se", "ylacombe/expresso", "style")
+    # cluster_embedding("pyannote_se", "ylacombe/expresso", "style")
+    # cluster_embedding("w2v_bert_se", "ylacombe/expresso", "style")
+    # cluster_embedding("clap_se", "ylacombe/expresso", "style")
+    # cluster_embedding("clap_general_se", "ylacombe/expresso", "style")
+    cluster_embedding("xlsr_se", "ylacombe/expresso", "style")


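For context on the legend change in cluster_embedding above: loc names the corner of the legend box that gets pinned to bbox_to_anchor, which is interpreted in axes coordinates. So (1.04, 1) with loc='upper left' pins the legend's upper-left corner just right of the axes' top-right corner and lets it extend downward alongside the plot, where the old loc='lower left' made it grow upward from the same anchor. A minimal standalone sketch (made-up data, not from this repo):

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.scatter([0, 1, 2], [0, 1, 0], label="speaker_a")
    ax.scatter([0, 1, 2], [1, 0, 1], label="speaker_b")
    # pin the legend's upper-left corner just outside the axes' top-right corner
    ax.legend(bbox_to_anchor=(1.04, 1), loc='upper left', borderaxespad=0)
    # bbox_inches='tight' grows the saved canvas so the outside legend is not clipped
    fig.savefig("legend_demo.png", bbox_inches='tight', dpi=150)
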
model_xls.py ADDED
@@ -0,0 +1,29 @@
+"""Meta's XLSR-53 (wav2vec2) based speaker embedding.
+- feature dimension: 768
+- source: https://huggingface.co/facebook/wav2vec2-large-xlsr-53
+"""
+from typing import Optional
+
+import torch
+import librosa
+import numpy as np
+from transformers import AutoFeatureExtractor, AutoModelForPreTraining
+
+
+class XLSRSE:
+    def __init__(self):
+        self.processor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
+        self.model = AutoModelForPreTraining.from_pretrained("facebook/wav2vec2-large-xlsr-53")
+        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+        self.model.to(self.device)
+        self.model.eval()
+
+    def get_speaker_embedding(self, wav: np.ndarray, sampling_rate: Optional[int] = None) -> np.ndarray:
+        # resample on the fly when the input rate differs from the model's expected rate
+        if sampling_rate is not None and sampling_rate != self.processor.sampling_rate:
+            wav = librosa.resample(wav, orig_sr=sampling_rate, target_sr=self.processor.sampling_rate)
+        inputs = self.processor(wav, sampling_rate=self.processor.sampling_rate, return_tensors="pt")
+        with torch.no_grad():
+            outputs = self.model(**{k: v.to(self.device) for k, v in inputs.items()})
+        return outputs.projected_states.mean(1).cpu().numpy()[0]  # mean-pool projected states over time
+        # alternative: return outputs.projected_quantized_states.mean(1).cpu().numpy()[0]
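
A quick usage sketch for the new class (the file name is illustrative; the printed shape assumes the 768-dimensional projected states noted in the docstring):

    import librosa
    from model_xls import XLSRSE

    wav, sr = librosa.load("sample.wav", sr=None)  # keep the file's native sampling rate
    model = XLSRSE()  # downloads facebook/wav2vec2-large-xlsr-53 on first use
    v = model.get_speaker_embedding(wav, sr)  # resamples to the model's expected rate internally
    print(v.shape)  # expected: (768,)
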
test.py CHANGED
@@ -3,26 +3,31 @@ from model_clap import ClapSE
 from model_meta_voice import MetaVoiceSE
 from model_pyannote_embedding import PyannoteSE
 from model_w2v_bert import W2VBertSE
+from model_xls import XLSRSE


 def test():
     wav, sr = librosa.load("sample.wav")
+    print("XLS-R")
+    model = XLSRSE()
+    v = model.get_speaker_embedding(wav, sr)
+    print(v.shape)
     print("CLAP")
     model = ClapSE()
     v = model.get_speaker_embedding(wav, sr)
     print(v.shape)
-    # print("MetaVoiceSE")
-    # model = MetaVoiceSE()
-    # v = model.get_speaker_embedding(wav, sr)
-    # print(v.shape)
-    # print("PyannoteSE")
-    # model = PyannoteSE()
-    # v = model.get_speaker_embedding(wav, sr)
-    # print(v.shape)
-    # print("W2VBertSE")
-    # model = W2VBertSE()
-    # v = model.get_speaker_embedding(wav, sr)
-    # print(v.shape)
+    print("MetaVoiceSE")
+    model = MetaVoiceSE()
+    v = model.get_speaker_embedding(wav, sr)
+    print(v.shape)
+    print("PyannoteSE")
+    model = PyannoteSE()
+    v = model.get_speaker_embedding(wav, sr)
+    print(v.shape)
+    print("W2VBertSE")
+    model = W2VBertSE()
+    v = model.get_speaker_embedding(wav, sr)
+    print(v.shape)


 if __name__ == '__main__':
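
To run the updated smoke test locally (assuming a sample.wav sits in the working directory, as test.py expects):

    python test.py

Each block prints the shape of the embedding the corresponding model returns for the same clip.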