---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- transcribed_calls
model-index:
- name: wav2vec2-base-wonders-phonemes
results: []
---
# wav2vec2-base-wonders-phonemes
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the transcribed_calls dataset.
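A minimal inference sketch is below. The repo id, the 16 kHz input rate, and the phoneme-level output are assumptions inferred from the model name and the base checkpoint, not confirmed by this card.

```python
# Minimal inference sketch. The repo id below is hypothetical (inferred from
# the model name); wav2vec2-base expects 16 kHz mono audio.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "wav2vec2-base-wonders-phonemes"  # hypothetical repo id
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # expected: phoneme strings (assumed)
```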
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 48
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
- mixed_precision_training: Native AMP
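The list above maps onto `transformers.TrainingArguments`; a sketch of the equivalent configuration follows. The output directory is a placeholder, and the Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch of the configuration above. `output_dir` is a placeholder. The
# auto-generated card reports per-device batch sizes; multi-GPU comes from the
# launcher (e.g. torchrun), and fp16=True enables Native AMP.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-wonders-phonemes",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    num_train_epochs=1,
    fp16=True,  # Native AMP
)
```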
### Training results
No evaluation results were logged for this run; the model index above accordingly lists no results.
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
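To reproduce this environment, the pinned versions can be verified at runtime; a small check sketch:

```python
# Verify that the local environment matches the versions pinned above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.44.0",
    "torch": "2.4.0+cu124",
    "datasets": "2.21.0",
    "tokenizers": "0.19.1",
}
for module in (transformers, torch, datasets, tokenizers):
    name = module.__name__
    print(f"{name}: {module.__version__} (expected {expected[name]})")
```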