---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: validation_tts
    path: data/validation_tts-*
  - split: test
    path: data/test-*
  - split: test_tts
    path: data/test_tts-*
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 7058957907
    num_examples: 281241
  - name: validation
    num_bytes: 79544090
    num_examples: 5406
  - name: validation_tts
    num_bytes: 39772045
    num_examples: 2703
  - name: test
    num_bytes: 39828951
    num_examples: 2620
  - name: test_tts
    num_bytes: 39828951
    num_examples: 2620
  download_size: 620258987
  dataset_size: 7257931944
---
# Dataset Card for "librispeech960-encodec1024_asr_tokenized_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
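
## Usage

A minimal loading sketch with the 🤗 `datasets` library, following the splits and features declared in the YAML config above. The repository path below is a placeholder inferred from this card's title; replace it with the actual `namespace/name` of the dataset on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id inferred from the card title; adjust to the real Hub path.
dataset = load_dataset("librispeech960-encodec1024_asr_tokenized_final")

# Splits declared in the config: train, validation, validation_tts, test, test_tts.
train = dataset["train"]            # 281,241 examples
validation = dataset["validation"]  # 5,406 examples

# Each example carries pre-tokenized sequences: input_ids (int32),
# attention_mask (int8), and labels (int64).
example = train[0]
print(example["input_ids"][:10])
print(example["attention_mask"][:10])
print(example["labels"][:10])
```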