aapot committed on
Commit
851cc66
1 Parent(s): 9746c0b

Upload run-finnish-asr-models.ipynb

Files changed (1)
  1. run-finnish-asr-models.ipynb +1 -0
run-finnish-asr-models.ipynb ADDED
 
 
+ {"cells":[{"cell_type":"markdown","metadata":{},"source":["# Run Finnish ASR models\n","Below you can see example code using Huggingface's `transformers` and `datasets` libraries to run our Finnish ASR models released at Huggingface model hub.\n","\n","On Common Voice 7.0 Finnish test dataset, our best model is [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) which is quite large model having 1B parameters. We also have smaller 300M parameter version which is not as good on the Common Voice test but still quite usable: [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)\n","\n","Because those models are rather large, running the tests using GPU is highly recommended so you should enable the free GPU accelerator in Kaggle or Colab if you are running this notebook on those services. It's also possible to run the model testing with CPU but it will be a lot slower with large test datasets."]},{"cell_type":"markdown","metadata":{},"source":["# 1. Install libraries"]},{"cell_type":"code","execution_count":null,"metadata":{"_cell_guid":"b1076dfc-b9ad-4769-8c92-a6c4dae69d19","_uuid":"8f2839f25d086af736a60e9eeb907d3b93b6e0e5","execution":{"iopub.execute_input":"2022-02-12T15:15:54.843567Z","iopub.status.busy":"2022-02-12T15:15:54.842929Z","iopub.status.idle":"2022-02-12T15:18:01.307337Z","shell.execute_reply":"2022-02-12T15:18:01.306491Z","shell.execute_reply.started":"2022-02-12T15:15:54.843469Z"},"trusted":true},"outputs":[],"source":["!pip install -U transformers[torch-speech]==4.16.2 datasets[audio]==1.18.3 huggingface_hub==0.4.0 librosa==0.9.0 torchaudio==0.10.2 jiwer==2.3.0 requests==2.27.1 https://github.com/kpu/kenlm/archive/master.zip"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:18:01.309757Z","iopub.status.busy":"2022-02-12T15:18:01.309361Z","iopub.status.idle":"2022-02-12T15:18:09.694185Z","shell.execute_reply":"2022-02-12T15:18:09.693462Z","shell.execute_reply.started":"2022-02-12T15:18:01.309722Z"},"trusted":true},"outputs":[],"source":["import os\n","import re\n","import requests\n","import torch\n","from transformers import AutoModelForCTC, AutoProcessor, AutoConfig, pipeline\n","from datasets import load_dataset, Audio, load_metric\n","from huggingface_hub import notebook_login"]},{"cell_type":"markdown","metadata":{},"source":["# 2. Create test dataset\n","We'll use Huggingface's `datasets` library to create test dataset which offers easy methods for resampling audio data etc.\n","Basically, you have two options to create the test dataset:\n","1. Use ready dataset available at Huggingface's dataset hub (like Mozilla's Common Voice 7.0)\n","2. 
{"cell_type":"markdown","metadata":{},"source":["# 2. Create test dataset\n","We'll use Huggingface's `datasets` library, which offers easy methods for e.g. resampling audio data, to create the test dataset.\n","Basically, you have two options for creating the test dataset:\n","1. Use a ready dataset available at Huggingface's dataset hub (like Mozilla's Common Voice 7.0)\n","2. Load your own custom dataset from local audio files\n","\n","Below you can see examples of both methods for creating the test dataset."]},
{"cell_type":"markdown","metadata":{},"source":["## Option 1: Use a ready dataset from the Huggingface dataset hub\n","Let's load Mozilla's Common Voice 7.0 from the hub: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0\n","\n","Note: loading Common Voice 7.0 requires that you have a Huggingface user account (it's free) and that you have clicked \"Access repository\" on the dataset hub page: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0\n","\n","After clicking \"Access repository\" you also need to do the Huggingface hub notebook login and paste your Huggingface access token, available in your Huggingface account settings: https://huggingface.co/settings/token\n","\n","This is not necessary for most datasets available at the Huggingface hub, but for Common Voice 7.0 it is."]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:27:49.158403Z","iopub.status.busy":"2022-02-12T15:27:49.158139Z","iopub.status.idle":"2022-02-12T15:27:49.24121Z","shell.execute_reply":"2022-02-12T15:27:49.240526Z","shell.execute_reply.started":"2022-02-12T15:27:49.158373Z"},"trusted":true},"outputs":[],"source":["# do the Huggingface hub notebook login to be able to access the Common Voice 7.0 dataset\n","notebook_login()"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:28:10.252092Z","iopub.status.busy":"2022-02-12T15:28:10.251254Z","iopub.status.idle":"2022-02-12T15:28:37.518904Z","shell.execute_reply":"2022-02-12T15:28:37.51814Z","shell.execute_reply.started":"2022-02-12T15:28:10.252049Z"},"trusted":true},"outputs":[],"source":["# load the Common Voice 7.0 dataset from Huggingface with the Finnish \"test\" split\n","test_dataset = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"fi\", split=\"test\", use_auth_token=True)"]},
{"cell_type":"markdown","metadata":{},"source":["## Option 2: Load custom dataset from local audio files\n","We can also load our own custom dataset from local audio files with the `datasets` library. Basically, you need for example an Excel/CSV/text file with two columns: one for the transcription texts and one for the audio filepaths. You can read more about loading local data in the `datasets` documentation: https://huggingface.co/docs/datasets/loading.html#local-and-remote-files"]},
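{"cell_type":"markdown","metadata":{},"source":["For illustration, below is a minimal sketch of such a file as a standard comma-separated CSV and how it would be loaded; the file name, audio paths and sentences are made up. The parliament dataset used in the next cells needs a custom delimiter instead, because its transcript file is not a standard CSV."]},
{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Illustrative sketch only: a hypothetical two-column CSV with a header row\n","# Adapt the file name, audio paths and sentences to your own data\n","example_csv = \"sentence,audio\\nensimmäinen esimerkkilause,/path/to/audio_1.wav\\ntoinen esimerkkilause,/path/to/audio_2.wav\\n\"\n","with open(\"example_transcripts.csv\", \"w\", encoding=\"utf-8\") as f:\n"," f.write(example_csv)\n","\n","example_dataset = load_dataset(\"csv\", data_files=[\"example_transcripts.csv\"], split=\"train\")\n","example_dataset[0]"]},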
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:18:09.695994Z","iopub.status.busy":"2022-02-12T15:18:09.695383Z","iopub.status.idle":"2022-02-12T15:22:03.838648Z","shell.execute_reply":"2022-02-12T15:22:03.837895Z","shell.execute_reply.started":"2022-02-12T15:18:09.695954Z"},"trusted":true},"outputs":[],"source":["# Let's download a small Finnish parliament session 2 dataset (147 audio samples) to demonstrate ASR dataset creation with custom audio files\n","# It's available here: https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4\n","\n","parliament_dataset_download_path = \"./parliament_session_2\"\n","\n","os.makedirs(parliament_dataset_download_path, exist_ok=True)\n","\n","parliament_file_ids = [\"%.2d\" % i for i in range(1, 148)]\n","\n","# download the audio files\n","for file_id in parliament_file_ids:\n"," url = f\"https://b2share.eudat.eu/api/files/027d2358-f28d-4f73-8a51-c174989388f9/session_2_SEG_{file_id}.wav\"\n"," response = requests.get(url)\n"," file_name = url.split('/')[-1]\n"," with open(os.path.join(parliament_dataset_download_path, file_name), \"wb\") as f:\n","  f.write(response.content)\n","\n","# download the transcript file\n","url = \"https://b2share.eudat.eu/api/files/027d2358-f28d-4f73-8a51-c174989388f9/session_2.trn.trn\"\n","response = requests.get(url)\n","with open(os.path.join(parliament_dataset_download_path, \"transcript.csv\"), \"wb\") as f:\n"," f.write(response.content)"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:03.840742Z","iopub.status.busy":"2022-02-12T15:22:03.840504Z","iopub.status.idle":"2022-02-12T15:22:04.326044Z","shell.execute_reply":"2022-02-12T15:22:04.325335Z","shell.execute_reply.started":"2022-02-12T15:22:03.840709Z"},"trusted":true},"outputs":[],"source":["# Let's load the local transcript CSV file so that it will have transcriptions in the \"sentence\" column and audio file paths in the \"audio\" column\n","# The transcript file uses \"(\" to separate the transcription text from the audio file name, so we use it as the delimiter\n","test_dataset = load_dataset(\"csv\", data_files=[os.path.join(parliament_dataset_download_path, \"transcript.csv\")], delimiter=\"(\", column_names=[\"sentence\", \"audio\"], split=\"train\", encoding=\"latin-1\")"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:04.327596Z","iopub.status.busy":"2022-02-12T15:22:04.327173Z","iopub.status.idle":"2022-02-12T15:22:04.387944Z","shell.execute_reply":"2022-02-12T15:22:04.387282Z","shell.execute_reply.started":"2022-02-12T15:22:04.327556Z"},"trusted":true},"outputs":[],"source":["# We need to fix the audio filepaths so that they match the local directory paths because they are a bit different from the original paths\n","def fix_parliament_audio_paths(batch):\n"," batch[\"audio\"] = os.path.join(parliament_dataset_download_path, batch[\"audio\"].split(\")\")[0]+\".wav\")\n"," batch[\"sentence\"] = batch[\"sentence\"].strip()\n"," return batch\n","\n","test_dataset = test_dataset.map(fix_parliament_audio_paths)"]},
{"cell_type":"markdown","metadata":{},"source":["## Process audio files into numerical arrays inside the dataset\n","Note: this is needed for the dataset loaded from your own local files. For Common Voice 7.0 loaded from the Huggingface dataset hub, this has already been done automatically for you."]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:30:51.825342Z","iopub.status.busy":"2022-02-12T15:30:51.824758Z","iopub.status.idle":"2022-02-12T15:30:52.117907Z","shell.execute_reply":"2022-02-12T15:30:52.117042Z","shell.execute_reply.started":"2022-02-12T15:30:51.825302Z"},"trusted":true},"outputs":[],"source":["# Let's check one example of the test_dataset\n","# You should see the \"sentence\" key having the transcription text and the \"audio\" key having the path to the audio file\n","test_dataset[0]"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:31:12.966402Z","iopub.status.busy":"2022-02-12T15:31:12.965876Z","iopub.status.idle":"2022-02-12T15:31:12.978413Z","shell.execute_reply":"2022-02-12T15:31:12.977716Z","shell.execute_reply.started":"2022-02-12T15:31:12.966366Z"},"trusted":true},"outputs":[],"source":["# Let's decode the audio files into arrays inside the dataset\n","# Documentation about audio processing: https://huggingface.co/docs/datasets/audio_process.html\n","test_dataset = test_dataset.cast_column(\"audio\", Audio())"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:31:15.336629Z","iopub.status.busy":"2022-02-12T15:31:15.336061Z","iopub.status.idle":"2022-02-12T15:31:15.357479Z","shell.execute_reply":"2022-02-12T15:31:15.356736Z","shell.execute_reply.started":"2022-02-12T15:31:15.33659Z"},"trusted":true},"outputs":[],"source":["# Let's check one example of the test_dataset again\n","# You should now see \"array\" and \"sampling_rate\" keys inside the \"audio\" dict\n","test_dataset[0]"]},
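{"cell_type":"markdown","metadata":{},"source":["Optional: you can also listen to one decoded sample directly from the dataset to verify that the audio makes sense. This is a small sketch using the notebook's built-in `IPython.display.Audio` widget."]},
{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Optional: play the first audio sample with the notebook's audio widget\n","import IPython.display as ipd\n","\n","sample = test_dataset[0][\"audio\"]\n","ipd.Audio(data=sample[\"array\"], rate=sample[\"sampling_rate\"])"]},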
{"cell_type":"markdown","metadata":{},"source":["# 3. Load Finnish ASR model for testing\n","We'll use Huggingface's `transformers` library to easily load and use models available at Huggingface's model hub.\n"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:05.334043Z","iopub.status.busy":"2022-02-12T15:22:05.33375Z","iopub.status.idle":"2022-02-12T15:22:05.339128Z","shell.execute_reply":"2022-02-12T15:22:05.338305Z","shell.execute_reply.started":"2022-02-12T15:22:05.334004Z"},"trusted":true},"outputs":[],"source":["# Huggingface model hub's model ID\n","# e.g. \"Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2\" for the best 1B parameter model\n","# e.g. \"Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm\" for the smaller 300M parameter model\n","asr_model_name = \"Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2\""]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:05.343404Z","iopub.status.busy":"2022-02-12T15:22:05.342747Z","iopub.status.idle":"2022-02-12T15:22:15.977782Z","shell.execute_reply":"2022-02-12T15:22:15.977025Z","shell.execute_reply.started":"2022-02-12T15:22:05.343365Z"},"trusted":true},"outputs":[],"source":["# load the model's processor\n","processor = AutoProcessor.from_pretrained(asr_model_name)"]},
{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# OPTIONAL: change the decoder's default alpha and beta parameters for language model decoding\n","# Check this video to learn more about those parameters: https://youtu.be/mp7fHMTnK9A?t=1418\n","# TL;DR: alpha is the weight of the LM, so lower the alpha to give the LM less effect and raise it to increase its effect\n","processor.decoder.reset_params(\n"," alpha=0.5, # 0.5 by default\n"," beta=1.5, # 1.5 by default\n",")"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:15.979402Z","iopub.status.busy":"2022-02-12T15:22:15.979151Z","iopub.status.idle":"2022-02-12T15:23:58.143001Z","shell.execute_reply":"2022-02-12T15:23:58.14223Z","shell.execute_reply.started":"2022-02-12T15:22:15.979367Z"},"trusted":true},"outputs":[],"source":["# load the model and its config\n","model = AutoModelForCTC.from_pretrained(asr_model_name)\n","config = AutoConfig.from_pretrained(asr_model_name)"]},
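{"cell_type":"markdown","metadata":{},"source":["Optional: a quick way to verify the size class of the loaded model (roughly 1B parameters for the v2 model, 300M for the smaller one) is to count its parameters."]},
{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Optional: count the loaded model's parameters to verify its size class\n","num_params = sum(p.numel() for p in model.parameters())\n","print(f\"{num_params / 1e6:.0f}M parameters\")"]},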
\"Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm\" for the smaller 300M parameter model\n","asr_model_name = \"Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2\""]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:05.343404Z","iopub.status.busy":"2022-02-12T15:22:05.342747Z","iopub.status.idle":"2022-02-12T15:22:15.977782Z","shell.execute_reply":"2022-02-12T15:22:15.977025Z","shell.execute_reply.started":"2022-02-12T15:22:05.343365Z"},"trusted":true},"outputs":[],"source":["# load model's processor\n","processor = AutoProcessor.from_pretrained(asr_model_name)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# OPTIONAL: change decoder's default alpha and beta parameters for language model decoding\n","# Check this video for learning more about those parameters: https://youtu.be/mp7fHMTnK9A?t=1418\n","# TLDR: alpha is the weight of the LM so lower the alpha for LM to have less effect and higher the alpha to increase its effect\n","processor.decoder.reset_params(\n"," alpha=0.5, # 0.5 by default\n"," beta=1.5, # 1.5 by default\n",")"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:15.979402Z","iopub.status.busy":"2022-02-12T15:22:15.979151Z","iopub.status.idle":"2022-02-12T15:23:58.143001Z","shell.execute_reply":"2022-02-12T15:23:58.14223Z","shell.execute_reply.started":"2022-02-12T15:22:15.979367Z"},"trusted":true},"outputs":[],"source":["# load model and its config\n","model = AutoModelForCTC.from_pretrained(asr_model_name)\n","config = AutoConfig.from_pretrained(asr_model_name)"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:23:58.144695Z","iopub.status.busy":"2022-02-12T15:23:58.144215Z","iopub.status.idle":"2022-02-12T15:24:06.408931Z","shell.execute_reply":"2022-02-12T15:24:06.408187Z","shell.execute_reply.started":"2022-02-12T15:23:58.144661Z"},"trusted":true},"outputs":[],"source":["# Let's use Huggingface's easy-to-use ASR pipeline loaded with our model to transcribe our audio data\n","# To use GPU in the ASR pipeline, \"device\" needs to be 0, for CPU it should be -1\n","device = 0 if torch.cuda.is_available() else -1\n","asr = pipeline(\"automatic-speech-recognition\", model=model, config=config, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, decoder=processor.decoder, device=device)"]},{"cell_type":"markdown","metadata":{},"source":["# 4. Resample test dataset to the correct sampling rate required by the model\n","Our models are trained with audio data sampled at 16000 kHz so you need to use them with audio sampled at the same 16000 kHz. Luckily, Huggingface's `datasets` library offers easy ready method for resampling our testing dataset into correct sampling rate."]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:33:26.348746Z","iopub.status.busy":"2022-02-12T15:33:26.348448Z","iopub.status.idle":"2022-02-12T15:33:26.358524Z","shell.execute_reply":"2022-02-12T15:33:26.357815Z","shell.execute_reply.started":"2022-02-12T15:33:26.348717Z"},"trusted":true},"outputs":[],"source":["# Get the model's sampling rate (16000 with our models)\n","sampling_rate = processor.feature_extractor.sampling_rate\n","\n","# Resample our test dataset\n","test_dataset = test_dataset.cast_column(\"audio\", Audio(sampling_rate=sampling_rate))"]},{"cell_type":"markdown","metadata":{},"source":["# 5. 
{"cell_type":"markdown","metadata":{},"source":["# 5. Run test dataset through the model's ASR pipeline to get predicted transcriptions"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:24:06.42029Z","iopub.status.busy":"2022-02-12T15:24:06.419997Z","iopub.status.idle":"2022-02-12T15:24:06.432339Z","shell.execute_reply":"2022-02-12T15:24:06.431597Z","shell.execute_reply.started":"2022-02-12T15:24:06.420255Z"},"trusted":true},"outputs":[],"source":["# The test dataset's true target transcriptions can e.g. include special characters not relevant for ASR testing,\n","# so let's create a target transcription text normalization function\n","def normalize_text(text: str) -> str:\n"," \"\"\"DO ADAPT FOR YOUR USE CASE. This function normalizes the target text transcription.\"\"\"\n","\n"," CHARS_TO_IGNORE = [\",\", \"?\", \"¿\", \".\", \"!\", \"¡\", \";\", \";\", \":\", '\"\"', \"%\", '\"', \"�\", \"ʿ\", \"·\", \"჻\", \"~\", \"՞\",\n","  \"؟\", \"،\", \"।\", \"॥\", \"«\", \"»\", \"„\", \"“\", \"”\", \"「\", \"」\", \"‘\", \"’\", \"《\", \"》\", \"(\", \")\", \"[\", \"]\",\n","  \"{\", \"}\", \"=\", \"`\", \"_\", \"+\", \"<\", \">\", \"…\", \"–\", \"°\", \"´\", \"ʾ\", \"‹\", \"›\", \"©\", \"®\", \"—\", \"→\", \"。\",\n","  \"、\", \"﹂\", \"﹁\", \"‧\", \"~\", \"﹏\", \",\", \"{\", \"}\", \"(\", \")\", \"[\", \"]\", \"【\", \"】\", \"‥\", \"〽\",\n","  \"『\", \"』\", \"〝\", \"〟\", \"⟨\", \"⟩\", \"〜\", \":\", \"!\", \"?\", \"♪\", \"؛\", \"/\", \"\\\\\", \"º\", \"−\", \"^\", \"ʻ\", \"ˆ\"]\n","\n"," chars_to_remove_regex = f\"[{re.escape(''.join(CHARS_TO_IGNORE))}]\"\n","\n"," text = re.sub(chars_to_remove_regex, \"\", text.lower())\n"," text = re.sub(\"[-]\", \" \", text)\n","\n"," # In addition, we can normalize the target text, e.g. removing new line characters etc.\n"," # note that order is important here!\n"," token_sequences_to_ignore = [\"\\n\\n\", \"\\n\", \" \", \" \"]\n","\n"," for t in token_sequences_to_ignore:\n","  text = \" \".join(text.split(t))\n","\n"," return text"]},
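{"cell_type":"markdown","metadata":{},"source":["A minimal check with a made-up sentence shows what the normalization does: lowercasing, punctuation removal and hyphen splitting."]},
{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Quick check of the normalization with a made-up example sentence\n","# Expected output: \"tämä on asr mallin esimerkki\"\n","normalize_text(\"Tämä on, ASR-mallin esimerkki!\")"]},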
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:24:06.434179Z","iopub.status.busy":"2022-02-12T15:24:06.433435Z","iopub.status.idle":"2022-02-12T15:24:06.440821Z","shell.execute_reply":"2022-02-12T15:24:06.440014Z","shell.execute_reply.started":"2022-02-12T15:24:06.434143Z"},"trusted":true},"outputs":[],"source":["# Function used to get the model's predicted transcriptions and also do the target transcription normalization at the same time\n","def map_to_pred(batch):\n"," prediction = asr(batch[\"audio\"][\"array\"])\n"," # for very long audios (e.g. over 30 min) you may have to add audio chunking to avoid memory errors, read more here: https://huggingface.co/blog/asr-chunking\n"," # for example: prediction = asr(batch[\"audio\"][\"array\"], chunk_length_s=6, stride_length_s=(2, 2))\n","\n"," batch[\"prediction\"] = prediction[\"text\"]\n"," batch[\"target\"] = normalize_text(batch[\"sentence\"]) # normalize target text (e.g. make it lower case and remove punctuation)\n"," return batch"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:33:34.74169Z","iopub.status.busy":"2022-02-12T15:33:34.741424Z","iopub.status.idle":"2022-02-12T15:37:09.71817Z","shell.execute_reply":"2022-02-12T15:37:09.717482Z","shell.execute_reply.started":"2022-02-12T15:33:34.741661Z"},"trusted":true},"outputs":[],"source":["# Let's run our test dataset with the previously defined function to get the results\n","# This can take some time with large test datasets or if you run on a CPU\n","result = test_dataset.map(map_to_pred, remove_columns=test_dataset.column_names)"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:17.693528Z","iopub.status.busy":"2022-02-12T15:37:17.692981Z","iopub.status.idle":"2022-02-12T15:37:17.698636Z","shell.execute_reply":"2022-02-12T15:37:17.697998Z","shell.execute_reply.started":"2022-02-12T15:37:17.69349Z"},"trusted":true},"outputs":[],"source":["# Let's check one example of the results\n","# You should see the \"prediction\" key having the model's transcription prediction and the \"target\" key having the original target transcription\n","result[0]"]},
{"cell_type":"markdown","metadata":{},"source":["# 6. Compute WER and CER metrics for the results\n","Let's use the standard WER (Word Error Rate) and CER (Character Error Rate) metric methods from Huggingface's `datasets` library. WER is the number of word substitutions, deletions and insertions divided by the number of words in the reference, i.e. WER = (S + D + I) / N; CER is the same computed at the character level."]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:23.308834Z","iopub.status.busy":"2022-02-12T15:37:23.30826Z","iopub.status.idle":"2022-02-12T15:37:24.637832Z","shell.execute_reply":"2022-02-12T15:37:24.637113Z","shell.execute_reply.started":"2022-02-12T15:37:23.308794Z"},"trusted":true},"outputs":[],"source":["# load the ASR metrics from Huggingface's datasets library\n","wer = load_metric(\"wer\")\n","cer = load_metric(\"cer\")"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:25.893502Z","iopub.status.busy":"2022-02-12T15:37:25.892888Z","iopub.status.idle":"2022-02-12T15:37:26.0871Z","shell.execute_reply":"2022-02-12T15:37:26.086383Z","shell.execute_reply.started":"2022-02-12T15:37:25.893464Z"},"trusted":true},"outputs":[],"source":["# compute the ASR metrics\n","wer_result = wer.compute(references=result[\"target\"], predictions=result[\"prediction\"])\n","cer_result = cer.compute(references=result[\"target\"], predictions=result[\"prediction\"])"]},
{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:27.263482Z","iopub.status.busy":"2022-02-12T15:37:27.26322Z","iopub.status.idle":"2022-02-12T15:37:27.270282Z","shell.execute_reply":"2022-02-12T15:37:27.269442Z","shell.execute_reply.started":"2022-02-12T15:37:27.263445Z"},"trusted":true},"outputs":[],"source":["# print the metric results\n","result_str = f\"WER: {wer_result}\\n\" f\"CER: {cer_result}\"\n","print(result_str)"]},
{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":[]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.7.12"}},"nbformat":4,"nbformat_minor":4}