{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "albert_glue_fine_tuning_tutorial", "provenance": [], "collapsed_sections": [], "toc_visible": true }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "accelerator": "TPU" }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "y8SJfpgTccDB", "colab_type": "text" }, "source": [ "\n", "" ] }, { "cell_type": "code", "metadata": { "id": "wHQH4OCHZ9bq", "colab_type": "code", "cellView": "form", "colab": {} }, "source": [ "# @title Copyright 2020 The ALBERT Authors. All Rights Reserved.\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# http://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "# ==============================================================================" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "rkTLZ3I4_7c_", "colab_type": "text" }, "source": [ "# ALBERT End to End (Fine-tuning + Predicting) with Cloud TPU" ] }, { "cell_type": "markdown", "metadata": { "id": "1wtjs1QDb3DX", "colab_type": "text" }, "source": [ "## Overview\n", "\n", "ALBERT is \"A Lite\" version of BERT, a popular unsupervised language representation learning algorithm. 
ALBERT uses parameter-reduction techniques that allow for large-scale configurations, overcome previous memory limitations, and achieve better behavior with respect to model degradation.\n", "\n", "For a technical description of the algorithm, see our paper:\n", "\n", "https://arxiv.org/abs/1909.11942\n", "\n", "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut\n", "\n", "This Colab demonstrates using a free Colab Cloud TPU to fine-tune GLUE tasks built on top of pretrained ALBERT models and\n", "run predictions on the tuned model. The Colab demonstrates loading pretrained ALBERT models from both [TF Hub](https://www.tensorflow.org/hub) and checkpoints.\n", "\n", "**Note:** You will need a GCP (Google Cloud Platform) account and a GCS (Google Cloud\n", "Storage) bucket for this Colab to run.\n", "\n", "Please follow the [Google Cloud TPU quickstart](https://cloud.google.com/tpu/docs/quickstart) for how to create a GCP account and a GCS bucket. You have [$300 free credit](https://cloud.google.com/free/) to get started with any GCP product. You can learn more about Cloud TPU at https://cloud.google.com/tpu/docs.\n", "\n", "This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select **File > View on GitHub**." ] }, { "cell_type": "markdown", "metadata": { "id": "Ld-JXlueIuPH", "colab_type": "text" }, "source": [ "## Instructions" ] }, { "cell_type": "markdown", "metadata": { "id": "POkof5uHaQ_c", "colab_type": "text" }, "source": [ "