internetoftim committed · Commit 2b93394 · Parent(s): 6081a1c · Update README.md

README.md CHANGED
# Graphcore/bert-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency for training and running models on Graphcore’s IPUs, a completely new kind of massively parallel processor built to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through Hugging Face Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and providing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and Masked LM.

It was trained with two pretraining objectives: Masked language modelling (MLM) and Next sentence prediction (NSP). Unlike a traditional language model, which sees words one after another, MLM allows BERT to learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
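
The snippet below is a small illustration of the MLM objective only, using the `transformers` `fill-mask` pipeline with the separately hosted `bert-base-uncased` checkpoint (this repository itself contains no weights):

```python
from transformers import pipeline

# Masked language modelling: predict the token hidden behind [MASK].
# bert-base-uncased is used purely as an example checkpoint; this repository
# provides an IPUConfig, not model weights.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("BERT learns a [MASK] representation of text."):
    print(prediction["token_str"], round(prediction["score"], 3))
```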
Through these pre-trained representations, BERT reduces the engineering effort needed to build task-specific architectures, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the BERT base model (e.g. [bert-base-uncased](https://huggingface.co/bert-base-uncased) or [bert-base-cased](https://huggingface.co/bert-base-cased)) on Graphcore IPUs.

**This model contains no model weights, only an IPUConfig.**
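
For example (a minimal sketch, assuming the `optimum-graphcore` package is installed), the configuration can be loaded directly from this repository and later paired with a separate weights checkpoint:

```python
from optimum.graphcore import IPUConfig

# Load only the IPU execution configuration stored in this repository;
# the model weights must come from a separate BERT checkpoint.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")
print(ipu_config)
```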
## Usage
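Below is a minimal sketch (assuming `pip install optimum-graphcore` and access to IPU hardware) of fine-tuning a BERT checkpoint with this IPUConfig via `IPUTrainer`; the dataset, label count and hyperparameters are chosen purely for illustration:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

# Weights come from a standard BERT checkpoint; this repository supplies only the IPUConfig.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

# Small illustrative dataset: 1% of SST-2, tokenized to a fixed length.
dataset = load_dataset("glue", "sst2", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

training_args = IPUTrainingArguments(
    output_dir="./bert-base-ipu-sst2",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Note that running `IPUTrainer` requires Graphcore IPU hardware and the Poplar SDK (via `poptorch`); the tokenization and model-loading steps above also work on CPU or GPU.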