---
library_name: keras-hub
license: apache-2.0
language:
- en
tags:
- text-classification
pipeline_tag: text-classification
---
## Model Overview
DistilBert is a set of language models published by Hugging Face. They are efficient, distilled versions of BERT intended for the classification and embedding of text, not for text generation. See the model card linked below for benchmarks, data sources, and intended use cases.

Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).

## Links

* [DistilBert Quickstart Notebook](https://www.kaggle.com/code/matthewdwatson/distilbert-quickstart)
* [DistilBert API Documentation](https://keras.io/api/keras_hub/models/distil_bert/)
* [DistilBert Model Card](https://huggingface.co/distilbert/distilbert-base-uncased)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)

## Installation

Keras and KerasHub can be installed with:

```bash
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
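
Keras 3 can run on any of these backends. The backend is selected with the `KERAS_BACKEND` environment variable, which must be set before Keras is imported; a minimal sketch (here JAX, but `"tensorflow"` and `"torch"` work the same way):

```python
import os

# Must be set before `keras` is imported anywhere in the process.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

import keras
import keras_hub
```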

## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name                 | Parameters | Description                                            |
|-----------------------------|------------|--------------------------------------------------------|
| distil_bert_base_en_uncased | 66.36M     | 6-layer model where all input is lowercased.           |
| distil_bert_base_en         | 65.19M     | 6-layer model where case is maintained.                |
| distil_bert_base_multi      | 134.73M    | 6-layer multilingual model where case is maintained.   |
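
Any preset name from the table above can also be passed to the lower-level backbone class to embed text rather than classify it. A minimal sketch, assuming the `distil_bert_base_en_uncased` preset:

```python
import keras_hub

# Tokenizer/packer that turns raw strings into token ids and a padding mask.
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
    sequence_length=128,
)
# Bare encoder with no classification head.
backbone = keras_hub.models.DistilBertBackbone.from_preset(
    "distil_bert_base_en_uncased",
)

# Contextual embeddings of shape (batch_size, sequence_length, 768).
embeddings = backbone(preprocessor(["The quick brown fox jumped."]))
```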

## Example Usage
```python
import keras
import keras_hub
import numpy as np
```

Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Use a shorter sequence length.
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
    sequence_length=128,
)
# Pretrained classifier.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "distil_bert_base_en_uncased",
    num_classes=4,
    preprocessor=preprocessor,
)
classifier.fit(x=features, y=labels, batch_size=2)

# Re-compile (e.g., with a new learning rate)
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
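
After fine-tuning, the same classifier runs inference directly on raw strings, since the attached preprocessor handles tokenization and padding. A short sketch:

```python
# Returns logits of shape (num_samples, num_classes).
logits = classifier.predict(["The quick brown fox jumped."])
predicted_class = np.argmax(logits, axis=-1)
```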

Preprocessed integer data.
```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2)
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "distil_bert_base_en_uncased",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
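
The `token_ids` and `padding_mask` arrays above mirror what the preprocessor emits. If in doubt about the expected format, one quick way to inspect it:

```python
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
    sequence_length=12,
)
# A dict with "token_ids" and "padding_mask", each of shape (1, 12).
print(preprocessor(["The quick brown fox jumped."]))
```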

## Example Usage with Hugging Face URI

```python
import keras
import keras_hub
import numpy as np
```

Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Use a shorter sequence length.
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "hf://keras/distil_bert_base_en_uncased",
    sequence_length=128,
)
# Pretrained classifier.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "hf://keras/distil_bert_base_en_uncased",
    num_classes=4,
    preprocessor=preprocessor,
)
classifier.fit(x=features, y=labels, batch_size=2)

# Re-compile (e.g., with a new learning rate)
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```

Preprocessed integer data.
```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2)
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "hf://keras/distil_bert_base_en_uncased",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
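
A fine-tuned classifier can also be saved to a local preset directory and published under your own Hugging Face namespace, as described in the [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/). A minimal sketch; `your-username` is a placeholder:

```python
# Save the fine-tuned task (weights, config, and vocabulary) locally.
classifier.save_to_preset("./distil_bert_finetuned")

# Upload the preset to a Hugging Face repo you control.
keras_hub.upload_preset(
    "hf://your-username/distil_bert_finetuned",
    "./distil_bert_finetuned",
)
```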