Commit 3dac88a (parent 937e259) by riccardomusmeci: Update README.md
---
license: apache-2.0
library_name: mlx-llm
language:
- en
tags:
- mlx
- exbert
datasets:
- bookcorpus
- wikipedia
---

# BERT large model (uncased) - MLX

Pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

## Model description

Please refer to the [original model card](https://huggingface.co/bert-large-uncased) for more details on bert-large-uncased.

## Use it with mlx-llm

Install `mlx-llm` from GitHub:
```bash
git clone https://github.com/riccardomusmeci/mlx-llm
cd mlx-llm
pip install .
```

Run:
```python
from mlx_llm.model import create_model
from transformers import BertTokenizer
import mlx.core as mx

model = create_model("bert-large-uncased")  # downloads weights from this repository
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")

batch = ["This is an example of BERT working on MLX."]
tokens = tokenizer(batch, return_tensors="np", padding=True)
tokens = {key: mx.array(v) for key, v in tokens.items()}

output, pooled = model(**tokens)
```
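Here `output` holds the per-token hidden states (shape `[batch, seq_len, hidden]`) and `pooled` the `[CLS]`-based pooled output. If you want a single sentence embedding from the token states instead, a common approach is mask-aware mean pooling. A minimal NumPy sketch (not part of `mlx-llm` itself; MLX arrays can be converted with `np.array(...)`, and the `[batch, seq_len, hidden]` layout is assumed):

```python
import numpy as np

def mean_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    hidden_states: [batch, seq_len, hidden] per-token model outputs.
    attention_mask: [batch, seq_len], 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, :, None].astype(hidden_states.dtype)  # [batch, seq_len, 1]
    summed = (hidden_states * mask).sum(axis=1)                    # sum over real tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                 # avoid divide-by-zero
    return summed / counts                                         # [batch, hidden]

# Toy example: batch of 1, seq_len 3 (last position is padding), hidden size 2
h = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
m = np.array([[1, 1, 0]])
print(mean_pool(h, m))  # [[2. 3.]] — padded token excluded from the average
```

The mask multiplication zeroes out padding before summing, so padded positions never distort the embedding; dividing by the per-sequence token count rather than `seq_len` keeps sentences of different lengths comparable.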