---
license: apache-2.0
---

CLIP model post-trained on 80M human face images.

Trained with the [TencentPretrain](https://github.com/Tencent/TencentPretrain) framework on 8 × P40 GPUs:

```
python3 pretrain.py --dataset_path faceclip.pt \
                    --pretrained_model_path models/clip-b32.bin \
                    --output_model_path models/faceclip-b32.bin \
                    --config_path models/clip/base-32_config.json \
                    --vocab_path vocab.json --merges_path merges.txt --tokenizer clip \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 --data_processor clip --accumulation_steps 8 --learning_rate 2e-5 \
                    --total_steps 200000 --save_checkpoint_steps 20000 --batch_size 160 --report_steps 500
```
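A note on scale, under the assumption (not stated above) that `--batch_size` counts examples per GPU: each optimizer update then aggregates roughly 160 × 8 GPUs × 8 accumulation steps = 10,240 image-text pairs, with `--accumulation_steps 8` trading extra forward passes for a larger effective batch than fits in memory at once.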

How to use:
```
from PIL import Image
import requests
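from transformers import CLIPModel, CLIPProcessor

# Hedged sketch of typical zero-shot use via the transformers CLIP API, not
# the card's verbatim example; "<this-model-repo>" is a placeholder for this
# model's actual Hub id.
model = CLIPModel.from_pretrained("<this-model-repo>")
processor = CLIPProcessor.from_pretrained("<this-model-repo>")

# Standard demo image used throughout CLIP examples.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Score the image against candidate captions; higher probability = better match.
inputs = processor(text=["a photo of a man", "a photo of a woman"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)
```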