michaelfeil committed on
Commit
3d61b47
1 Parent(s): a8e4f3e

Update usage example with infinity


```
docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
michaelf34/infinity:0.0.68 \
v2 --model-id Alibaba-NLP/gte-base-en-v1.5 --revision "main" --dtype bfloat16 --batch-size 32 --device cuda --engine torch --port 7997
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO 2024-11-12 23:40:58,030 infinity_emb INFO: infinity_server.py:89
Creating 1 engines:
engines=['Alibaba-NLP/gte-base-en-v1.5']
INFO 2024-11-12 23:40:58,035 infinity_emb INFO: Anonymized telemetry.py:30
telemetry can be disabled via environment variable
`DO_NOT_TRACK=1`.
INFO 2024-11-12 23:40:58,042 infinity_emb INFO: select_model.py:64
model=`Alibaba-NLP/gte-base-en-v1.5` selected, using
engine=`torch` and device=`cuda`
INFO 2024-11-12 23:41:00,320 SentenceTransformer.py:216
sentence_transformers.SentenceTransformer
INFO: Load pretrained SentenceTransformer:
Alibaba-NLP/gte-base-en-v1.5
A new version of the following files was downloaded from https://huggingface.co/Alibaba-NLP/new-impl:
- configuration.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co/Alibaba-NLP/new-impl:
- modeling.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.


INFO 2024-11-12 23:43:33,218 infinity_emb INFO: Adding acceleration.py:56
optimizations via Huggingface optimum.
The class `optimum.bettertransformers.transformation.BetterTransformer` is deprecated and will be removed in a future release.
WARNING 2024-11-12 23:43:33,220 infinity_emb WARNING: acceleration.py:67
BetterTransformer is not available for model: <class
'transformers_modules.Alibaba-NLP.new-impl.40ced75c3
017eb27626c9d4ea981bde21a2662f4.modeling.NewModel'>
Continue without bettertransformer modeling code.
INFO 2024-11-12 23:43:33,469 infinity_emb INFO: Getting select_model.py:97
timings for batch_size=32 and avg tokens per
sentence=1
3.29 ms tokenization
6.17 ms inference
0.14 ms post-processing
9.60 ms total
embeddings/sec: 3332.34
INFO 2024-11-12 23:43:33,674 infinity_emb INFO: Getting select_model.py:103
timings for batch_size=32 and avg tokens per
sentence=512
16.20 ms tokenization
71.20 ms inference
0.21 ms post-processing
87.61 ms total
embeddings/sec: 365.26
```
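
Once the container above is up, it exposes an OpenAI-style HTTP API on port 7997. The sketch below queries the `/embeddings` route using only the Python standard library; it assumes the server is running locally, and the helper names `build_embedding_request` and `embed` are illustrative, not part of Infinity itself.

```python
# Minimal client sketch for the OpenAI-compatible /embeddings endpoint
# served by the Infinity container above (assumed at localhost:7997).
import json
import urllib.request


def build_embedding_request(texts, model="Alibaba-NLP/gte-base-en-v1.5"):
    """Build the JSON payload for an OpenAI-style embeddings call."""
    return {"model": model, "input": texts}


def embed(texts, base_url="http://localhost:7997"):
    """POST the texts to /embeddings and return one vector per input."""
    payload = json.dumps(build_embedding_request(texts)).encode()
    req = urllib.request.Request(
        f"{base_url}/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response follows the OpenAI schema: data[i]["embedding"] is the vector.
    return [item["embedding"] for item in body["data"]]


# Example (with the container running):
#   vectors = embed(["what is the capital of France?"])
#   gte-base-en-v1.5 produces 768-dimensional embeddings.
```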

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
@@ -2713,6 +2713,14 @@ const similarities = document_embeddings.map(x => 100 * dot(source_embeddings, x
  console.log(similarities); // [34.504930869007296, 64.03973265120138, 19.520042686034362]
  ```
 
+ Use with infinity:
+ [Infinity](https://github.com/michaelfeil/infinity) is an MIT-licensed server for OpenAI-compatible deployment.
+ ```
+ docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
+ michaelf34/infinity:0.0.68 \
+ v2 --model-id Alibaba-NLP/gte-base-en-v1.5 --revision "4c742dc2b781e4ab062a4a77f4f7cbad4bdee970" --dtype bfloat16 --batch-size 32 --device cuda --engine torch --port 7997
+ ```
+
  ## Training Details
 
  ### Training Data