juliuslipp and Xenova committed
Commit b4f2635
1 parent: b85038e

Upload ONNX weights + add transformers.js code/tags (#2)


- Upload ONNX weights (7a055cbd36c0b250e778017460b956afe6a15c83)
- Add transformers.js sample code + tags (affc1c96c21894a0ebf5ad0d68aa0544d0e02943)
- Update README.md (38287a9818172d5ebac83a92cd611b2252beea6b)


Co-authored-by: Joshua <Xenova@users.noreply.huggingface.co>

Files changed (3)
  1. README.md +34 -0
  2. onnx/model.onnx +3 -0
  3. onnx/model_quantized.onnx +3 -0
README.md CHANGED
@@ -1,6 +1,7 @@
---
tags:
- mteb
+ - transformers.js
model-index:
- name: mxbai-angle-large-v1
  results:
@@ -2703,6 +2704,39 @@ similarities = cos_sim(embeddings[0], embeddings[1:])
print('similarities:', similarities)
```

+ ### Transformers.js
+
+ If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
+ ```bash
+ npm i @xenova/transformers
+ ```
+
+ You can then use the model to compute embeddings like this:
+
+ ```js
+ import { pipeline, cos_sim } from '@xenova/transformers';
+
+ // Create a feature extraction pipeline
+ const extractor = await pipeline('feature-extraction', 'mixedbread-ai/mxbai-embed-large-v1', {
+     quantized: false, // Comment out this line to use the quantized version
+ });
+
+ // Generate sentence embeddings
+ const docs = [
+     'Represent this sentence for searching relevant passages: A man is eating a piece of bread',
+     'A man is eating food.',
+     'A man is eating pasta.',
+     'The girl is carrying a baby.',
+     'A man is riding a horse.',
+ ]
+ const output = await extractor(docs, { pooling: 'cls' });
+
+ // Compute similarity scores
+ const [source_embeddings, ...document_embeddings] = output.tolist();
+ const similarities = document_embeddings.map(x => cos_sim(source_embeddings, x));
+ console.log(similarities); // [0.7919578577247139, 0.6369278664248345, 0.16512018371357193, 0.3620778366720027]
+ ```
+
### Using API

You’ll be able to use the models through our API as well. The API is coming soon and will have some exciting features. Stay tuned!
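
Editor's note, not part of the committed README: the commit also ships onnx/model_quantized.onnx, and the comment on the `quantized: false` line above says that removing it switches to the quantized weights. A minimal sketch of that variant, assuming @xenova/transformers falls back to the quantized ONNX model when the option is omitted:

```js
// Minimal sketch: same pipeline as the README example, but without `quantized: false`,
// so the quantized weights (onnx/model_quantized.onnx) are used, assuming that is
// the library's default behaviour.
import { pipeline, cos_sim } from '@xenova/transformers';

const extractor = await pipeline('feature-extraction', 'mixedbread-ai/mxbai-embed-large-v1');

const docs = [
    'Represent this sentence for searching relevant passages: A man is eating a piece of bread',
    'A man is eating food.',
];
const output = await extractor(docs, { pooling: 'cls' });

const [source_embeddings, ...document_embeddings] = output.tolist();
console.log(document_embeddings.map(x => cos_sim(source_embeddings, x)));
```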
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adb53ed475faa339bfad3bd2bdb7e6a30b4f47280ade9811f81bef7953f9ab77
+ size 1336854282
onnx/model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11bda26d2ee754b20d46c90d0fae7eb5a71e0f947e74261afd6ad640ebbcfa7f
+ size 336983163
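
Editor's note: both ONNX files are committed as Git LFS pointers, so the weights themselves are addressed by the sha256 digest in the `oid` line. A minimal Node.js sketch (not part of the commit) for checking a downloaded file against that digest, using only the standard `node:crypto` and `node:fs` modules; the local path is hypothetical:

```js
// Minimal sketch: recompute the sha256 of a downloaded weight file and compare
// it with the oid recorded in the Git LFS pointer above.
import { createHash } from 'node:crypto';
import { createReadStream } from 'node:fs';

async function sha256OfFile(path) {
    const hash = createHash('sha256');
    for await (const chunk of createReadStream(path)) {
        hash.update(chunk);
    }
    return hash.digest('hex');
}

// Compare against the oid from onnx/model_quantized.onnx's pointer file.
const digest = await sha256OfFile('onnx/model_quantized.onnx');
console.log(digest === '11bda26d2ee754b20d46c90d0fae7eb5a71e0f947e74261afd6ad640ebbcfa7f');
```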