bwang0911 committed on
Commit
7b3c4a6
1 Parent(s): c508578

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -25,15 +25,15 @@ license: apache-2.0
 
  ## Intended Usage & Model Info
 
- `jina-embedding-t-en-v1` is a language model that has been trained using Jina AI's Linnaeus-Clean dataset.
+ `jina-embedding-t-en-v1` is a tiny language model that has been trained using Jina AI's Linnaeus-Clean dataset.
  This dataset consists of 380 million pairs of sentences, which include query-document pairs.
  These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
  The Linnaeus-Full dataset, from which the Linnaeus-Clean dataset is derived, originally contained 1.6 billion sentence pairs.
 
  The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.
 
- With a compact size of just 14 million parameters,
- the model enables lightning-fast inference while still delivering impressive performance.
+ With a tiny size of just 14 million parameters,
+ the model enables lightning-fast inference on CPU, while still delivering impressive performance.
  Additionally, we provide the following options:
 
  - `jina-embedding-t-en-v1`: 14 million parameters **(you are here)**.
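One of the use cases the README text names is semantic textual similarity, which reduces to comparing embedding vectors with cosine similarity. A minimal sketch of that comparison step, with stand-in vectors (in practice these would be embeddings produced by `jina-embedding-t-en-v1`; the helper name `cos_sim` is ours, not from the model card):

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in 3-d embeddings for illustration; real model embeddings
# would be higher-dimensional outputs of the encoder.
a = np.array([0.1, 0.3, 0.5])
b = np.array([0.2, 0.1, 0.4])

print(f"{cos_sim(a, b):.4f}")
```

Scores near 1.0 indicate semantically similar texts; this is also the ranking signal typically used for the retrieval and reranking use cases mentioned above.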