DunnBC22 committed
Commit d28ab38
Parent: 86b4bed

Update README.md

Files changed (1): README.md (+29 -4)
README.md CHANGED
@@ -6,6 +6,11 @@ tags:
  - sentence-similarity
  language:
  - en
+ metrics:
+ - accuracy
+ - f1
+ - recall
+ - precision
  ---

  # DunnBC22/sentence-t5-base-FT-Quora_Sentence_Similarity-LG
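The metrics added to the card metadata above (accuracy, f1, recall, precision) match the pair-classification scores reported in the evaluation table further down. As a rough, hypothetical illustration only (not part of this commit), scores of this kind could be computed from binary duplicate/not-duplicate predictions with scikit-learn:

```python
# Illustration only: computing the four metric types listed in the metadata
# for a binary duplicate/not-duplicate prediction task.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical labels and predictions, for demonstration purposes only.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```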
@@ -33,12 +38,32 @@ embeddings = model.encode(sentences)
  print(embeddings)
  ```

-
-
  ## Evaluation Results

  For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=DunnBC22/sentence-t5-base-FT-Quora_Sentence_Similarity-LG)

+ | Metric | Measure | Value | Notes |
+ | :--------: | :--------: | :--------: | :--------: |
+ | Accuracy | **Cosine-Similarity** | 85.93 | Threshold: 0.8320 |
+ | F1 | Cosine-Similarity | 82.89 | Threshold: 0.8178 |
+ | Precision | Cosine-Similarity | 77.43 | - |
+ | Recall | Cosine-Similarity | 89.18 | - |
+ | Average Precision | Cosine-Similarity | 87.13 | - |
+ | Accuracy | **Manhattan-Distance** | 85.95 | Threshold: 12.7721 |
+ | F1 | Manhattan-Distance | 82.89 | Threshold: 13.5008 |
+ | Precision | Manhattan-Distance | 76.91 | - |
+ | Recall | Manhattan-Distance | 89.89 | - |
+ | Average Precision | Manhattan-Distance | 87.13 | - |
+ | Accuracy | **Euclidean-Distance** | 85.93 | Threshold: 0.5797 |
+ | F1 | Euclidean-Distance | 82.89 | Threshold: 0.6037 |
+ | Precision | Euclidean-Distance | 77.43 | - |
+ | Recall | Euclidean-Distance | 89.18 | - |
+ | Average Precision | Euclidean-Distance | 87.13 | - |
+ | Accuracy | **Dot-Product** | 85.93 | Threshold: 0.8320 |
+ | F1 | Dot-Product | 82.89 | Threshold: 0.8178 |
+ | Precision | Dot-Product | 77.43 | - |
+ | Recall | Dot-Product | 89.18 | - |
+ | Average Precision | Dot-Product | 87.14 | - |

  ## Training
  The model was trained with the parameters:
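The thresholds in the table above are the decision boundaries applied to each similarity or distance score when turning it into a duplicate/not-duplicate prediction. The snippet below is a minimal sketch (not part of this commit) of applying the reported cosine-similarity accuracy threshold of 0.8320; it assumes the model loads through the standard sentence-transformers API used earlier in the card, and the two questions are made up:

```python
# Minimal sketch: classify a question pair with the cosine-similarity threshold
# reported in the evaluation table (0.8320). Example questions are hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("DunnBC22/sentence-t5-base-FT-Quora_Sentence_Similarity-LG")

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"

emb1, emb2 = model.encode([q1, q2], convert_to_tensor=True)
score = util.cos_sim(emb1, emb2).item()

# Pairs scoring at or above the threshold are treated as duplicates.
print(f"cosine similarity: {score:.4f} -> duplicate: {score >= 0.8320}")
```

For the Manhattan-Distance and Euclidean-Distance rows the thresholds are distances, so a pair would be treated as a duplicate when its score falls at or below the reported value rather than above it.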
@@ -79,7 +104,7 @@ One way to improve the results of this model is to use a larger checkpoint of T5
  The larger checkpoints are:

  | Checkpoint | # of Train Params |
- |:----------:|:-----------------:|
+ | :--------: | :---------------: |
  | T5-Base | 220 Million* |
  | T5-Large | 770 Million |
  | T5-3B | 3 Billion |
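As a sketch of the suggestion in this hunk (not part of the commit), fine-tuning could start from one of the larger sentence-T5 checkpoints instead of the base one; the Hub id below assumes the sentence-transformers naming scheme (sentence-t5-large, sentence-t5-xl, sentence-t5-xxl):

```python
from sentence_transformers import SentenceTransformer

# Assumed Hub id for the ~770M-parameter checkpoint; swap it in before repeating
# the fine-tuning recipe described in the Training section of the card.
base_model = SentenceTransformer("sentence-transformers/sentence-t5-large")

embeddings = base_model.encode(["How do I learn Python quickly?"])
print(embeddings.shape)
```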
@@ -98,4 +123,4 @@ SentenceTransformer(

  ## Citing & Authors

- <!--- Describe where people can find more information -->
+ Dataset Source: https://www.kaggle.com/datasets/quora/question-pairs-dataset
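For context on the dataset link added above, here is a minimal loading sketch (not part of this commit). The file name questions.csv and the question1/question2/is_duplicate columns are assumptions about the Kaggle download, and the path is hypothetical:

```python
# Sketch: read the Quora question pairs into sentence-transformers InputExample
# objects for pair-classification training or evaluation.
import pandas as pd
from sentence_transformers import InputExample

df = pd.read_csv("questions.csv").dropna(subset=["question1", "question2"])

examples = [
    InputExample(texts=[row.question1, row.question2], label=float(row.is_duplicate))
    for row in df.itertuples()
]
print(f"{len(examples)} question pairs loaded")
```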
 