KireetiKunam committed (verified)
Commit 1418072 · Parent(s): 3c63b18

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
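This pooling configuration collapses the transformer's token embeddings into a single sentence vector by taking the [CLS] token; every other pooling mode is disabled. As a quick illustration (not part of the committed files), an equivalent module can be built with the sentence-transformers `models.Pooling` class; treat the snippet as a sketch:

```python
from sentence_transformers.models import Pooling

# Sketch: a Pooling module mirroring 1_Pooling/config.json --
# CLS-token pooling over 1024-dimensional token embeddings.
pooling = Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,  # the library enables mean pooling by default
)
print(pooling.get_pooling_mode_str())  # expected: "cls"
```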
README.md ADDED
@@ -0,0 +1,625 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:164
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: What significant multi-modal models were released in 2024
+ sentences:
+ - 'In 2024, almost every significant model vendor released multi-modal models. We
+ saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
+ audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
+ Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
+ OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
+ image and video models from Amazon Nova.
+
+ In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
+ It now has plugins for a whole collection of different vision models.'
+ - 'When @v0 first came out we were paranoid about protecting the prompt with all
+ kinds of pre and post processing complexity.
+
+ We completely pivoted to let it rip. A prompt without the evals, models, and especially
+ UX is like getting a broken ASML machine without a manual'
+ - 'Terminology aside, I remain skeptical as to their utility based, once again,
+ on the challenge of gullibility. LLMs believe anything you tell them. Any systems
+ that attempts to make meaningful decisions on your behalf will run into the same
+ roadblock: how good is a travel agent, or a digital assistant, or even a research
+ tool if it can’t distinguish truth from fiction?
+
+ Just the other day Google Search was caught serving up an entirely fake description
+ of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
+ movie listing from a fan fiction wiki.'
+ - source_sentence: What is the advantage of a 64GB Mac for running models
+ sentences:
+ - 'The boring yet crucial secret behind good system prompts is test-driven development.
+ You don’t write down a system prompt and find ways to test it. You write down
+ tests and find a system prompt that passes them.
+
+
+ It’s become abundantly clear over the course of 2024 that writing good automated
+ evals for LLM-powered systems is the skill that’s most needed to build useful
+ applications on top of these models. If you have a strong eval suite you can adopt
+ new models faster, iterate better and build more reliable and useful product features
+ than your competition.
+
+ Vercel’s Malte Ubl:'
+ - 'On paper, a 64GB Mac should be a great machine for running models due to the
+ way the CPU and GPU can share the same memory. In practice, many models are released
+ as model weights and libraries that reward NVIDIA’s CUDA over other platforms.
+
+ The llama.cpp ecosystem helped a lot here, but the real breakthrough has been
+ Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.
+
+ Apple’s mlx-lm Python library supports running a wide range of MLX-compatible
+ models on my Mac, with excellent performance. mlx-community on Hugging Face offers
+ more than 1,000 models that have been converted to the necessary format.'
+ - 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
+ available from its launch in June. This was a momentus change, because for the
+ previous year free users had mostly been restricted to GPT-3.5 level models, meaning
+ new users got a very inaccurate mental model of what a capable LLM could actually
+ do.
+
+ That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
+ Pro. This $200/month subscription service is the only way to access their most
+ capable model, o1 Pro.
+
+ Since the trick behind the o1 series (and the future models it will undoubtedly
+ inspire) is to expend more compute time to get better results, I don’t think those
+ days of free access to the best available models are likely to return.'
+ - source_sentence: What is the main innovation discussed in the context regarding
+ model scaling?
+ sentences:
+ - 'The biggest innovation here is that it opens up a new way to scale a model: instead
+ of improving model performance purely through additional compute at training time,
+ models can now take on harder problems by spending more compute on inference.
+
+ The sequel to o1, o3 (they skipped “o2” for European trademark reasons) was announced
+ on 20th December with an impressive result against the ARC-AGI benchmark, albeit
+ one that likely involved more than $1,000,000 of compute time expense!
+
+ o3 is expected to ship in January. I doubt many people have real-world problems
+ that would benefit from that level of compute expenditure—I certainly don’t!—but
+ it appears to be a genuine next step in LLM architecture for taking on much harder
+ problems.'
+ - Meanwhile, it’s increasingly common for end users to develop wildly inaccurate
+ mental models of how these things work and what they are capable of. I’ve seen
+ so many examples of people trying to win an argument with a screenshot from ChatGPT—an
+ inherently ludicrous proposition, given the inherent unreliability of these models
+ crossed with the fact that you can get them to say anything if you prompt them
+ right.
+ - 'I think this means that, as individual users, we don’t need to feel any guilt
+ at all for the energy consumed by the vast majority of our prompts. The impact
+ is likely neglible compared to driving a car down the street or maybe even watching
+ a video on YouTube.
+
+ Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign
+ that training costs can and should continue to drop.
+
+ For less efficient models I find it useful to compare their energy usage to commercial
+ flights. The largest Llama 3 model cost about the same as a single digit number
+ of fully loaded passenger flights from New York to London. That’s certainly not
+ nothing, but once trained that model can be used by millions of people at no extra
+ training cost.'
+ - source_sentence: What new feature was introduced in ChatGPT's voice mode in December?
+ sentences:
+ - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t
+ have their own inference-scaling models in the works. Meta published a relevant
+ paper Training Large Language Models to Reason in a Continuous Latent Space in
+ December.
+
+ Was the best currently available LLM trained in China for less than $6m?
+
+ Not quite, but almost! It does make for a great attention-grabbing headline.
+
+ The big news to end the year was the release of DeepSeek v3—dropped on Hugging
+ Face on Christmas Day without so much as a README file, then followed by documentation
+ and a paper the day after that.'
+ - 'Then in December, the Chatbot Arena team introduced a whole new leaderboard for
+ this feature, driven by users building the same interactive app twice with two
+ different models and voting on the answer. Hard to come up with a more convincing
+ argument that this feature is now a commodity that can be effectively implemented
+ against all of the leading models.
+
+ I’ve been tinkering with a version of this myself for my Datasette project, with
+ the goal of letting users use prompts to build and iterate on custom widgets and
+ data visualizations against their own data. I also figured out a similar pattern
+ for writing one-shot Python programs, enabled by uv.'
+ - The most recent twist, again from December (December was a lot) is live video.
+ ChatGPT voice mode now provides the option to share your camera feed with the
+ model and talk about what you can see in real time. Google Gemini have a preview
+ of the same feature, which they managed to ship the day before ChatGPT did.
+ - source_sentence: Why is it important to learn how to work with unreliable technology
+ like LLMs?
+ sentences:
+ - 'Longer inputs dramatically increase the scope of problems that can be solved
+ with an LLM: you can now throw in an entire book and ask questions about its contents,
+ but more importantly you can feed in a lot of example code to help the model correctly
+ solve a coding problem. LLM use-cases that involve long inputs are far more interesting
+ to me than short prompts that rely purely on the information already baked into
+ the model weights. Many of my tools were built using this pattern.'
+ - 'There’s a flipside to this too: a lot of better informed people have sworn off
+ LLMs entirely because they can’t see how anyone could benefit from a tool with
+ so many flaws. The key skill in getting the most out of LLMs is learning to work
+ with tech that is both inherently unreliable and incredibly powerful at the same
+ time. This is a decidedly non-obvious skill to acquire!
+
+ There is so much space for helpful education content here, but we need to do do
+ a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.
+
+ Knowledge is incredibly unevenly distributed
+
+ Most people have heard of ChatGPT by now. How many have heard of Claude?'
+ - 'I think people who complain that LLM improvement has slowed are often missing
+ the enormous advances in these multi-modal models. Being able to run prompts against
+ images (and audio and video) is a fascinating new way to apply these models.
+
+ Voice and live camera mode are science fiction come to life
+
+ The audio and live video modes that have started to emerge deserve a special mention.
+
+ The ability to talk to ChatGPT first arrived in September 2023, but it was mostly
+ an illusion: OpenAI used their excellent Whisper speech-to-text model and a new
+ text-to-speech model (creatively named tts-1) to enable conversations with the
+ ChatGPT mobile apps, but the actual model just saw text.'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: Unknown
+ type: unknown
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.875
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.9583333333333334
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 1.0
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 1.0
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.875
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.3194444444444444
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.20000000000000004
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.10000000000000002
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.875
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.9583333333333334
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 1.0
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 1.0
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.9455223360506796
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.9270833333333334
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.9270833333333334
+ name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("KireetiKunam/legal-ft-2")
+ # Run inference
+ sentences = [
+ 'Why is it important to learn how to work with unreliable technology like LLMs?',
+ 'There’s a flipside to this too: a lot of better informed people have sworn off LLMs entirely because they can’t see how anyone could benefit from a tool with so many flaws. The key skill in getting the most out of LLMs is learning to work with tech that is both inherently unreliable and incredibly powerful at the same time. This is a decidedly non-obvious skill to acquire!\nThere is so much space for helpful education content here, but we need to do do a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.\nKnowledge is incredibly unevenly distributed\nMost people have heard of ChatGPT by now. How many have heard of Claude?',
+ 'I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models. Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.\nVoice and live camera mode are science fiction come to life\nThe audio and live video modes that have started to emerge deserve a special mention.\nThe ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.875 |
+ | cosine_accuracy@3 | 0.9583 |
+ | cosine_accuracy@5 | 1.0 |
+ | cosine_accuracy@10 | 1.0 |
+ | cosine_precision@1 | 0.875 |
+ | cosine_precision@3 | 0.3194 |
+ | cosine_precision@5 | 0.2 |
+ | cosine_precision@10 | 0.1 |
+ | cosine_recall@1 | 0.875 |
+ | cosine_recall@3 | 0.9583 |
+ | cosine_recall@5 | 1.0 |
+ | cosine_recall@10 | 1.0 |
+ | **cosine_ndcg@10** | **0.9455** |
+ | cosine_mrr@10 | 0.9271 |
+ | cosine_map@100 | 0.9271 |
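These numbers come from the `InformationRetrievalEvaluator` referenced above. For readers who want to run the same evaluation on their own data, a minimal sketch follows; the query and document ids below are placeholders, since the evaluation set used for this card is not included in the commit:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("KireetiKunam/legal-ft-2")

# Placeholder data: map query ids and document ids to text, plus relevance labels.
queries = {"q1": "What is the advantage of a 64GB Mac for running models"}
corpus = {
    "d1": "On paper, a 64GB Mac should be a great machine for running models ...",
    "d2": "OpenAI made GPT-4o free for all users in May ...",
}
relevant_docs = {"q1": {"d1"}}  # which corpus ids answer each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
print(results)
```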
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 164 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 164 samples:
+ |         | sentence_0 | sentence_1 |
+ |:--------|:-----------|:-----------|
+ | type    | string     | string     |
+ | details | <ul><li>min: 3 tokens</li><li>mean: 15.43 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.65 tokens</li><li>max: 204 tokens</li></ul> |
+ * Samples:
+ | sentence_0 | sentence_1 |
+ |:-----------|:-----------|
+ | <code>What key themes were identified in the review of LLMs in 2024?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
+ | <code>What pivotal moments in the field of LLMs were highlighted in the article?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
+ | <code>What advancements have been made in multimodal vision technology?</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+ ```json
+ {
+ "loss": "MultipleNegativesRankingLoss",
+ "matryoshka_dims": [
+ 768,
+ 512,
+ 256,
+ 128,
+ 64
+ ],
+ "matryoshka_weights": [
+ 1,
+ 1,
+ 1,
+ 1,
+ 1
+ ],
+ "n_dims_per_step": -1
+ }
+ ```
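Because this loss supervises the leading 768, 512, 256, 128 and 64 dimensions of the 1024-dimensional output, the embeddings can be truncated to those sizes with relatively little quality loss. A brief sketch, assuming a sentence-transformers release that supports the `truncate_dim` argument (the 3.4.1 version listed below does):

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding.
model = SentenceTransformer("KireetiKunam/legal-ft-2", truncate_dim=256)

embeddings = model.encode(["What is the advantage of a 64GB Mac for running models"])
print(embeddings.shape)  # (1, 256)
```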
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
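For readers who want to reproduce a comparable run, the hyperparameters above map onto the standard `SentenceTransformerTrainer` setup. The sketch below is illustrative only: the two-row dataset stands in for the actual 164 question-passage pairs, which are not published with this commit, and step-wise evaluation is omitted.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder pairs; the real run used 164 (question, passage) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What key themes were identified in the review of LLMs in 2024?",
        "What is the advantage of a 64GB Mac for running models",
    ],
    "sentence_1": [
        "Things we learned about LLMs in 2024 ...",
        "On paper, a 64GB Mac should be a great machine for running models ...",
    ],
})

# In-batch negatives, wrapped so the leading 768/512/256/128/64 dims are trained too.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",
    num_train_epochs=10,
    per_device_train_batch_size=10,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```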
+
+ ### Training Logs
+ | Epoch | Step | cosine_ndcg@10 |
+ |:------:|:----:|:--------------:|
+ | 1.0 | 17 | 0.9382 |
+ | 2.0 | 34 | 0.9161 |
+ | 2.9412 | 50 | 0.9270 |
+ | 3.0 | 51 | 0.9270 |
+ | 4.0 | 68 | 0.9283 |
+ | 5.0 | 85 | 0.9437 |
+ | 5.8824 | 100 | 0.9455 |
+ | 6.0 | 102 | 0.9455 |
+ | 7.0 | 119 | 0.9455 |
+ | 8.0 | 136 | 0.9455 |
+ | 8.8235 | 150 | 0.9455 |
+ | 9.0 | 153 | 0.9455 |
+ | 10.0 | 170 | 0.9455 |
+
+
+ ### Framework Versions
+ - Python: 3.11.11
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.3.1
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+ title={Matryoshka Representation Learning},
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+ year={2024},
+ eprint={2205.13147},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.48.3",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
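The backbone described above is a standard BERT-large encoder: 24 layers, 16 attention heads, hidden size 1024 and a 512-position limit. If you only need the raw transformer, without the pooling and normalization modules, the usual transformers API applies; a small sketch:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained("KireetiKunam/legal-ft-2")
print(config.model_type, config.num_hidden_layers, config.hidden_size)  # bert 24 1024

# Note: this path yields token-level hidden states only; the CLS pooling and
# L2 normalization from the sentence-transformers modules are not applied.
tokenizer = AutoTokenizer.from_pretrained("KireetiKunam/legal-ft-2")
model = AutoModel.from_pretrained("KireetiKunam/legal-ft-2")
```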
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.4.1",
+ "transformers": "4.48.3",
+ "pytorch": "2.5.1+cu124"
+ },
+ "prompts": {
+ "query": "Represent this sentence for searching relevant passages: "
+ },
+ "default_prompt_name": null,
+ "similarity_fn_name": "cosine"
+ }
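The `prompts` entry follows the Arctic Embed convention: queries are prefixed with "Represent this sentence for searching relevant passages: " while passages are encoded without a prefix. In sentence-transformers the prompt is applied by name; a short sketch of retrieval-style usage:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("KireetiKunam/legal-ft-2")

# The "query" prompt from config_sentence_transformers.json is prepended automatically.
query_emb = model.encode(
    ["What is the advantage of a 64GB Mac for running models"], prompt_name="query"
)
doc_emb = model.encode(
    ["On paper, a 64GB Mac should be a great machine for running models ..."]
)

print(model.similarity(query_emb, doc_emb))  # 1x1 cosine-similarity matrix
```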
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eac41d81d8e5754a813ef1c52453826a737362b1b61a376c4268ce8404b7ba6f
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "max_length": 512,
+ "model_max_length": 512,
+ "pad_to_multiple_of": null,
+ "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "[SEP]",
+ "stride": 0,
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff