bababababooey committed
Commit 3be96ad
1 parent: 7343926

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -277,12 +277,10 @@ extra_gated_button_content: Submit
 extra_gated_eu_disallowed: true
 ---
 
-test of swapping the language model in 3.2. i used v000000/L3-8B-Stheno-v3.2-abliterated
-
----
-
 ## Model Information
 
+*test of swapping the language model in 3.2. i used v000000/L3-8B-Stheno-v3.2-abliterated*
+
 The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.
 
 **Model Developer**: Meta