variante committed on
Commit
a31f4be
1 Parent(s): f9cc6f6

Update README.md

Files changed (1):
  README.md  +3 -3
README.md CHANGED
@@ -28,7 +28,7 @@ LLaRA is an open-source visuomotor policy trained by fine-tuning [LLaVA-7b-v1.5]
 For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb)
 
 **Model date:**
-llava-1.5-7b-llara-D-inBC-Aux-D-VIMA-80k was trained in June 2024.
+llava-1.5-7b-llara-D-inBC-Aux-B-VIMA-80k was trained in June 2024.
 
 **Paper or resources for more information:**
 https://github.com/LostXine/LLaRA
@@ -38,7 +38,7 @@ https://github.com/LostXine/LLaRA/issues
 
 ## Intended use
 **Primary intended uses:**
-The primary use of LLaRA is research on large multimodal models and chatbots.
+The primary use of LLaRA is research on large multimodal models for robotics.
 
 **Primary intended users:**
-The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+The primary intended users of the model are researchers and hobbyists in robotics, computer vision, natural language processing, machine learning, and artificial intelligence.