qnguyen3 committed on
Commit c35fe3b · 1 Parent(s): f4a0293

Update README.md
Files changed (1): README.md (+2 −2)
README.md CHANGED

@@ -36,7 +36,7 @@ USER:{user's question}<|endoftext|>ASSISTANT:
 ## Model Description
 - **Developed by:** [https://www.tii.ae](https://www.tii.ae);
 - **Finetuned by:** [Virtual Interactive](https://vilm.org);
-- **Language(s) (NLP):** English, German, Spanish, French, Portugese, Russian, Italian, Vietnamese, Indonesian, Chinese, Japanese and Chinese;
+- **Language(s) (NLP):** English, German, Spanish, French, Portugese, Russian, Italian, Vietnamese, Indonesian, Chinese, Japanese and Chinese
 - **Training Time:** 3,000 A100 Hours

 ### Out-of-Scope Use
@@ -53,7 +53,7 @@ We recommend users of Vulture-40B to consider finetuning it for the specific set

 ## How to Get Started with the Model

-To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
+To run inference with the model in full `bfloat16` precision you need approximately 4xA100 80GB or equivalent.

 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
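The second hunk revises the stated hardware requirement for `bfloat16` inference from 8×A100 80GB to 4×A100 80GB. A rough sanity check of that figure (a sketch with illustrative numbers and a function name of my own, not part of the model card): a 40B-parameter model in bfloat16 needs about 80 GB for the weights alone, before KV cache and activation memory, so a single 80 GB card is not enough on its own.

```python
# Back-of-envelope weight-memory estimate for bfloat16 inference.
# The function name and figures below are illustrative, not from the card.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    # bfloat16 stores each parameter in 2 bytes
    return num_params * bytes_per_param / 1e9

# 40B parameters in bf16 -> 80.0 GB of weights alone; spreading the
# model across several 80 GB GPUs leaves room for KV cache and activations.
print(weight_memory_gb(40e9))
```

This only bounds the weight storage; the actual GPU count needed also depends on batch size, sequence length, and the serving framework's overhead.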