---
license: apache-2.0
language:
- ru
- en
- de
- es
- it
- ja
- vi
- zh
- fr
- pt
- id
- ko
pipeline_tag: text-generation
---
# 🌍 Vulture-40B
***Vulture-40B*** is a fine-tuned, causal decoder-only LLM built by Virtual Interactive (VILM) on top of **Falcon-40B** by [TII](https://www.tii.ae). We collected a new dataset of news articles and Wikipedia pages in **12 languages** (**80GB** in total) and continued the pretraining of Falcon-40B on it. Finally, we constructed a multilingual instruction dataset following **Alpaca**'s techniques.

While ***Vulture-40B*** is an adapter freely usable under **Apache-2.0**, **Falcon-40B** itself remains available only under the **[Falcon-40B TII License](https://huggingface.co/spaces/tiiuae/falcon-40B-license/blob/main/LICENSE.txt)** and **[Acceptable Use Policy](https://huggingface.co/spaces/tiiuae/falcon-40B-license/blob/main/ACCEPTABLE_USE_POLICY.txt)**. Users should ensure that any commercial application based on ***Vulture-40B*** complies with the restrictions on **Falcon-40B**'s use.

*Technical Report coming soon* 🤗

## Prompt Format

The recommended prompt format is:

```
A chat between a curious user and an artificial intelligence assistant.

USER:{user's question}<|endoftext|>ASSISTANT:
```
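
A small helper like the following (the function name `build_prompt` is our own illustration, not part of the model card) can assemble a prompt in this format:

```python
# System preamble expected by Vulture-40B's chat format.
SYSTEM = "A chat between a curious user and an artificial intelligence assistant."


def build_prompt(question: str) -> str:
    """Wrap a user question in Vulture-40B's expected chat format."""
    return f"{SYSTEM}\n\nUSER:{question}<|endoftext|>ASSISTANT:"


print(build_prompt("Where is Ho Chi Minh City located?"))
```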

# Model Details
## Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Finetuned by:** [Virtual Interactive](https://vilm.org);
- **Language(s) (NLP):** English, German, Spanish, French, Portuguese, Russian, Italian, Vietnamese, Indonesian, Chinese, Japanese and Korean;
- **Training Time:** 3,000 A100 hours

### Out-of-Scope Use

Production use without an adequate assessment of risks and mitigations; any use case that may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Vulture-40B is trained on large-scale corpora representative of the web, so it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Vulture-40B consider fine-tuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.

## How to Get Started with the Model

To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model = "tiiuae/falcon-40b"
adapters_name = "vilm/vulture-40B"

tokenizer = AutoTokenizer.from_pretrained(model)
m = AutoModelForCausalLM.from_pretrained(
    model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Load the Vulture-40B adapter on top of the Falcon-40B base model.
m = PeftModel.from_pretrained(m, adapters_name)

# Vietnamese example: "Thành phố Hồ Chí Minh nằm ở đâu?" = "Where is Ho Chi Minh City located?"
prompt = "A chat between a curious user and an artificial intelligence assistant.\n\nUSER:Thành phố Hồ Chí Minh nằm ở đâu?<|endoftext|>ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

output = m.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    max_new_tokens=50,
)
print(tokenizer.decode(output[0].to("cpu")))
```
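
The decoded output contains the prompt followed by the model's continuation. A small post-processing step (our own sketch; the `ASSISTANT:` delimiter comes from the prompt format above) can isolate the assistant's reply:

```python
def extract_reply(decoded: str) -> str:
    """Return only the assistant's reply from the full decoded sequence."""
    # Everything after the final "ASSISTANT:" marker is the model's answer;
    # also strip any end-of-text token the model may emit.
    reply = decoded.rsplit("ASSISTANT:", 1)[-1]
    return reply.replace("<|endoftext|>", "").strip()


example = (
    "A chat between a curious user and an artificial intelligence assistant.\n\n"
    "USER:Thành phố Hồ Chí Minh nằm ở đâu?<|endoftext|>"
    "ASSISTANT: Thành phố Hồ Chí Minh nằm ở miền Nam Việt Nam.<|endoftext|>"
)
print(extract_reply(example))
```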