jpacifico committed
Commit b69cfc9
1 Parent(s): 14be45d

Update README.md

Files changed (1)
  1. README.md +21 -1
README.md CHANGED
@@ -32,7 +32,27 @@ It can be used on a CPU device, compatible with llama.cpp and LM Studio
 
 ### Usage
 
- Test notebook coming soon, follow me so as not to miss the updates ^^
+ ```python
+
+ model_id = "jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0"
+ model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0})
+ tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True, padding_side='left')
+ streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
+
+ def stream_frenchalpaca(user_prompt):
+     runtimeFlag = "cuda:0"
+     system_prompt = 'Tu trouveras ci-dessous une instruction qui décrit une tâche. Rédige une réponse qui complète de manière appropriée la demande.\n\n'
+     B_INST, E_INST = "### Instruction:\n", "### Response:\n"
+     prompt = f"{system_prompt}{B_INST}{user_prompt.strip()}\n\n{E_INST}"
+     inputs = tokenizer([prompt], return_tensors="pt").to(runtimeFlag)
+     streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+     _ = model.generate(**inputs, streamer=streamer, max_new_tokens=500)
+
+ stream_frenchalpaca("your prompt here")
+ ```
+
+ Colab Notebook available on my Github:
+ https://github.com/jpacifico/French-Alpaca/blob/main/French_Alpaca_Llama3_inference_test_colab.ipynb
 
 ### Limitations
 
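
Note: the snippet added by this commit references `bnb_config` and several transformers classes without importing or defining them (they are presumably set up earlier in the linked Colab notebook). A minimal self-contained sketch, assuming a 4-bit NF4 bitsandbytes quantization config and an illustrative French test prompt; the notebook's actual settings may differ:

```python
# Self-contained sketch of the usage snippet committed above.
# Assumptions (not part of the committed snippet): the imports below and a
# 4-bit bitsandbytes quantization config standing in for `bnb_config`.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TextStreamer,
)

model_id = "jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0"

# Hypothetical quantization config; the original notebook may use other values
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map={"": 0}
)
tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True, padding_side="left")


def stream_frenchalpaca(user_prompt):
    # Alpaca-style prompt format used by the committed snippet
    system_prompt = (
        "Tu trouveras ci-dessous une instruction qui décrit une tâche. "
        "Rédige une réponse qui complète de manière appropriée la demande.\n\n"
    )
    prompt = f"{system_prompt}### Instruction:\n{user_prompt.strip()}\n\n### Response:\n"
    inputs = tokenizer([prompt], return_tensors="pt").to("cuda:0")
    # Stream decoded tokens to stdout as they are generated
    streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    model.generate(**inputs, streamer=streamer, max_new_tokens=500)


stream_frenchalpaca("Quelle est la capitale de la France ?")
```

On a machine without a GPU, the `quantization_config`, `device_map`, and `"cuda:0"` arguments would need to be dropped or changed, since bitsandbytes 4-bit loading typically requires CUDA; CPU use is instead covered by the GGUF/llama.cpp path mentioned in the README.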