Files changed (1)
  1. README.md +8 -49
README.md CHANGED
@@ -39,58 +39,17 @@ This is the model card of a πŸ€— transformers model that has been pushed on the

## How to use

- This repo provides full model weight files for AkaLlama-70B-v0.1.

- # Use with transformers

- See the snippet below for usage with Transformers:

- ```python
- import transformers
- import torch
-
- model_id = "mirlab/AkaLlama-llama3-70b-v0.1-GGUF"
-
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model_id,
-     model_kwargs={"torch_dtype": torch.bfloat16},
-     device_map="auto",
- )
-
- # Korean system prompt: introduces the model as AkaLlama, built by the Yonsei
- # University MIR lab, and instructs it to answer in Korean, safely and truthfully.
- system_prompt = """당신은 μ—°μ„ΈλŒ€ν•™κ΅ λ©€ν‹°λͺ¨λ‹¬ 연ꡬ싀 (MIR lab) 이 λ§Œλ“  λŒ€κ·œλͺ¨ μ–Έμ–΄ λͺ¨λΈμΈ AkaLlama (μ•„μΉ΄λΌλ§ˆ) μž…λ‹ˆλ‹€.
- λ‹€μŒ 지침을 λ”°λ₯΄μ„Έμš”:
- 1. μ‚¬μš©μžκ°€ λ³„λ„λ‘œ μš”μ²­ν•˜μ§€ μ•ŠλŠ” ν•œ 항상 ν•œκΈ€λ‘œ μ†Œν†΅ν•˜μ„Έμš”.
- 2. μœ ν•΄ν•˜κ±°λ‚˜ λΉ„μœ€λ¦¬μ , 차별적, μœ„ν—˜ν•˜κ±°λ‚˜ λΆˆλ²•μ μΈ λ‚΄μš©μ΄ 닡변에 ν¬ν•¨λ˜μ–΄μ„œλŠ” μ•ˆ λ©λ‹ˆλ‹€.
- 3. 질문이 말이 λ˜μ§€ μ•Šκ±°λ‚˜ 사싀에 λΆ€ν•©ν•˜μ§€ μ•ŠλŠ” 경우 μ •λ‹΅ λŒ€μ‹  κ·Έ 이유λ₯Ό μ„€λͺ…ν•˜μ„Έμš”. μ§ˆλ¬Έμ— λŒ€ν•œ 닡을 λͺ¨λ₯Έλ‹€λ©΄ 거짓 정보λ₯Ό κ³΅μœ ν•˜μ§€ λ§ˆμ„Έμš”.
- 4. μ•ˆμ „μ΄λ‚˜ μœ€λ¦¬μ— μœ„λ°°λ˜μ§€ μ•ŠλŠ” ν•œ μ‚¬μš©μžμ˜ λͺ¨λ“  μ§ˆλ¬Έμ— μ™„μ „ν•˜κ³  ν¬κ΄„μ μœΌλ‘œ λ‹΅λ³€ν•˜μ„Έμš”."""
-
- messages = [
-     {"role": "system", "content": system_prompt},
-     {"role": "user", "content": "λ„€ 이름은 뭐야?"},  # "What's your name?"
- ]
-
- prompt = pipeline.tokenizer.apply_chat_template(
-     messages,
-     tokenize=False,
-     add_generation_prompt=True
- )
-
- terminators = [
-     pipeline.tokenizer.eos_token_id,
-     pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
- ]
-
- outputs = pipeline(
-     prompt,
-     max_new_tokens=256,
-     eos_token_id=terminators,
-     do_sample=True,
-     temperature=0.6,
-     top_p=0.9,
- )
- print(outputs[0]["generated_text"][len(prompt):])
- # λ‚΄ 이름은 AkaLlamaμž…λ‹ˆλ‹€! λ‚˜λŠ” μ–Έμ–΄ λͺ¨λΈλ‘œ, μ‚¬μš©μžμ™€ λŒ€ν™”ν•˜λŠ” 데 도움을 μ£ΌκΈ° μœ„ν•΄ λ§Œλ“€μ–΄μ‘ŒμŠ΅λ‹ˆλ‹€. λ‚˜λŠ” λ‹€μ–‘ν•œ μ£Όμ œμ— λŒ€ν•œ μ§ˆλ¬Έμ— λ‹΅ν•˜κ³ , μƒˆλ‘œμš΄ 아이디어λ₯Ό μ œκ³΅ν•˜λ©°, 문제λ₯Ό ν•΄κ²°ν•˜λŠ” 데 도움이 될 수 μžˆμŠ΅λ‹ˆλ‹€. μ‚¬μš©μžκ°€ μ›ν•˜λŠ” μ •λ³΄λ‚˜ 도움을 받도둝 μ΅œμ„ μ„ λ‹€ν•  κ²ƒμž…λ‹ˆλ‹€!
- # (English: "My name is AkaLlama! I am a language model built to help users in
- #  conversation. I can answer questions on many topics, offer new ideas, and help
- #  solve problems. I will do my best to give users the information or help they want!")
+ This repo provides quantized model weight files for AkaLlama-70B-v0.1.
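
+ As an alternative to the `wget` step below, a weight file can be fetched with the Hugging Face CLI — a sketch, with `smthing.gguf` standing in for an actual file name from this repo:

+ ```bash
+ # download one quantized weight file into the current directory
+ huggingface-cli download mirlab/AkaLlama-llama3-70b-v0.1-GGUF smthing.gguf --local-dir .
+ ```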

+ ### Chat with `ollama`

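+ `ollama create` builds a local model from a Modelfile. A minimal sketch, assuming the weight file downloaded below (swap in the actual `.gguf` name you pick from this repo):

+ ```
+ # minimal Modelfile sketch: point ollama at the downloaded GGUF
+ FROM ./smthing.gguf
+ # stop token for the Llama 3 chat format that AkaLlama builds on
+ PARAMETER stop "<|eot_id|>"
+ ```
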
+ ```bash
+ # download a quantized model weight file
+ wget https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/resolve/main/smthing.gguf

+ # register the weights with ollama using the Modelfile sketched above, then chat
+ # ("akallama" is an illustrative model name, not fixed by this card)
+ ollama create akallama -f Modelfile
+ ollama run akallama "λ„€ 이름은 뭐야?"  # "What's your name?"
  ```
 
  ## Evaluation