---
language:
- en
license: apache-2.0
tags:
- text-generation
base_model: JackFram/llama-160m
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
- totally-not-an-llm/EverythingLM-data-V3
- Open-Orca/SlimOrca-Dedup
- databricks/databricks-dolly-15k
- THUDM/webglm-qa
widget:
- messages:
  - role: system
    content: You are a helpful assistant, who answers with empathy.
  - role: user
    content: Got a question for you!
  - role: assistant
    content: "Sure! What's it?"
  - role: user
    content: Why do you love cats so much!? 🐈
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with empathy."
  - role: user
    content: Who is Mona Lisa?
- messages:
  - role: system
    content: You are a helpful assistant who provides concise responses.
  - role: user
    content: Heya!
  - role: assistant
    content: Hi! How may I help you today?
  - role: user
    content: I need to build a simple website. Where should I start learning about web development?
- messages:
  - role: user
    content: Invited some friends to come home today. Give me some ideas for games to play with them!
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with details and curiosity."
  - role: user
    content: What are some potential applications for quantum computing?
- messages:
  - role: system
    content: You are a helpful assistant who gives creative responses.
  - role: user
    content: Write the specs of a game about mages in a fantasy world.
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with details."
  - role: user
    content: Tell me about the pros and cons of social media.
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with confidence."
  - role: user
    content: What is a dog?
  - role: assistant
    content: 'A dog is a four-legged, domesticated animal that is a member of the class Mammalia, which includes all mammals. Dogs are known for their loyalty, playfulness, and ability to be trained for various tasks. They are also used for hunting, herding, and as service animals.'
  - role: user
    content: What is the color of an apple?
inference:
  parameters:
    max_new_tokens: 250
    penalty_alpha: 0.5
    top_k: 4
    repetition_penalty: 1.01
model-index:
- name: Llama-160M-Chat-v1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 24.74
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 35.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 44.16
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 51.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
---

# A Llama Chat Model of 160M Parameters

- Base model: [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m)
- Datasets:
  - [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
  - [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3)
  - [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
  - [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
  - [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
- Availability in other ML formats:
  - GGUF: [Felladrin/gguf-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/gguf-Llama-160M-Chat-v1)
  - ONNX: [Felladrin/onnx-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/onnx-Llama-160M-Chat-v1)
  - MLC: [Felladrin/mlc-q4f16-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/mlc-q4f16-Llama-160M-Chat-v1)
  - MLX: [mlx-community/Llama-160M-Chat-v1-4bit-mlx](https://huggingface.co/mlx-community/Llama-160M-Chat-v1-4bit-mlx)

## Recommended Prompt Format

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```

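This is the ChatML layout. As a minimal, illustrative sketch (the system and user messages below are made up), here is how the placeholders are filled for a single turn; in practice, the tokenizer's `apply_chat_template`, shown in the Usage Example below, is expected to produce this same layout for you:

```python
# Illustrative sketch only: the system/user messages are made up, and in practice
# `tokenizer.apply_chat_template` (see the Usage Example below) builds this prompt.
system_message = "You are a helpful assistant who provides concise responses."
user_message = "Heya!"

prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)
```
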
## Recommended Inference Parameters

```yml
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.01
```

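Together, `penalty_alpha` and `top_k` select contrastive search in the Transformers generation API, while `repetition_penalty` mildly discourages repeated text. As a minimal sketch (the example message is made up), the same settings can also be passed directly to `model.generate()` instead of going through a pipeline:

```python
# Minimal sketch using the standard Transformers generation API; the example
# message is made up. `penalty_alpha` + `top_k` together enable contrastive search.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Felladrin/Llama-160M-Chat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Heya!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids,
    max_new_tokens=250,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
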
## Usage Example

```python
from transformers import pipeline

generate = pipeline("text-generation", "Felladrin/Llama-160M-Chat-v1")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

prompt = generate.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

output = generate(
    prompt,
    max_new_tokens=1024,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
)

print(output[0]["generated_text"])
```

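By default, the pipeline output includes the prompt followed by the generated reply. If only the assistant's completion is wanted, the pipeline's `return_full_text=False` option drops the prompt; a brief sketch reusing `generate` and `prompt` from the example above:

```python
# Same call as above (reuses `generate` and `prompt`), but returning only the
# newly generated assistant reply instead of prompt + reply.
output = generate(
    prompt,
    max_new_tokens=1024,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
    return_full_text=False,
)
print(output[0]["generated_text"])
```
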
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Llama-160M-Chat-v1).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 30.27 |
| AI2 Reasoning Challenge (25-Shot) | 24.74 |
| HellaSwag (10-Shot)               | 35.29 |
| MMLU (5-Shot)                     | 26.13 |
| TruthfulQA (0-shot)               | 44.16 |
| Winogrande (5-shot)               | 51.30 |
| GSM8k (5-shot)                    |  0.00 |