EXAONE-3.0-7.8B-Instruct

πŸ‘‹πŸ‘‹ We have revised our license to revitalize the research ecosystem. πŸ‘‹πŸ‘‹

Introduction

We introduce EXAONE-3.0-7.8B-Instruct, a pre-trained and instruction-tuned bilingual (English and Korean) generative model with 7.8 billion parameters. The model was pre-trained on 8T curated tokens and post-trained with supervised fine-tuning and direct preference optimization. It demonstrates highly competitive benchmark performance against other state-of-the-art open models of similar size.

For more details, please refer to our technical report, blog, and GitHub.

Quickstart

We recommend using transformers v4.41 or later.
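
If you are unsure which version is installed, a quick check along these lines can confirm it before running the quickstart (a minimal sketch; packaging ships as a dependency of transformers):

# Minimal sketch: confirm the installed transformers version meets the
# v4.41 requirement before running the quickstart below.
import transformers
from packaging.version import Version

assert Version(transformers.__version__) >= Version("4.41.0"), (
    f"Found transformers {transformers.__version__}; "
    "please run: pip install -U 'transformers>=4.41'"
)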

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in bfloat16; trust_remote_code is required because the
# repository ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained(
    "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct")

# Choose your prompt (the second assignment overwrites the first; keep one)
prompt = "Explain who you are"  # English example
prompt = "너의 소원을 말해봐"   # Korean example: "Tell me your wish"

messages = [
    {"role": "system",
     "content": "You are EXAONE model from LG AI Research, a helpful assistant."},
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),  # model.device follows the device_map="auto" placement
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128
)
print(tokenizer.decode(output[0]))
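
The print call above emits the full sequence, prompt and special tokens included. Below is a minimal sketch for decoding only the newly generated reply, and optionally streaming tokens as they are produced; TextStreamer is part of transformers, and the snippet reuses model, tokenizer, input_ids, and output from above:

# Decode only the tokens generated after the prompt, dropping special tokens.
response = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)

# Optional: stream tokens to stdout as they are generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids.to(model.device),
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128,
    streamer=streamer,
)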

Note

The EXAONE 3.0 instruction-tuned language model was trained to utilize the system prompt, so we highly recommend using the system prompt provided in the code snippet above.
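
For repeated use, the recommended system prompt can be baked into a small helper. The chat function below is a hypothetical convenience wrapper around the quickstart snippet, not part of the released API; it reuses the model and tokenizer loaded above:

# Hypothetical helper (not part of the released code): always injects the
# recommended EXAONE system prompt before generating.
SYSTEM_PROMPT = "You are EXAONE model from LG AI Research, a helpful assistant."

def chat(user_prompt, max_new_tokens=128):
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(
        input_ids.to(model.device),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=max_new_tokens,
    )
    # Return only the newly generated assistant reply.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(chat("Explain who you are"))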

Evaluation

We compared EXAONE-3.0-7.8B-Instruct with similar-sized instruction-tuned LLMs. To verify performance in real-world use cases, we measured benchmarks that correlate highly with LMSYS Chatbot Arena scores. Some experimental results are shown below; the full evaluation results can be found in the technical report.

| Language | Benchmark | EXAONE 3.0 7.8B Inst. | Llama 3.1 8B Inst. | Gemma 2 9B Inst. | QWEN 2 7B Inst. | Phi 3 7B Inst. | Mistral 7B Inst. |
|----------|-----------|-----------------------|--------------------|------------------|-----------------|----------------|------------------|
| English  | MT-Bench          | 9.01 | 7.95 | 8.52 | 8.41 | 8.52 | 7.72 |
| English  | Arena-Hard-v0.1   | 46.8 | 28.0 | 42.1 | 21.7 | 29.1 | 16.2 |
| English  | WildBench         | 48.2 | 34.5 | 41.5 | 34.9 | 32.8 | 29.0 |
| English  | AlpacaEval 2.0 LC | 45.0 | 31.5 | 47.5 | 24.5 | 37.1 | 31.0 |
| Korean   | KoMT-Bench[1]     | 8.92 | 6.06 | 7.92 | 7.69 | 4.87 | 5.20 |
| Korean   | LogicKor          | 8.62 | 5.40 | 8.07 | 6.12 | 3.76 | 3.42 |

  • [1] KoMT-Bench is a dataset created by translating MT-Bench into Korean; see its README for more details.

Limitation

The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The model generates responses based on token output probabilities, which are determined during training from the training data. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research.

  • Inappropriate answers may be generated, which contain personal, harmful or other inappropriate information.
  • Biased responses may be generated, which are associated with age, gender, race, and so on.
  • The generated responses rely heavily on statistics from the training data, which can result in the generation of semantically or syntactically incorrect sentences.
  • Since the model does not reflect the latest information, the responses may be false or contradictory.

LG AI Research strives to reduce the potential risks that may arise from the EXAONE language model. When using the EXAONE language model, users may not engage in any malicious activities (e.g., entering illegal information) that induce the creation of inappropriate outputs violating LG AI's ethical principles.

License

The model is licensed under the EXAONE AI Model License Agreement 1.1 - NC.

Citation

@article{exaone-3.0-7.8B-instruct,
  title={EXAONE 3.0 7.8B Instruction Tuned Language Model},
  author={LG AI Research},
  journal={arXiv preprint arXiv:2408.03541},
  year={2024}
}

Contact

LG AI Research Technical Support: contact_us@lgresearch.ai
