

Shining Valiant 2 is a chat model built on Llama 3.1 70b, finetuned on our data for friendship, insight, knowledge and enthusiasm.

Version

This is the 2024-10-30 release of Shining Valiant 2 for Llama 3.1 70b.

This release uses our newest datasets, open-sourced for everyone's use, including our expanded science-instruct dataset. This release features improvements in logical thinking and structured reasoning as well as physics, chemistry, biology, astronomy, Earth science, computer science, and information theory.

Future upgrades will continue to expand Shining Valiant's technical knowledge base.

Help us and recommend Shining Valiant 2 to your friends!

Prompting Guide

Shining Valiant 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-70B-ShiningValiant2"

# Load the model with bfloat16 weights, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "What is the role of lysosomal enzymes in the regulation of cellular processes?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

# The pipeline returns the full chat history; the last message is the reply.
print(outputs[0]["generated_text"][-1])
```
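Under the hood, the pipeline's chat template renders the `messages` list into the Llama 3.1 Instruct prompt layout. The sketch below is only an illustration of that layout, assuming the standard Llama 3.1 special tokens; in practice, the tokenizer's `apply_chat_template` method is the authoritative way to build the prompt.

```python
# Illustrative sketch of the Llama 3.1 Instruct prompt layout.
# Assumes the standard special tokens; prefer tokenizer.apply_chat_template.
def render_llama31_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn is a role header followed by the content and an end-of-turn token.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hello!"},
]
print(render_llama31_prompt(messages))
```

Because the pipeline applies this template for you, you only ever need to pass the plain `messages` list shown in the script above.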

The Model

Shining Valiant 2 is built on top of Llama 3.1 70b Instruct.

The current version of Shining Valiant 2 is trained on technical knowledge using sequelbox/Celestia, complex reasoning using sequelbox/Spurline, and general chat capability using sequelbox/Supernova.

We're super excited that Shining Valiant's dataset has been fully open-sourced! She's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.


Shining Valiant 2 is created by Valiant Labs.

Check out our HuggingFace page for our open-source Build Tools models, including the newest version of code-specialist Enigma!

Follow us on X for updates on our models!

We care about open source, for everyone to use.

We encourage others to finetune further from our models.
