
Description

SPARK-mini-instruct is an instruction-tuned version of SPARK-mini-base, a 3.8B-parameter, domain-specific language model trained on an extensive dataset curated from the nuclear power domain.

The model was developed by continually pretraining Microsoft's Phi-3-mini-4k-instruct on over 35B tokens of high-quality data curated from millions of public documents originating within the nuclear power domain, and then performing an SFT step by fine-tuning on a curated set of ~1000 in-domain instructions and ~9000 general instructions. SPARK-mini-instruct was trained by Nuclearn AI, and is released as a research artifact, a demonstration tool, and a domain-specific base LLM for further fine-tuning by downstream practitioners working within or tangential to the nuclear industry domain.

SPARK-mini-instruct is the instruction-SFT version of SPARK-mini-base: it responds to zero-shot prompts in a chat-like format.

Uses

Nuclearn targets a few specific use cases with this open-source model release:

  1. Accelerating the work of technical staff at national research labs or regulatory agencies by providing a domain-specific language model from which further use cases can be fine-tuned.
  2. Improving the performance of systems deployed in the Nuclear industry that currently utilize language models as feature extractors or model trunks in predictive AI systems.
  3. Accessibility for practitioners without hardware accelerators or cloud connectivity.
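Use case 2 treats the LM as a feature extractor: per-token hidden states from the model are pooled into a fixed-size vector that feeds a downstream predictive model. A minimal sketch of the pooling step, using plain Python lists as a stand-in for real hidden states (the `mean_pool` helper and the shapes are illustrative, not part of any Nuclearn API):

```python
def mean_pool(hidden_states, attention_mask):
    """Average per-token vectors, skipping padding positions.

    hidden_states: [seq_len][hidden_dim] list of per-token vectors
    attention_mask: [seq_len] list of 1 (real token) / 0 (padding)
    """
    dim = len(hidden_states[0])
    totals = [0.0] * dim
    count = 0
    for vec, keep in zip(hidden_states, attention_mask):
        if keep:
            count += 1
            for i, v in enumerate(vec):
                totals[i] += v
    return [t / count for t in totals]

# Two real tokens plus one padding token, hidden_dim = 2
states = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(states, mask))  # -> [2.0, 3.0]
```

With transformers, the per-token vectors would come from a forward pass with `output_hidden_states=True`, pooling the last layer in the same way.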

Direct Use

SPARK-mini-instruct is heavily adapted to the nuclear domain and will therefore respond with an induced domain bias.

License

License: CC-BY-NC, with the exceptions below for unrestricted use.

We further modify the CC-BY-NC license to permit free, open, and unrestricted use by a limited set of commercial entities, including:

  1. Operating nuclear utilities
  2. Regulatory Bodies (Commercial or Government)
  3. Research Labs and Research Focused groups (e.g. National Laboratories and Electric Power Specific Research Groups)

Please contact us for further information or a written release.

Bias, Risks, and Limitations

  • This model has been trained extensively on Nuclear Power related information, but, like every LM, it still makes factual and logical mistakes.
  • The model should not be used for production use cases without further training or applicable guardrails.
  • Intentional bias has been trained into the model for steerability.
  • SPARK-mini-instruct will emit incorrect responses and incorrect facts - as noted by the original Phi-3 developers, 3.8B parameters is not sufficient for complete fact memorization. Application guardrails should be provided by the end-use developer.
  • There have been numerous reports in the LM community of Phi-3 prompt templating being incorrect or difficult to use. Model generation may not stop as expected; it is recommended to add additional stopping criteria for the <|end|> and <|end_of_text|> special tokens.
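The stop-token logic behind that recommendation is simple: halt as soon as the newest token is any of the designated stop tokens. A minimal sketch of that check, with made-up token ids standing in for the real ones (`should_stop` and the ids are illustrative):

```python
def should_stop(generated_ids, stop_ids):
    """Return True once the most recently generated token is a stop token."""
    return bool(generated_ids) and generated_ids[-1] in stop_ids

# Hypothetical ids for <|end|> and <|end_of_text|>
stop_ids = {32007, 32000}

sequence = [15, 278, 9580]
print(should_stop(sequence, stop_ids))            # -> False
print(should_stop(sequence + [32007], stop_ids))  # -> True
```

With transformers, the equivalent is passing a list to `generate`, e.g. `model.generate(..., eos_token_id=[tokenizer.convert_tokens_to_ids("<|end|>"), tokenizer.eos_token_id])`.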

How to Get Started with the Model

Prompting should follow the approach outlined for the original Phi-3-mini-4k model, using the same chat template.

# Requires transformers 4.41 for Phi3 compatibility
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nuclearnai/SPARK-mini-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
).to("cuda")

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
)

# Generate using min_p sampling
prompt = [
    {
        "role": "user",
        "content": "What are the tech specs applicable to diesel generators?",
    }
]

input_ids = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,  # append the assistant turn header before generating
    return_tensors="pt",
).to("cuda")

output = model.generate(
    input_ids=input_ids,
    min_p=0.2,
    temperature=1.0,
    do_sample=True,
    max_new_tokens=100,
)

print(tokenizer.decode(output[0], skip_special_tokens=True))

Output

What are the tech specs applicable to diesel generators? The technical specifications applicable to diesel generators can vary depending on the specific reactor design and the licensing basis. However, some common technical specifications for diesel generators include: 1. Diesel generator operability requirements, such as minimum required fuel levels, starting air pressure, and surveillance testing intervals. 2. Limitations on the amount of time the diesel generators can be inoperable before the reactor must be shut down. 3.[continued...]

Training Details

Training Data

All training data for SPARK-mini-instruct was obtained from publicly available sources, but it is not being released.

Specific details on the training data, or requests to share it, will be handled on a case-by-case basis; contact Nuclearn at contact@nuclearn.ai

Training Procedure

The model was trained in bf16 using DeepSpeed ZeRO-3 on a multi-node, private A100 server cluster.
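For readers unfamiliar with that setup, a representative DeepSpeed configuration fragment enabling bf16 and ZeRO-3 looks like the following. The values are illustrative only; the actual training configuration has not been released.

```json
{
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```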

Evaluation

SPARK-mini-instruct was evaluated on a set of private benchmarks created specifically to test Nuclear Industry knowledge.

Completions (HellaSWAG for Nuclear)

  • Modeled after the HellaSWAG Benchmark
  • Various completions of complex Nuclear plant operational scenarios and fact passages.

Multiple Choice QA (MMLU for Nuclear)

  • Modeled after the MMLU benchmark
  • Multiple-choice question answering on Nuclear Plant Operations, Systems, Engineering, etc.
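MMLU-style benchmarks are typically scored by comparing the model's likelihood of each answer choice and selecting the highest. A sketch of a length-normalized version of that selection rule, with made-up per-token log-probabilities standing in for real model output (nothing here reflects Nuclearn's private benchmark harness):

```python
def pick_choice(choice_logprobs):
    """Pick the answer whose tokens have the highest average log-probability.

    choice_logprobs: dict mapping choice label -> list of per-token log-probs
    """
    def avg(lps):
        return sum(lps) / len(lps)
    return max(choice_logprobs, key=lambda label: avg(choice_logprobs[label]))

# Illustrative per-token log-probs for four answer options
scores = {
    "A": [-2.1, -0.4],
    "B": [-0.3, -0.2, -0.25],
    "C": [-1.5],
    "D": [-3.0, -2.2],
}
print(pick_choice(scores))  # -> B
```

Length normalization (averaging rather than summing) avoids penalizing answer options that tokenize into more pieces.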

Environmental Impact

  • Hardware Type: A100-80GB SXM4
  • Cloud Provider: Nuclearn Training Cluster

Model Architecture and Objective

SPARK-mini-instruct is based on the Phi3 architecture.

Compute Infrastructure

SPARK-mini-instruct was trained on the Nuclearn Training Cluster - an A100-80GB SXM server cluster with 800 Gb/s InfiniBand connectivity.

Model Card Authors

Bradley Fox, Nuclearn Inc
Jerrold Vincent, Nuclearn Inc
Nate Irby, Nuclearn Inc
