Model Card: Custom LLM with High-Rank Adapter

Model Overview

This model is a custom-trained language model based on the Meta-Llama-3.1-8B architecture. Unlike most instruction-tuned models, it was trained directly from the base model on a mixture of high-quality datasets covering general text completion, code completion, and instruction-following tasks. A high-rank adapter (rank 128) is used to increase learning capacity while mitigating catastrophic forgetting, which distinguishes this model from common low-rank fine-tuning approaches.

  • Developer: Eric Florenzano
  • Model Type: Large Language Model (LLM)
  • Language(s): English, with a focus on Python for code-related tasks
  • License: Apache-2.0
  • Base Model: meta-llama/Meta-Llama-3.1-8B

Model Sources

  • Repository: ericflo/Llama-3.1-8B-ContinuedTraining on Hugging Face

Model Training and Approach

Unique Training Approach

Instead of fine-tuning an instruction-tuned model, the base Meta-Llama-3.1-8B model was trained with a diverse set of high-quality pretraining and instruction datasets. The training focused on both text completion/prediction and instruction-following tasks.

Key features of the training process:

  • Training Data: A blend of high-quality data sources, each serving different purposes:
    • FineTome-100k: High-quality instruction-tuned data for general language understanding and task completion.
    • dclm-baseline-1.0-parquet: A pretraining corpus from Apple, used for standard text completion/prediction tasks.
    • English Wikipedia: Used for text completion tasks with a focus on broad language understanding.
    • Starcoder: High-quality Python-focused code dataset used for code completion tasks.
  • Instruction-Tuning: The model alternates randomly between ChatML and the Llama Chat template during training to learn a general-purpose instruction-following format that is not restricted to one specific style.
  • Strata Information: Training data is prefixed with contextual information (e.g., URLs for Wikipedia articles) to address data imbalance, allowing the model to weigh different data sources appropriately. However, this prefixing is only used during training, with inference relying on the model's learned representations.
  • High-Rank Adapter: The model uses a high-rank adapter (rank 128) to learn more complex representations and reduce the risk of catastrophic forgetting, as opposed to the commonly used low-rank adaptation approach (LoRA).
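As a rough illustration of the template alternation and strata prefixing described above (the exact template strings and the prefix format are assumptions based on the standard ChatML and Llama 3 chat formats, not the author's actual training code):

```python
import random

def format_chatml(messages):
    # ChatML style: <|im_start|>role\ncontent<|im_end|>
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

def format_llama3(messages):
    # Llama 3 chat style: <|start_header_id|>role<|end_header_id|>\n\ncontent<|eot_id|>
    out = "<|begin_of_text|>"
    for m in messages:
        out += f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
    return out

def render_example(messages, rng=random):
    # Alternate randomly between the two templates so the model learns a
    # general instruction-following format rather than one fixed style.
    template = rng.choice([format_chatml, format_llama3])
    return template(messages)

def with_strata_prefix(text, source_url):
    # Training-only contextual prefix (e.g. a Wikipedia URL) identifying the
    # data source; the prefix format here is hypothetical. Dropped at inference.
    return f"<source:{source_url}>\n{text}"
```

At inference time no prefix is supplied, so the model falls back on the source-conditioned representations it learned during training.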

Training Procedure

The model was trained for 650 steps on the datasets described above, with attention to balancing the different task types (text completion, code completion, and instruction following) during sampling. The high-rank adapter provides the capacity needed for these varied tasks while keeping training cost well below that of full fine-tuning.
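One simple way to balance learning across sources is weighted sampling per training example. This sketch is illustrative only; the weights below are assumptions, not figures from the actual training run:

```python
import random

# Hypothetical per-source sampling weights (must sum to 1.0).
SOURCE_WEIGHTS = {
    "finetome-100k": 0.25,   # instruction-following
    "dclm-baseline": 0.35,   # general text completion
    "wikipedia": 0.20,       # text completion, broad coverage
    "starcoder": 0.20,       # Python code completion
}

def sample_source(rng=random):
    """Pick the data source for the next training example, proportional to weight."""
    names = list(SOURCE_WEIGHTS)
    weights = [SOURCE_WEIGHTS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Sampling per example (rather than concatenating whole datasets) keeps a minority source like instruction data from being drowned out by the much larger pretraining corpora.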

Training Hyperparameters

  • Adapter Rank: 128
  • Training Steps: 650
  • Base Model: meta-llama/Meta-Llama-3.1-8B
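A PEFT adapter configuration matching the rank above might look as follows. Only `r=128` comes from this card; `lora_alpha`, `target_modules`, and `lora_dropout` are assumptions (a common convention is alpha = 2 × r and targeting the attention projections):

```python
from peft import LoraConfig

# Hypothetical configuration sketch; only r=128 is stated in this card.
adapter_config = LoraConfig(
    r=128,                                                    # high-rank adapter
    lora_alpha=256,                                           # assumption: 2 * r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.05,                                        # assumption
    task_type="CAUSAL_LM",
)
```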

Intended Uses

This model is designed for a variety of natural language processing tasks, including:

  • Text Completion and Generation: Generating and predicting text based on provided input.
  • Code Completion: Assisting with Python code generation and completion tasks.
  • Instruction Following: Capable of following complex instructions across multiple domains.
  • General Language Understanding: Leveraging its diverse training data for broad language comprehension tasks.

Out-of-Scope Use

  • Real-time Knowledge: The model does not have access to real-time data or events beyond its training period.
  • Harmful or Biased Content: Should not be used to generate harmful, biased, or misleading information.
  • Critical Decision-Making: Should not be relied upon for critical tasks that require human oversight and judgment.

Bias, Risks, and Limitations

While this model was trained on a mix of high-quality datasets, it may still exhibit biases present in the training data, especially in domains with limited or skewed representation. Users should:

  • Be aware of potential biases, particularly in sensitive domains.
  • Review model outputs for accuracy, especially for code generation and decision-making tasks.
  • Use the model as a tool to assist in human decision-making, not as a replacement.
  • Understand that performance may vary across different domains and task types.

Evaluation

Tasks             Version  Filter            n-shot  Metric         Value    Stderr
tinyBenchmarks    N/A
- tinyArc         0        none              25      acc_norm ↑     0.6056   ± N/A
- tinyGSM8k       0        flexible-extract  5       exact_match ↑  0.4793   ± N/A
                           strict-match      5       exact_match ↑  0.4793   ± N/A
- tinyHellaswag   0        none              10      acc_norm ↑     0.8261   ± N/A
- tinyMMLU        0        none              0       acc_norm ↑     0.6358   ± N/A
- tinyTruthfulQA  0        none              0       acc ↑          0.5098   ± N/A
- tinyWinogrande  0        none              5       acc_norm ↑     0.7447   ± N/A
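Results in this shape are typically produced with EleutherAI's lm-evaluation-harness. A plausible invocation is sketched below; the exact `--model_args` used for this run are an assumption:

```shell
# Hypothetical command; assumes the adapter can be loaded via PEFT on top of the base model.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Meta-Llama-3.1-8B,peft=ericflo/Llama-3.1-8B-ContinuedTraining \
    --tasks tinyBenchmarks \
    --batch_size auto
```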

Technical Specifications

Model Architecture

  • Base Model: meta-llama/Meta-Llama-3.1-8B
  • Parameters: 8.03B (F32 tensors)
  • High-Rank Adapter: A rank-128 adapter used to learn more complex patterns while reducing catastrophic forgetting.
  • Objective: Multi-task learning across text completion, code completion, and instruction following.

Compute Infrastructure

Software

  • Library: PEFT 0.12.0 for efficient parameter fine-tuning.
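A minimal usage sketch with PEFT and Transformers (assumes access to the gated base weights and the adapter repository; not verified against this exact checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the rank-128 adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, "ericflo/Llama-3.1-8B-ContinuedTraining")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```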

Model Card Contact

For inquiries about this model, please contact Eric Florenzano through the model repository.
