metadata
title: Fine-tuned Model Card
authors:
  - Your Name
date: August 2023
tags:
  - Code Generation
  - Text2Text Generation
  - Python
  - Vulnerability Rule

Model Overview

  • Library: PEFT
  • Language: English (en)
  • Pipeline Tag: Text2Text Generation
  • Tags: Code Generation

Model Details

This model was fine-tuned from Llama-2 on a dataset of Python code vulnerability rules.

Training Procedure

The model was trained with a bitsandbytes quantization configuration. Key settings include:

  • Quantization Method: bitsandbytes
  • Load in 8-bit: False
  • Load in 4-bit: True
  • LLM Int8 Threshold: 6.0
  • LLM Int8 Skip Modules: None
  • LLM Int8 Enable FP32 CPU Offload: False
  • LLM Int8 Has FP16 Weight: False
  • BNB 4-bit Quant Type: nf4
  • BNB 4-bit Use Double Quant: False
  • BNB 4-bit Compute Dtype: float16
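The settings above map directly onto a `transformers` `BitsAndBytesConfig`. A minimal sketch reproducing them (the config object itself; the surrounding model-loading code is not specified by this card):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization matching the settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```

Such a config would typically be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` when loading the base model.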

Framework Versions

  • PEFT: 0.6.0.dev0

Model Description

This model card describes a model fine-tuned with the PEFT library. The model is designed for text-to-text generation tasks, particularly code generation and vulnerability rule detection.

Intended Use

The model is intended for generating text outputs based on text inputs. It has been fine-tuned specifically for code generation tasks and vulnerability rule detection. Users can input text descriptions, code snippets, or other relevant information to generate corresponding code outputs.
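As an illustration only, since this card does not document a prompt template, a hypothetical helper for combining a task description and an optional code snippet into a single text input might look like:

```python
def build_prompt(description, code_snippet=None):
    """Assemble a text-to-text prompt from a task description and optional code.

    Hypothetical format -- the actual prompt template used during
    fine-tuning is not documented in this model card.
    """
    parts = ["### Task:\n" + description.strip()]
    if code_snippet:
        parts.append("### Code:\n" + code_snippet.strip())
    parts.append("### Response:")
    return "\n\n".join(parts)
```

The resulting string would then be tokenized and passed to the model's generation method.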

Limitations and Considerations

While the model has been fine-tuned for code generation, its outputs may still require human review and validation, and it may not cover all code variations or edge cases. Users should thoroughly review generated code before deployment.

Training Data

The model was trained on a dataset of Python code vulnerability rules. The dataset includes examples of code patterns that could potentially indicate vulnerabilities or security risks.
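For illustration, a vulnerability rule in such a dataset might pair a code pattern with a finding. The rule below (flagging `eval` calls) is a hypothetical example, not taken from the actual training data:

```python
import re

# Hypothetical rule: flag calls to eval(), which can execute arbitrary code
RULE = {
    "id": "PY-EVAL-001",
    "pattern": re.compile(r"\beval\s*\("),
    "message": "Use of eval() on untrusted input can lead to code execution.",
}

def check(source, rule=RULE):
    """Return the rule's message if its pattern matches the source, else None."""
    if rule["pattern"].search(source):
        return rule["message"]
    return None
```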

Training used the PEFT library with the bitsandbytes quantization settings listed above, and ran for multiple epochs to optimize performance on code generation tasks.
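The card does not list the PEFT adapter hyperparameters. A typical LoRA configuration for a Llama-2 fine-tune might look like the following sketch; every value here is an assumption, not taken from this card:

```python
from peft import LoraConfig

# Hypothetical LoRA settings -- the actual adapter hyperparameters are not
# documented in this model card.
lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common choice for Llama-style models
    task_type="CAUSAL_LM",
)
```

A config like this would be applied to the quantized base model via `get_peft_model(model, lora_config)` before training.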

Model Evaluation

The model's performance has not been explicitly evaluated in this model card. Users are encouraged to evaluate the model's generated outputs for their specific use case and domain.
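One simple way to spot-check outputs, sketched below with a stubbed generation function (`generate` is any placeholder callable, not a real API of this model), is exact-match accuracy against reference outputs:

```python
def exact_match_accuracy(examples, generate):
    """Fraction of examples whose generated output exactly matches the reference.

    `examples` is a list of (prompt, reference) pairs; `generate` is any
    callable mapping a prompt string to an output string.
    """
    if not examples:
        return 0.0
    hits = sum(
        1 for prompt, ref in examples if generate(prompt).strip() == ref.strip()
    )
    return hits / len(examples)
```

Exact match is a coarse metric for code; users may prefer execution-based or static-analysis checks for their domain.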
