# Model Card for Model ID

## Model Details

### Model Description
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Andrew Chahnwoo Park
- **Model type:** LLaMA
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
### Model Sources

- **Repository:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **GitHub:** TinyLlama
## Training Details

### Training Data

Databricks instruction-tuning dataset (databricks-dolly-15k), of which 5% was utilized.
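As a rough sketch, such a 5% subset can be obtained with a `datasets` split slice; the exact dataset id is an assumption, inferred from the category names listed under Preprocessing.

```python
from datasets import load_dataset

# First 5% of the Dolly instruction-tuning data (dataset id assumed).
dataset = load_dataset("databricks/databricks-dolly-15k", split="train[:5%]")
```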
### Training Procedure

- Tokenize and label the data
- Load the base LLM
- Apply Quantized Low-Rank Adaptation (QLoRA) to the modules `["q_proj", "k_proj", "v_proj", "o_proj"]`
- Perform training with the Hugging Face `Trainer` (a sketch of the full procedure follows this list)
  - Uses `DataCollatorForSeq2Seq`
  - Note that this data collator was chosen over `DataCollatorForLanguageModeling` because the latter overwrites any pre-defined `"labels"`
  - That overwriting is done by the `tf_mask_tokens` and `torch_mask_tokens` functions of `DataCollatorForLanguageModeling`
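The following is a minimal sketch of the steps above, not the author's exact script: the hyperparameters (LoRA rank, alpha, batch size, output path) are illustrative, and `tokenized_dataset` is assumed to be prepared as described under Preprocessing.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model quantized to 4-bit (the "Q" in QLoRA).
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the four attention projections listed above.
model = get_peft_model(
    model,
    LoraConfig(
        r=8,  # illustrative rank, not the author's setting
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    ),
)

# DataCollatorForSeq2Seq pads the pre-computed "labels" alongside
# "input_ids" rather than regenerating them, so the -100 response
# masking described below survives collation.
collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=-100)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qlora-out", per_device_train_batch_size=4, fp16=True),
    train_dataset=tokenized_dataset,  # assumed: tokenized and labelled as described below
    data_collator=collator,
)
trainer.train()
```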
#### Preprocessing

A different instruction prompt template was utilized for each category in the dataset:
**open_qa**

```
### Instruction:
Answer the question below. Be as specific and concise as possible.
### Question:
{instruction}
### Response:
{response}
```

**general_qa**

```
### Instruction:
Answer the question below to the best of your knowledge.
### Question:
{instruction}
### Response:
{response}
```

**classification**

```
### Instruction:
You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.
### Question:
{instruction}
### Response:
{response}
```

**closed_qa**

```
### Instruction:
You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
```

**brainstorming**

```
### Instruction:
You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.
### Question:
{instruction}
### Response:
{response}
```

**information_extraction**

```
### Instruction:
You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
```

**summarization**

```
### Instruction:
You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
```

**creative_writing**

```
### Instruction:
You will be given a prompt that you are to write about. Be creative.
### Prompt:
{instruction}
### Response:
{response}
```
#### Labelled Data Format

```python
{
    'input_ids': List[int],
    'attention_mask': List[int],
    'labels': List[int]
}
```

Labels were created by masking everything except the response tokens with -100, the index that the cross-entropy loss ignores.
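A hypothetical helper illustrating this labelling scheme; the function name, signature, and `max_length` are assumptions, not the author's exact code.

```python
def tokenize_and_label(prompt: str, response: str, tokenizer, max_length: int = 1024) -> dict:
    # Tokenize the templated prompt (everything up to "### Response:")
    # and the response separately, so we know where supervision starts.
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response + tokenizer.eos_token, add_special_tokens=False)["input_ids"]

    input_ids = (prompt_ids + response_ids)[:max_length]
    # -100 marks positions the loss ignores: every prompt token is masked,
    # so only response tokens contribute to the training loss.
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_length]

    return {
        "input_ids": input_ids,
        "attention_mask": [1] * len(input_ids),
        "labels": labels,
    }
```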
### Hardware

Fine-tuning was performed in a single Google Colab session on one NVIDIA T4 GPU. The dataset was not fully utilized due to the limits of a free-tier session.