---
license: mit
language:
- en
metrics:
- bertscore
base_model:
- LLM-PBE/Llama3.1-8b-instruct-LLMPC-Blue-Team
---

# Model Card: LLM-PBE-FineTuned-FakeData

## Model Details

- Model Name: LLM-PBE-FineTuned-FakeData
- Creator: SanjanaCodes
- Language: English

## Description

This model is a fine-tuned LLM trained on synthetic (fake) data for research purposes. It is designed to help researchers study model behavior and the impact of fine-tuning on controlled, artificial datasets. It should not be used in real-world applications, since its training data gives it limited real-world relevance.
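
For quick experimentation, the model can be loaded through the standard Transformers API. The sketch below makes one assumption: the Hub repository id `SanjanaCodes/LLM-PBE-FineTuned-FakeData`, inferred from the creator and model name above; substitute the actual path if it differs.

```python
# Minimal loading sketch using Hugging Face Transformers.
# The repo id is assumed from this card's metadata, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "SanjanaCodes/LLM-PBE-FineTuned-FakeData"  # assumed Hub path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # needs `accelerate`

prompt = "Explain why synthetic training data is useful for research."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```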

## Intended Use

- Research: Fine-tuning experiments and synthetic-data evaluation (see the evaluation sketch after this list).
- Educational: Suitable for controlled testing and benchmarking.
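
Since the metadata above lists BERTScore as the reported metric, one way to run such an evaluation is with the Hugging Face `evaluate` library, sketched below. The predictions and references here are illustrative placeholders, not data from this model's actual evaluation.

```python
# Sketch: compare model outputs against references with BERTScore,
# the metric listed in this card's metadata. Strings are placeholders.
import evaluate

bertscore = evaluate.load("bertscore")
predictions = ["The model was fine-tuned on synthetic data."]  # placeholder output
references = ["This model was trained on artificial data."]    # placeholder reference

results = bertscore.compute(predictions=predictions, references=references, lang="en")
print(results["f1"])  # per-example F1 scores
```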

## Limitations

- Performance: May lack contextual accuracy and depth outside synthetic-data contexts.
- Generalization: Best suited for synthetic-data scenarios rather than practical applications.

## Acknowledgments

Trained at the NYU Tandon DICE Lab under Professor Chinmay Hegde and Niv Cohen.