---
license: mit
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
## finetune_DSA

finetune_DSA is a fine-tuned model that solves data structures and algorithms (DSA) questions in Python. It was created by fine-tuning Llama 3.1 8B Instruct on the iamtarun/python_code_instructions_18k_alpaca dataset (~18k rows in Alpaca instruction format).
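
Below is a minimal inference sketch using the Transformers library. The repository id `<your-username>/finetune_DSA` is a placeholder rather than the actual path, and the example assumes the model keeps the Llama 3.1 Instruct chat template inherited from the base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual repository path.
model_id = "<your-username>/finetune_DSA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Ask a DSA question using the chat format of the Llama 3.1 Instruct base model.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a singly linked list."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens (the model's answer).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```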