---
library_name: transformers
license: apache-2.0
datasets:
- m-a-p/COIG-CQIA
language:
- zh
pipeline_tag: text-generation
inference: false
---

# Model Card for Qwen1.5-4B-Chat Fine-Tuned on COIG-CQIA/ruozhiba
This model is a QLoRA fine-tune of the Qwen1.5-4B-Chat base model on the ruozhiba subset of the COIG-CQIA dataset.
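A QLoRA setup like the one described above can be sketched as follows. This is illustrative only: the rank, alpha, and target-module values below are assumptions for demonstration, not the exact hyperparameters used in training (those are in the linked Colab notebook).

```python
# Illustrative QLoRA configuration sketch (NOT the exact training config).

def build_qlora_configs():
    """Return (quantization config, LoRA config) for QLoRA fine-tuning."""
    import torch
    from transformers import BitsAndBytesConfig
    from peft import LoraConfig

    # 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    # Small trainable LoRA adapters on the attention projections
    lora_config = LoraConfig(
        r=8,                 # assumed rank, for illustration only
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    return bnb_config, lora_config
```

With these two configs, the base model is loaded in 4-bit precision and only the small adapter matrices are trained, which is what makes fine-tuning a 4B-parameter model feasible on free Colab hardware.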
Note that this repository contains only the LoRA adapter weights, so you should merge the adapter with the base model before running inference. See the official PEFT guide on [merging adapters](https://huggingface.co/docs/peft/main/en/developer_guides/lora#merge-adapters).
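Merging the adapter into the base model can be done roughly as below. The `adapter_id` argument is a placeholder for this repository's id; replace it with the actual adapter repo or a local path.

```python
# Minimal sketch of merging the LoRA adapter into the base model for inference.
# "adapter_id" is a placeholder for this adapter repository's id.

def load_merged_model(adapter_id, base_id="Qwen/Qwen1.5-4B-Chat"):
    """Load the base model, apply the LoRA adapter, and fold it in."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    model = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    return model, tokenizer
```

After merging, the model behaves like a plain `transformers` causal LM, and `model.save_pretrained(...)` can persist the merged weights so the merge only has to be done once.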
The entire training run was performed on Google Colab using free compute resources; details are available in this [Colab notebook](https://colab.research.google.com/drive/1GiI8drsinxhFdprWbqlXtN0DqbHHs1fe?hl=en#scrollTo=5o3OgCMdRGgp).
This project is for demonstration purposes only, as part of the course DSAA5009 at HKUST (Guangzhou).