Fine-tuned flan-t5-base using scraped data as the input, ChatGPT responses as the output, and a limited set of instruction types. Training used the declare-lab repo: https://github.com/declare-lab/flan-alpaca

Training setup:
- Epochs: 1
- Max input tokens: 512
- Max output tokens: 512
- Hardware: free Google Colab T4
- Training time: ~40 min
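As a rough illustration (not the declare-lab training script itself), each training record pairs one of the fixed instruction types with a scraped passage as the source and the ChatGPT response as the target. The function name, field names, and prompt layout below are assumptions for the sketch:

```python
# Hypothetical sketch of assembling one seq2seq training record.
# The instruction list, prompt layout, and dict keys are assumptions,
# not taken from the declare-lab/flan-alpaca repo.

INSTRUCTION_TYPES = [
    "Summarize the following text.",      # example placeholder
    "Answer the question in the text.",   # example placeholder
]

def build_example(instruction: str, scraped_text: str, chatgpt_reply: str) -> dict:
    """Join instruction and scraped input into the source string;
    the ChatGPT reply becomes the target string."""
    source = f"{instruction}\n\n{scraped_text}"
    return {"source": source, "target": chatgpt_reply}

example = build_example(
    INSTRUCTION_TYPES[0],
    "Flan-T5 is an instruction-tuned variant of T5.",
    "Flan-T5 is T5 fine-tuned on instruction data.",
)
print(example["source"].startswith("Summarize"))  # True
```

At training time, both `source` and `target` would be tokenized with a 512-token limit on each side, matching the settings above.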