---
datasets:
- danielpark/gorani-100k-llama2-13b-instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
I am looking for a company that can provide multi-A100 GPU resources (a minimum of four). If your company can offer resources accessible via SSH tunneling or the cloud, please contact me at parkmiwnoo1991@gmail.com. Your company will have access to records of my dataset and training code. Thank you.

The project is currently in progress. Please refrain from using the weights and datasets.

Status: 19.7k-checkpoint weights released; waiting for results on the LLM leaderboard.
| Update Schedule | Task Description | Status |
|---|---|---|
| 23-10-05 | Completed training - 19.7k 13b weight | Done |
| 23-10-06 | Submitted hf model weights (REV 01) | Done |
| 23-10-20 | Q.C | In Progress |
| 23-10-13 | Completed training - 50k 13b weight | |
| 23-10-14 | Submitted hf model weights | |
| 23-10-18 | Completed training - 100k 13b weight | |
| 23-10-20 | Q.A | |
| 23-10-21 | Official weight release | |
## GORANI 100k

- Model: danielpark/gorani-100k-llama2-13b-instruct
- Dataset: danielpark/gorani-100k
## Template

I use llama2-13b with LFM, but without a default system message. If a dataset specifies a system message, I use that content instead.
```
### System:
{System}

### User:
{New_User_Input}

### Input:
{New_User_Input}

### Response:
{New_Assistant_Answer}
```
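As a minimal sketch, the template above can be assembled like this. `build_prompt` is a hypothetical helper for illustration, not part of the released training code; it simply follows the section order shown in this card and omits the `### System:` block when no system message is given, matching the note above.

```python
def build_prompt(user_input: str, system: str = "") -> str:
    """Assemble a GORANI-style prompt (hypothetical helper, not official code).

    The section order follows the template in this card; the system block
    is skipped when empty, since the model was trained without a default
    system message.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    parts.append(f"### User:\n{user_input}")
    # Leave the response section open for the model to complete.
    parts.append("### Response:\n")
    return "\n\n".join(parts)


prompt = build_prompt("Summarize the GORANI project in one sentence.")
```

The resulting string can then be passed to any `text-generation` pipeline as the input prompt.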
## Caution

The model weights and dataset have not yet been properly curated, and their use is strictly prohibited under any license. The developers assume no responsibility, implicit or explicit, in relation to this.
## Updates

| Revision | Commit Hash | Updated | Train Process | Status |
|---|---|---|---|---|
| Revision 01 | 6d30494fa8da84128499d55075eef57094336d03 | 23.10.04 | 19,740/100,000 | In Training |