---
datasets:
  - danielpark/gorani-100k-llama2-13b-instruct
language:
  - en
library_name: >-
  transformers, peft, accelerate, bitsandbytes, datasets, deepspeed, trl
pipeline_tag: text-generation
---

GORANI 100k


This project is currently in progress. Please refrain from using the weights and datasets for now.

GORANI is a project built on llama2 that experiments with how to compose datasets for transferring or distilling knowledge from English sources. Its official name, Grid Of Ranvier Node In llama2 (GORANI), is based on the biological term node of Ranvier, and the project aims to find the optimal dataset composition for transferring knowledge to various languages and specific domains. Because of strict licensing issues with the English datasets involved, GORANI is intended primarily for research purposes. Building on GORANI's experimental results, we are therefore refining and training a commercially usable Korean dataset on top of llama2; this derived project is named KORANI (Korean GORANI).

  • We are currently experimenting with techniques such as extended max sequence length, RoPE scaling, attention sinks, and Flash Attention 2.
  • Please do not use the current model weights; they are not yet useful. The most restrictive non-commercial license (CC-BY-NC-4.0) among the licenses of the training datasets also applies to the model weights.
  • Once training is complete, we will publish information about the datasets used along with the official release.
  • GORANI is intended for research purposes only; KORANI, the Korean language model, will be usable under a commercial license.
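For reference, the experiment knobs listed above map roughly onto Hugging Face transformers loading options as sketched below. This is an illustrative sketch only: the sequence length and RoPE scaling factor are placeholder values, not the project's actual settings, and attention sinks are not a core transformers option (they come from a separate implementation).

```python
# Hypothetical sketch: how the techniques listed above are commonly expressed
# as transformers `from_pretrained` keyword arguments. All values here are
# illustrative, not the settings actually used in GORANI training.

load_kwargs = {
    # Longer context than the llama2 default of 4096 tokens (illustrative value).
    "max_position_embeddings": 8192,
    # Linear RoPE scaling stretches the positional embeddings (illustrative factor).
    "rope_scaling": {"type": "linear", "factor": 2.0},
    # Flash Attention 2 kernels, if the `flash-attn` package is installed.
    "attn_implementation": "flash_attention_2",
}

# Attention sinks are not part of core transformers; they would come from a
# separate library (e.g. a community `attention_sinks` package), so they are
# noted here only as a comment rather than a keyword argument.

for key, value in load_kwargs.items():
    print(f"{key}: {value}")
```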

Template

We use llama2-13b with LFM, without a default system message. If a dataset specifies a system message, we use that content instead.

```
### System:
{System}

### User:
{New_User_Input}

### Input:
{New_User_Input}

### Assistant:
{New_Assistant_Answer}
```
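As a sketch, the template above can be rendered with a small helper like the following. The function name and the choice to omit empty sections are our assumptions for illustration, not part of the released training code.

```python
def build_prompt(user_input: str, system: str = "", extra_input: str = "") -> str:
    """Render the GORANI chat template shown above.

    Sections with empty content (e.g. the default empty system message) are
    omitted, matching the note that no default system message is used. This
    helper is an illustrative sketch, not the project's actual training code.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    parts.append(f"### User:\n{user_input}")
    if extra_input:
        parts.append(f"### Input:\n{extra_input}")
    parts.append("### Assistant:\n")  # the model completes from here
    return "\n\n".join(parts)


print(build_prompt("Translate 'hello' into Korean."))
```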

Update

  • Since we cannot control resources, we will record the schedule retrospectively.
| Update Schedule | Task Description | Status |
|---|---|---|
| 23-10-05 | Completed training - 19.7k 13b weight (specific data) | Done |
| 23-10-06 | Submitted hf model weights (REV 01) | Done |
| 23-10-20 | Q.C | On Process |
| 23-10- | Completed training - 50k 13b weight | |
| 23-10- | Q.C | |
| 23-10- | Submitted hf model weights | |
| 23-10- | Completed training - 100k 13b weight | |
| 23-10- | Q.C | |
| 23-10- | Q.A | |
| 23-11- | Official weight release | |

Caution

The model weights and dataset have not yet been properly curated, and their use is strictly prohibited under any license. The developers assume no responsibility, implicit or explicit, in connection with their use.

Revisions

| Revision | Commit Hash | Updated | Train Process | Status |
|---|---|---|---|---|
| Revision 01 | 6d30494fa8da84128499d55075eef57094336d03 | 23.10.04 | 19,740/100,000 | On Training |