# yi6B_Vicuna
---
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
language:
  - zh
  - en
---

**TODO:** Upload pending; training is finished, still testing. **Update:** Running into an issue, still figuring things out.

A reproduction of Vicuna, but based on Yi-6B.
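Since this follows Vicuna's SFT recipe, prompts at inference time would presumably use the Vicuna-style conversation template. A minimal sketch of that template (the exact system prompt and separators are assumptions based on Vicuna's conventions, not confirmed by this card):

```python
# Assumed Vicuna-style system prompt; the actual string used in training
# is not stated in this model card.
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_vicuna_prompt(turns, system=SYSTEM):
    """Build a single prompt string from a list of (role, text) pairs,
    where role is "USER" or "ASSISTANT"."""
    parts = [system]
    for role, text in turns:
        parts.append(f"{role}: {text}")
    parts.append("ASSISTANT:")  # cue the model to generate the next reply
    return " ".join(parts)

prompt = build_vicuna_prompt([("USER", "What is the capital of France?")])
```

The resulting string can then be fed to the model's tokenizer for generation.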

Preliminary results suggest that the unfiltering is working!

Heads up: some examples are unsafe and inappropriate. This is entirely for the purpose of testing how unaligned SFT data affects an LLM's final output.
