
This is an experimental LLaMA 2 7B LoRA trained on the VNTL-v2.5-1k dataset.

This is an update of version 0.3:

  • adamw_bnb_8bit -> adamw_8bit (the default in Unsloth)
  • 2 epochs -> 1 epoch (2 epochs seemed to increase eval loss)
  • Added an EOS token after each translation pair.

Eval Loss: 0.72

This is a prompt example:

<<START>>
Name: Uryuu Shingo (η“œη”Ÿ 新吾) | Gender: Male | Aliases: Onii-chan (γŠε…„γ‘γ‚ƒγ‚“)
Name: Uryuu Sakuno (η“œη”Ÿ ζ‘œδΉƒ) | Gender: Female
<<JAPANESE>>
[ζ‘œδΉƒ]: γ€Žβ€¦β€¦γ”γ‚γ‚“γ€
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: γ€Ž... Sorry.』</s>
<<JAPANESE>>
[新吾]: γ€Œγ†γ†γ‚“γ€γ“γ†θ¨€γ£γ‘γ‚ƒγͺγ‚“γ γ‘γ©γ€θΏ·ε­γ§γ‚ˆγ‹γ£γŸγ‚ˆγ€‚ζ‘œδΉƒγ―ε―ζ„›γ„γ‹γ‚‰γ€γ„γ‚γ„γ‚εΏƒι…γ—γ‘γ‚ƒγ£γ¦γŸγ‚“γ γžδΏΊγ€
<<ENGLISH>> (fidelity = high)

The generated translation for that prompt, with temperature 0, is:

[Shingo]: γ€ŒNo, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
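A prompt in this format can be assembled programmatically. The sketch below is only an illustration of the layout shown above; the helper name and signature are my own assumptions, not part of the released model or dataset code.

```python
# Illustrative helper (not part of the VNTL release) that assembles a
# prompt in the format shown above: character metadata, then alternating
# <<JAPANESE>>/<<ENGLISH>> blocks, ending with an open English block for
# the model to complete.

def build_vntl_prompt(metadata, pairs, next_japanese, fidelity="high"):
    lines = ["<<START>>"]
    lines.extend(metadata)  # e.g. "Name: ... | Gender: ..." lines
    for japanese, english in pairs:
        lines.append("<<JAPANESE>>")
        lines.append(japanese)
        lines.append("<<ENGLISH>> (fidelity = absolute)")
        # v0.3.1 appends an EOS token after each completed translation pair
        lines.append(english + "</s>")
    lines.append("<<JAPANESE>>")
    lines.append(next_japanese)
    lines.append(f"<<ENGLISH>> (fidelity = {fidelity})")
    return "\n".join(lines)
```

The returned string ends with the open `<<ENGLISH>> (fidelity = ...)` line, so generation with this prompt continues directly into the translation.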

Dataset used to train lmg-anon/vntl-7b-v0.3.1-lora: VNTL-v2.5-1k