Trained for 2 epochs on NilanE/ParallelFiction-Ja_En-100k using QLoRA. A CPO tune is in progress.
Inputs should be 500-1000 tokens long. If using HF transformers for inference, set `do_sample=False` (or otherwise set temperature to 0) for deterministic outputs.
Prompt format:
```
Translate this from Japanese to English:
### JAPANESE:
{source_text}
### ENGLISH:
```
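A minimal inference sketch with HF transformers, wrapping the prompt format above and using greedy decoding as described. The `max_new_tokens` value is an illustrative assumption, not a documented setting.

```python
# Inference sketch for NilanE/tinyllama-en_ja-translation-v3.
from transformers import AutoModelForCausalLM, AutoTokenizer

# The model's expected prompt format, reproduced from the card above.
PROMPT_TEMPLATE = (
    "Translate this from Japanese to English:\n"
    "### JAPANESE:\n"
    "{source_text}\n"
    "### ENGLISH:\n"
)

def build_prompt(source_text: str) -> str:
    """Wrap Japanese source text in the model's expected prompt format."""
    return PROMPT_TEMPLATE.format(source_text=source_text)

def translate(source_text: str,
              model_name: str = "NilanE/tinyllama-en_ja-translation-v3") -> str:
    """Translate a 500-1000 token Japanese chunk to English."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(build_prompt(source_text), return_tensors="pt")
    # do_sample=False gives deterministic (greedy) decoding, as the card requires.
    outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
    # Drop the prompt tokens; return only the generated English translation.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```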
Footnote:
This is an independently developed project. If anyone is interested in sponsoring further research, please contact nilandekanayake@gmail.com. Questions about model usage can be asked in the discussion tab.
Base model: NilanE/tinyllama-relora-merge