Lorastral-7B-2024-02-exp (Experimental)
Lorastral-7B-2024-02-exp is an experimental child-friendly AI model designed to provide bias-reduced, age-appropriate language for educational and storytelling applications. Fine-tuned on a carefully curated dataset with input from educators, LORA aims to make STEM learning engaging and inclusive for young learners.
Features
- Bias-Reduced: Designed to minimize gender and cultural stereotypes.
- Optimized for Kids: Uses child-appropriate language and educational content.
- Interactive Storytelling: Supports engaging, personalized narratives for learning.
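A minimal usage sketch with the Hugging Face `transformers` library is shown below. The repository id and the prompt are placeholders, since this experimental card does not yet document an official loading recipe or prompt format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id -- substitute the actual Hugging Face path for this model.
model_id = "Lorastral-7B-2024-02-exp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask for a child-friendly explanation of a STEM concept
# ("Explain to an eight-year-old child what photosynthesis is.").
prompt = "Erkläre einem achtjährigen Kind, was Photosynthese ist."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```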
Benchmark Performance
In readability benchmarks for explaining German terms, LORA already outperforms leading models, producing content that is more accessible to young learners:
| Model | Flesch Reading Ease ↑ | Wiener Sachtextformel ↓ | Avg Sentence Length ↓ | Avg Word Length ↓ |
|---|---|---|---|---|
| Lorastral-8B (LORA) | 80.24 | 2.70 | 9.06 | 1.39 |
| Mistral-8B | 71.70 | 4.22 | 14.92 | 1.42 |
| GPT-4o | 77.17 | 3.09 | 13.89 | 1.37 |
| Gemini 1.5 Pro | 80.36 | 2.73 | 12.94 | 1.34 |
| Claude 3.5 Sonnet | 44.34 | 8.83 | 42.29 | 1.41 |
Higher Flesch Reading Ease and lower Wiener Sachtextformel scores indicate LORA's strong readability for children.
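For context, both metrics can be approximated from simple text statistics. The sketch below uses the Amstad adaptation of the Flesch Reading Ease for German and the first Wiener Sachtextformel; the regex-based syllable counter is a rough heuristic and is not the exact pipeline used to produce the numbers above.

```python
import re

def count_syllables(word: str) -> int:
    """Rough German syllable count: one syllable per vowel group (heuristic)."""
    return max(1, len(re.findall(r"[aeiouäöüy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[\wäöüÄÖÜß]+", text)
    syllables = [count_syllables(w) for w in words]

    asl = len(words) / len(sentences)                          # avg sentence length (words)
    asw = sum(syllables) / len(words)                          # avg syllables per word
    ms = 100 * sum(s >= 3 for s in syllables) / len(words)     # % words with 3+ syllables
    iw = 100 * sum(len(w) > 6 for w in words) / len(words)     # % words longer than 6 letters
    es = 100 * sum(s == 1 for s in syllables) / len(words)     # % one-syllable words

    # Amstad's German Flesch Reading Ease (higher = easier to read).
    fre = 180 - asl - 58.5 * asw
    # First Wiener Sachtextformel (lower = easier; roughly maps to a school grade).
    wstf = 0.1935 * ms + 0.1672 * asl + 0.1297 * iw - 0.0327 * es - 0.875
    return {"flesch_reading_ease_de": fre, "wiener_sachtextformel_1": wstf,
            "avg_sentence_length": asl, "avg_word_syllables": asw}

print(readability("Die Sonne scheint. Pflanzen machen daraus Zucker und Sauerstoff."))
```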
⚠️ Experimental Status
This is an early version that may still exhibit biases, limitations, or incoherent output. We welcome feedback from educators, parents, and researchers to improve future iterations.
Get Involved
We encourage community contributions! If you have insights, feedback, or dataset recommendations, please reach out.