"Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"

— Leroy Dyer (1972-Present)

“Epochs are the key to effective training, rather than merely mass dumping examples—unless those examples are interconnected within a single or multiple conversations that teach through dialogue.”

Xtra Alignment: all models in the series will be merged into a single model, but only after they have been benchmarked.

Please be patient, as I am waiting to be able to post this series on the leaderboard. I think I have cracked the missing elements in the scoring:

These models are merge-aligned with the past best models, which may also mean that I cannot surpass my own personal best...

The models have been trained extensively on many facets, and I do not like to merge them. However, with the leaderboard model comparer I can see where the model's mistakes or weak areas are and refocus on those tasks; the tasks only need to be generalized and aligned after merging, to keep the model focused.
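The merge step itself is not spelled out in this card. As a rough sketch only, assuming two checkpoints that share the same Mistral architecture and tokenizer, and with a hypothetical second model ID and an illustrative 50/50 ratio (not the actual recipe used here), a linear weight average looks like this:

```python
# Minimal sketch of a linear (weight-averaged) merge of two checkpoints that
# share the same architecture and tokenizer. The second model ID and the
# 50/50 ratio are illustrative placeholders, not the recipe used for this series.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "LeroyDyer/SpydazWeb_AI_HumanAI_012_INSTRUCT_XA"
other_id = "your-org/previous-best-model"  # hypothetical past-best checkpoint

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
other = AutoModelForCausalLM.from_pretrained(other_id, torch_dtype=torch.bfloat16)

alpha = 0.5  # blend ratio: 1.0 keeps only the base weights, 0.0 keeps only the other model
other_state = other.state_dict()
merged_state = {
    name: alpha * param + (1.0 - alpha) * other_state[name]
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)           # reuse the base architecture
base.save_pretrained("merged-model")         # write the merged weights
AutoTokenizer.from_pretrained(base_id).save_pretrained("merged-model")
```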

After this benchmarking process I will return and re-insert the main prompt for the model, as it gives the model more detailed responses and the flexibility to generate personality, which may not be prudent for testing; hence the prompt removal.

This is the latest model in the series.

With these extra alignments, the model will find its feet again after the merges and plateau. This model will be tested and benchmarked on the leaderboard soon.
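For a local check ahead of the leaderboard run, something like EleutherAI's lm-evaluation-harness (cited below) can be used. The sketch below is not the leaderboard's exact configuration: the task names follow the Open LLM Leaderboard v2 groups and may differ across harness versions, so treat them as placeholders.

```python
# Minimal local-evaluation sketch with lm-evaluation-harness.
# Task names are assumptions based on the Open LLM Leaderboard v2 groups.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=LeroyDyer/SpydazWeb_AI_HumanAI_012_INSTRUCT_XA,dtype=bfloat16",
    tasks=["leaderboard_ifeval", "leaderboard_bbh", "leaderboard_musr"],
    batch_size="auto",
)

# Per-task scores highlight the weak areas to refocus on before merging.
for task, metrics in results["results"].items():
    print(task, metrics)
```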

  • Developed by: LeroyDyer
  • License: apache-2.0
  • Finetuned from model: SpydazWeb_AI_HumanAI_012_INSTRUCT_XA

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

[Unsloth](https://github.com/unslothai/unsloth)
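For reference, a typical Unsloth + TRL supervised fine-tuning run looks roughly like the sketch below. This is not the exact script used for this model: the local train.jsonl dataset, LoRA settings, sequence length, and epoch count are placeholders, and the SFTTrainer arguments assume an older TRL API (newer releases move dataset_text_field and max_seq_length into SFTConfig).

```python
# Rough sketch of an Unsloth + TRL SFT run; all hyperparameters, the local
# train.jsonl file, and the LoRA settings are illustrative placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/SpydazWeb_AI_HumanAI_012_INSTRUCT_XA",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumes a local JSONL file whose records expose a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=3,  # multiple epochs rather than a single mass dump of examples
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
)
trainer.train()
```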

@misc{open-llm-leaderboard-v2, author = {Clémentine Fourrier and Nathan Habib and Alina Lozovskaya and Konrad Szafer and Thomas Wolf}, title = {Open LLM Leaderboard v2}, year = {2024}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard}", }

@software{eval-harness, author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, title = {A framework for few-shot language model evaluation}, month = sep, year = 2021, publisher = {Zenodo}, version = {v0.0.1}, doi = {10.5281/zenodo.5371628}, url = {https://doi.org/10.5281/zenodo.5371628}, }

@misc{zhou2023instructionfollowingevaluationlargelanguage, title={Instruction-Following Evaluation for Large Language Models}, author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou}, year={2023}, eprint={2311.07911}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2311.07911}, }

@misc{suzgun2022challengingbigbenchtaskschainofthought, title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them}, author={Mirac Suzgun and Nathan Scales and Nathanael Schärli and Sebastian Gehrmann and Yi Tay and Hyung Won Chung and Aakanksha Chowdhery and Quoc V. Le and Ed H. Chi and Denny Zhou and Jason Wei}, year={2022}, eprint={2210.09261}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2210.09261}, }

@misc{hendrycks2021measuringmathematicalproblemsolving, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, year={2021}, eprint={2103.03874}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2103.03874}, }

@misc{rein2023gpqagraduatelevelgoogleproofqa, title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark}, author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman}, year={2023}, eprint={2311.12022}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2311.12022}, }

@misc{sprague2024musrtestinglimitschainofthought, title={MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning}, author={Zayne Sprague and Xi Ye and Kaj Bostrom and Swarat Chaudhuri and Greg Durrett}, year={2024}, eprint={2310.16049}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2310.16049}, }

@misc{wang2024mmluprorobustchallengingmultitask, title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark}, author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen}, year={2024}, eprint={2406.01574}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2406.01574}, }

@misc{open-llm-leaderboard-v1, author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf}, title = {Open LLM Leaderboard (2023-2024)}, year = {2023}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard}" }

  • Model size: 7.24B parameters (Safetensors)
  • Tensor type: BF16

Model tree for LeroyDyer/SpydazWeb_AI_HumanAI_012_INSTRUCT_XA

  • Finetunes: 1 model
  • Merges: 2 models
  • Quantizations: 2 models