Evaluation Results (Pass@1)

  • HumanEval: 30.49%
  • MBPP: 34.00%
  • MultiPL-E (C++): 23.60%
  • MultiPL-E (JS): 18.63%
  • Average: 26.68%
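Pass@1 is the probability that a single generated sample passes the benchmark's unit tests, and the reported average is the unweighted mean of the four benchmark scores. As a sketch of how these numbers fit together (the exact evaluation harness used by the leaderboard is not specified here), the standard unbiased pass@k estimator and the averaging step look like this:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples generated per task,
    c of them correct, evaluated at budget k."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k == 1 this reduces to the fraction of correct samples:
print(pass_at_k(10, 3, 1))  # 0.3

# The reported average is the plain mean of the four benchmark scores:
scores = [30.49, 34.00, 23.60, 18.63]
print(round(sum(scores) / len(scores), 2))  # 26.68
```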

Model Details

This PEFT adapter was trained using Flower, a friendly federated AI framework.

The adapter and benchmark results have been submitted to the FlowerTune LLM Code Leaderboard.

Please check the following GitHub project for details on how to reproduce the training and evaluation steps:

FlowerTune-LLM-Labs

Model: ethicalabs/Flwr-SmolLM2-1.7B-Instruct-Coding-PEFT