
Model Card for neovalle/H4rmoniousBreeze


Model Details

Model Description

This model is a version of HuggingFaceH4/zephyr-7b-beta fine-tuned via an AutoTrain reward model, using the H4rmony dataset, which aims to better align the model with ecological values through the use of ecolinguistics principles.

  • Developed by: Jorge Vallego
  • Funded by: Neovalle Ltd.
  • Shared by: airesearch@neovalle.co.uk
  • Model type: mistral
  • Language(s) (NLP): Primarily English
  • License: MIT
  • Finetuned from model: HuggingFaceH4/zephyr-7b-beta

Uses

Intended as a proof of concept (PoC) to show the effects of the H4rmony dataset.

Direct Use

For testing purposes, to gain insights that help with the continuous improvement of the H4rmony dataset.

Downstream Use

Use in downstream applications is not recommended, as this model is under testing for a specific task only (ecological alignment).

Out-of-Scope Use

Not meant to be used for anything other than testing and evaluation of the H4rmony dataset and ecological alignment.

Bias, Risks, and Limitations

This model might reproduce biases already present in the base model, as well as others unintentionally introduced during fine-tuning.

How to Get Started with the Model

The model can be loaded and run in a Colab instance with a high-RAM runtime. The notebook below loads the base and fine-tuned models so their outputs can be compared:

https://github.com/Neovalle/H4rmony/blob/main/H4rmoniousBreeze.ipynb
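If you just want to query the fine-tuned model directly, a minimal sketch along these lines should work (it uses the standard transformers pipeline API; the example prompt and generation parameters are illustrative assumptions, not settings taken from the notebook above):

```python
# Minimal sketch: load the fine-tuned model and generate a completion.
# The model ID is from this card; generation settings are illustrative assumptions.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="neovalle/H4rmoniousBreeze",
    torch_dtype=torch.float16,   # FP16 weights, per the model files
    device_map="auto",           # requires the accelerate package
)

# Zephyr-derived models expect a chat template, so the prompt is built
# with apply_chat_template rather than passed as a raw string.
messages = [
    {"role": "user", "content": "How should we value a forest beyond its timber?"},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

output = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```

Running the same prompt through the base model (HuggingFaceH4/zephyr-7b-beta) with identical settings is a simple way to compare completions, as the linked notebook does.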

Training Details

Fine-tuned using AutoTrain's reward model training.

Training Data

H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony
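To inspect the training data, a minimal sketch using the datasets library (the "train" split name is an assumption about the repository's layout):

```python
# Minimal sketch: load the H4rmony dataset from the Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset("neovalle/H4rmony", split="train")  # split name assumed
print(dataset)     # column names and number of rows
print(dataset[0])  # first example
```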

Model size: 7.24B parameters
Tensor type: FP16 (Safetensors)