
experience-model-v1

This model is intended to detect whether a sentence describes a present-moment experience that a human or animal is having.

Usage

Given a sentence, the model outputs a logit indicating whether that sentence contains a present-moment experience. Higher values correspond to a higher likelihood that it does; applying a sigmoid converts the logit to a probability, as shown below.

import torch
import transformers

model = transformers.AutoModelForSequenceClassification.from_pretrained('edmundmills/experience-model-v1')
tokenizer = transformers.AutoTokenizer.from_pretrained('edmundmills/experience-model-v1', use_fast=False)

sentence = "I am eating food."
tokenized = tokenizer([sentence], return_tensors='pt', return_attention_mask=True)
input_ids, masks = tokenized['input_ids'], tokenized['attention_mask']
with torch.inference_mode():
    out = model(input_ids, attention_mask=masks)
prob = out.logits.sigmoid().squeeze().item()  # convert the single logit to a probability
print(prob)  # 0.92
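
To score several sentences at once, a minimal sketch is shown below; the batching, padding, and 0.5 decision threshold are assumptions for illustration and are not specified by the card.

sentences = ["I am eating food.", "Paris is the capital of France."]  # placeholder examples
batch = tokenizer(sentences, return_tensors='pt', padding=True, return_attention_mask=True)
with torch.inference_mode():
    out = model(batch['input_ids'], attention_mask=batch['attention_mask'])
probs = out.logits.sigmoid().squeeze(-1)      # one probability per sentence
labels = (probs > 0.5).tolist()               # assumed threshold, not part of the card
print(list(zip(sentences, probs.tolist(), labels)))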

Model description

This model was fine-tuned from 'microsoft/deberta-v3-large'.

Intended uses & limitations

More information needed

Training and evaluation data

This model was trained on 745 training samples, of which roughly 10% contain present-moment experiences.

Training procedure

The model was fine-tuned using the code at https://github.com/AlignmentResearch/experience-model, with binary cross-entropy (BCE) loss on the single-logit output.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • batch_size: 16
  • epochs: 200
  • weight_decay: 0.01
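
As an illustration only, a minimal sketch of this style of fine-tuning is given below; the placeholder data, the plain PyTorch loop, and the optimizer choice are assumptions, and the authoritative training code is in the repository linked above. Note that the single-logit head is trained with an explicit BCEWithLogitsLoss, since transformers does not select BCE automatically when num_labels=1.

import torch
import transformers

# Assumed setup: single-logit classification head on the base model, BCE loss on float labels.
tokenizer = transformers.AutoTokenizer.from_pretrained('microsoft/deberta-v3-large', use_fast=False)
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    'microsoft/deberta-v3-large', num_labels=1)

sentences = ["I am eating food.", "Paris is the capital of France."]  # placeholder examples
labels = torch.tensor([[1.0], [0.0]])  # 1 = contains a present-moment experience

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
loss_fn = torch.nn.BCEWithLogitsLoss()

model.train()
for epoch in range(200):  # epochs from the hyperparameters above
    batch = tokenizer(sentences, return_tensors='pt', padding=True)  # toy batch, not batch_size=16
    logits = model(batch['input_ids'], attention_mask=batch['attention_mask']).logits
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()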

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1
  • Datasets 2.9.0
  • Tokenizers 0.13.2