
GBERT fine-tuned to distinguish on-topic from off-topic sentences in the GLoHBCD dataset (https://github.com/SelinaMeyer/GLoHBCD). The dataset leverages Motivational Interviewing client behaviour codes to evaluate user utterances across different dimensions and to gauge users' stance and thoughts about behaviour change in the context of weight loss.

This model classifies German text around behaviour change as either "Follow/Neutral" (not related to behaviour change, label 0) or "Change Related" (related to behaviour change, label 1). It can be used as a pre-filter to spot sentences from which a user's thoughts about behaviour change can be inferred (see selmey/behaviour-change-valence-german, selmey/behaviour-change-labels-german, and selmey/behaviour-change-sublabels-german). On the GLoHBCD test set, it reaches a macro F1 score of 72.67%.

The model should be used in combination with the "deepset/gbert-base" tokenizer.
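The pairing described above can be sketched as follows. This is a minimal, hedged example: the helper names and the `model_id` placeholder are assumptions for illustration (this card does not state the model's repo id); only the "deepset/gbert-base" tokenizer pairing and the label meanings come from the card itself.

```python
# Usage sketch for this classifier. Assumptions: `model_id` is a hypothetical
# placeholder for this model's Hugging Face repo id; the helper names below are
# illustrative, not part of any published API.

LABELS = {0: "Follow/Neutral", 1: "Change Related"}

def label_from_logits(scores):
    """Map the two-class output scores to the card's label names via argmax."""
    scores = list(scores)
    return LABELS[scores.index(max(scores))]

def load(model_id):
    # Heavy imports kept local so the label helper stays dependency-free.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    # The card specifies using the model with the "deepset/gbert-base" tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-base")
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    return tokenizer, model

def classify(text, tokenizer, model):
    import torch
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    return label_from_logits(logits.tolist())

# Example (requires network access to download the weights):
# tokenizer, model = load(model_id)  # model_id: hypothetical placeholder
# classify("Ich möchte wirklich abnehmen.", tokenizer, model)
```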

When using the model, please cite:

@InProceedings{meyer-elsweiler:2022:LREC,
  author    = {Meyer, Selina and Elsweiler, David},
  title     = {GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums},
  booktitle = {Proceedings of the Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {2226--2235},
  url       = {https://aclanthology.org/2022.lrec-1.239}
}