This RoBERTa-based model ("MindMiner") classifies the degree of mind perception in English-language text into two classes:

  • high mind perception πŸ‘©
  • low mind perception πŸ€–

The model was fine-tuned on 997 manually annotated open-ended survey responses. The hold-out accuracy is 75.5% (vs. a balanced 50% random-chance baseline).
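Below is a minimal usage sketch, assuming the model is hosted on the Hugging Face Hub as `j-hartmann/MindMiner-Binary` and exposes the standard text-classification interface; the exact label strings it returns may differ from the emoji shorthand above.

```python
# Minimal sketch: load MindMiner and score texts for mind perception.
# Assumes the repo ID "j-hartmann/MindMiner-Binary" and the standard
# Hugging Face text-classification pipeline; label names are model-specific.
from transformers import pipeline

classifier = pipeline("text-classification", model="j-hartmann/MindMiner-Binary")

texts = [
    "My smart speaker really understands what I want.",   # likely high mind perception
    "It is just a machine executing commands.",           # likely low mind perception
]

for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']} ({result['score']:.3f}): {text}")
```

Each prediction returns a predicted label together with a confidence score; passing a list of texts, as above, lets the pipeline batch the inputs rather than classifying responses one at a time.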

Hartmann, J., Bergner, A., & Hildebrand, C. (2023). MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer-Smart Object Relationships. Journal of Consumer Psychology, Forthcoming.
