This model is a fine-tuned version of bert-base-german-cased for predicting the usefulness of a review, trained on the Bestande dataset (see jorgeortizv/Bestande). The purpose is to predict whether a review of a university course found online would be considered useful by other users, similar to what you can find on Stack Overflow, but without the need for human annotators to evaluate reviews.

Details on training, as well as a detailed explanation of the project, can be found at: https://github.com/liamti5/UZH-Essentials-in-Text-and-Speech-Processing

Interpreting results:

  • Label 0 : neutral review
  • Label 1 : slightly useful review
  • Label 2 : useful review
  • Label 3 : extremely useful review
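As a minimal sketch of how the four class labels above might be mapped to their descriptions after inference (the helper name `interpret` and the example logits are illustrative, not part of the released model):

```python
# Map the classifier's output indices to the usefulness labels listed above.
LABELS = {
    0: "neutral review",
    1: "slightly useful review",
    2: "useful review",
    3: "extremely useful review",
}

def interpret(logits):
    """Return the human-readable label for a sequence of 4 class logits."""
    pred = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[pred]

# Hypothetical logits from a forward pass; class 2 has the highest score.
print(interpret([0.1, 0.3, 2.5, 0.2]))  # useful review
```

In practice the logits would come from running the fine-tuned model (e.g. via the Transformers `text-classification` pipeline) on a German-language review; the snippet only shows the label mapping step.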