---
tags:
- generated_from_trainer
datasets: cmotions/NL_restaurant_reviews
metrics:
- accuracy
- recall
- precision
- f1
widget:
- text: Wat een geweldige ervaring. Wij gebruikte de lunch bij de Librije. 10 gangen met in overleg hierbij gekozen wijnen. Alles klopt. De aandacht, de timing, prachtige gerechtjes. En wat een smaaksensaties! Bediening met humor. Altijd daar wanneer je ze nodig hebt, maar nooit overdreven aanwezig.
  example_title: Michelin restaurant
- text: Mooie locatie, aardige medewerkers. Maaltijdsalade helaas teleurstellend, zeer kleine portie voor 13,80. Jammer.
  example_title: Mooie locatie, matig eten
model-index:
- name: NL_BERT_michelin_finetuned
  results: []
---

# NL_BERT_michelin_finetuned

This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on a [Dutch restaurant reviews dataset](https://huggingface.co/datasets/cmotions/NL_restaurant_reviews). Provide Dutch review text to the inference API on the right and receive a score indicating whether the reviewed restaurant is eligible for a Michelin star ;) A programmatic usage sketch is included at the end of this card.

It achieves the following results on the evaluation set:
- Loss: 0.0637
- Accuracy: 0.9836
- Recall: 0.5486
- Precision: 0.7914
- F1: 0.6480
- Mse: 0.0164

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Recall | Precision | F1     | Mse    |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.1043        | 1.0   | 3647  | 0.0961          | 0.9792   | 0.3566 | 0.7606    | 0.4856 | 0.0208 |
| 0.0799        | 2.0   | 7294  | 0.0797          | 0.9803   | 0.4364 | 0.7415    | 0.5495 | 0.0197 |
| 0.0589        | 3.0   | 10941 | 0.0637          | 0.9836   | 0.5486 | 0.7914    | 0.6480 | 0.0164 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
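
## How to use

The card does not include a usage snippet, so the following is a minimal sketch of how a Dutch review could be scored with the `transformers` text-classification pipeline. The repository id `cmotions/NL_BERT_michelin_finetuned` is assumed from the model name above, and the label names in the output depend on the model's config, which is not shown here.

```python
from transformers import pipeline

# Assumed repository id; adjust to wherever this model is actually hosted.
model_id = "cmotions/NL_BERT_michelin_finetuned"

# Text-classification pipeline around the fine-tuned Dutch BERT model.
classifier = pipeline("text-classification", model=model_id, tokenizer=model_id)

review = (
    "Wat een geweldige ervaring. 10 gangen met in overleg hierbij gekozen wijnen. "
    "Alles klopt: de aandacht, de timing, prachtige gerechtjes."
)

# Returns a label plus confidence score, e.g. [{'label': '...', 'score': 0.97}];
# the label scheme comes from the model config and is not documented in this card.
print(classifier(review))
```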
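
## Reproducing the training setup

The hyperparameters listed under *Training hyperparameters* map onto `TrainingArguments` roughly as shown below. This is a sketch, not the actual training script: the use of the 🤗 `Trainer` and the output directory name are assumptions.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments;
# the original training script is not part of this card.
training_args = TrainingArguments(
    output_dir="NL_BERT_michelin_finetuned",  # assumed output location
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```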