Gemma-2 2B Verbalized Rebus Solver - GGUF Q8_0 🇮🇹
This model is a parameter-efficient fine-tuned version of Gemma-2 2B trained for verbalized rebus solving in Italian, as part of the release for our paper Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses. The task of verbalized rebus solving consists of converting an encrypted sequence of letters and crossword definitions into a solution phrase matching the word lengths specified in the solution key. An example is provided below.
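To make the solution-key constraint concrete, here is a minimal Python sketch (our illustration, not part of the released codebase) that checks whether a candidate phrase matches a key, treating punctuation tokens such as the apostrophe as verbatim symbols:

import re

def matches_key(solution: str, key: str) -> bool:
    """Check that a solution phrase matches a key such as "1 ' 5 6 5 3 3 1 14".

    Each key token is either a punctuation mark (copied verbatim) or the
    length of the corresponding solution word.
    """
    # Split "L'avaro lesina ..." into ["L", "'", "avaro", "lesina", ...]
    tokens = re.findall(r"[^\s']+|'", solution)
    expected = key.split()
    if len(tokens) != len(expected):
        return False
    return all(
        tok == exp if exp == "'" else len(tok) == int(exp)
        for tok, exp in zip(tokens, expected)
    )

print(matches_key("L'avaro lesina anche ciò che è indispensabile",
                  "1 ' 5 6 5 3 3 1 14"))  # True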
The model was trained in 4-bit precision for 5070 steps on the verbalized subset of the EurekaRebus dataset using QLoRA via Unsloth and TRL. This repository contains the GGUF-exported checkpoint of the model in Q8_0 format, together with the Modelfile for usage with Ollama (see below).
We also provide adapter checkpoints collected throughout training, as well as FP16 merged versions of this model, for analysis and local execution (a scripted loading sketch follows below).
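For scripted local execution outside of Ollama, the Q8_0 file can also be loaded with the llama-cpp-python package. The snippet below is a minimal sketch: the filename glob is an assumption to be verified against the repository's file listing, and the raw prompt mirrors the Ollama example in the next section (the Modelfile encodes the exact prompt template).

from llama_cpp import Llama

# Downloads the GGUF file from the Hub and loads it for local inference.
# The filename pattern is an assumption; check it against the repo files.
llm = Llama.from_pretrained(
    repo_id="gsarti/gemma-2-2b-rebus-solver-Q8_0-GGUF",
    filename="*Q8_0.gguf",
    n_ctx=2048,
)
prompt = (
    "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio]"
    " [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N"
    " [Contiene scorte di cibi] B [Isola in francese]\n"
    "Chiave risolutiva: 1 ' 5 6 5 3 3 1 14"
)
out = llm(prompt, max_tokens=512, temperature=0.0)
print(out["choices"][0]["text"])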
Local usage with Ollama
A ready-to-use local version of this model is hosted on the Ollama Hub and can be used as follows:
ollama run gsarti/gemma2-2b-rebus-solver "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]\nChiave risolutiva: 1 ' 5 6 5 3 3 1 14"
# EXAMPLE GENERATION:
#
# Procediamo alla risoluzione del rebus passo per passo:
# - [Materiale espulso dai vulcani] = lava
# - R O = R O
# - [Strumento del calzolaio] = lesina
# - [Si trovano ai lati del bacino] = anche
# - C I = C I
# - [Si ingrassano con la polenta] = oche
# - E I N = E I N
# - [Contiene scorte di cibi] = dispensa
# - B = B
# - [Isola in francese] = ile
#
# Prima lettura: lava R O lesina anche C I oche E I N dispensa B ile
#
# Ora componiamo la soluzione seguendo la chiave risolutiva:
# 1 = L
# ' = '
# 5 = avaro
# 6 = lesina
# 5 = anche
# 3 = ciò
# 3 = che
# 1 = è
# 14 = indispensabile
#
# Soluzione: L'avaro lesina anche ciò che è indispensabile
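The final step above can be reproduced mechanically: concatenating the first reading and re-segmenting it according to the key yields the solution. The following is a minimal sketch (our illustration); in the real task, accents and capitalisation are restored in the final answer.

first_reading = "lava R O lesina anche C I oche E I N dispensa B ile"
key = "1 ' 5 6 5 3 3 1 14"

letters = first_reading.replace(" ", "").lower()  # "lavarolesinaanche..."
segments, pos = [], 0
for token in key.split():
    if token == "'":
        segments.append("'")  # punctuation consumes no letters
    else:
        n = int(token)
        segments.append(letters[pos:pos + n])
        pos += n

# Join with spaces, attaching the apostrophe to its neighbours
solution = ""
for seg in segments:
    if not solution or seg == "'" or solution.endswith("'"):
        solution += seg
    else:
        solution += " " + seg
print(solution)  # l'avaro lesina anche cio che e indispensabile
# Diacritics ("ciò", "è") and capitalisation are restored in the final answer.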
Limitations
Lexical overfitting: As noted in the accompanying paper, the model overfits the set of definitions and answers seen for first-pass words during training. As a result, words that were explicitly withheld from the training set cause significant performance degradation when used as solutions to the definitions of verbalized rebuses. You can compare model performance on in-domain and out-of-domain test examples to verify this limitation (see the sketch below).
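A simple exact-match metric suffices for such a comparison. The sketch below is our illustration with hypothetical toy values; the actual in-domain/out-of-domain split names should be checked against the EurekaRebus release.

def solution_exact_match(predictions, references):
    """Fraction of predicted solutions that exactly match the gold phrase."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Toy illustration with hypothetical outputs; in practice, run the model on
# the in-domain and out-of-domain test splits and score each separately.
preds = ["L'avaro lesina anche ciò che è indispensabile", "Una soluzione errata"]
golds = ["L'avaro lesina anche ciò che è indispensabile", "La soluzione corretta"]
print(solution_exact_match(preds, golds))  # 0.5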
Model curators
For problems or updates on this model, please contact gabriele.sarti996@gmail.com.
Citation Information
If you use this model in your work, please cite our paper as follows:
@article{sarti-etal-2024-rebus,
    title = "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses",
    author = "Sarti, Gabriele and Caselli, Tommaso and Nissim, Malvina and Bisazza, Arianna",
    journal = "ArXiv",
    month = jul,
    year = "2024",
    volume = {abs/2408.00584},
    url = {https://arxiv.org/abs/2408.00584}
}
Acknowledgements
We are grateful to the Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini" for making its rebus collection freely accessible on the Eureka5 platform.
Evaluation results
- First Pass Exact Match on EurekaRebus test set (self-reported): 0.430
- Solution Exact Match on EurekaRebus test set (self-reported): 0.360