---
library_name: adapter-transformers
base_model: openchat/openchat_3.5
license: mit
datasets:
- declare-lab/MELD
metrics:
- f1
tags:
- MELD
- Trigger
- 7B
- LoRA
- llama2
language:
- en
pipeline_tag: text-classification
---
|
|
|
# Model Card for the openchat_3.5 EFR Trigger LoRA Adapter
|
|
|
The model identifies the trigger utterance(s) for the emotion flip of the last utterance in multi-party conversations.
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
The model presented here is tailored for the EDiReF shared task at SemEval 2024, specifically addressing Emotion Flip Reasoning (EFR) in English multi-party conversations. |
|
|
|
The model utilizes the strengths of large language models (LLMs) pre-trained on extensive textual data, enabling it to capture complex linguistic patterns and relationships. To enhance its performance on the EFR task, the model has been fine-tuned using Quantized Low-Rank Adaptation (QLoRA) on the MELD-based EFR dataset together with strategic prompt engineering. This involves crafting input prompts that guide the model in identifying the trigger utterances responsible for emotion flips in multi-party conversations.
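
The snippet below is a minimal, illustrative sketch of such a QLoRA setup using the Hugging Face `transformers` and `peft` libraries. The hyperparameter values and target modules are assumptions for illustration and may differ from the configuration actually used to train this adapter.

```python
# Illustrative QLoRA setup (assumed hyperparameters, not the authors' exact config).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization: the "Quantized" part of QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_3.5")
base = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat_3.5",
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapter layers: the "LoRA" part. Values below are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```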
|
|
|
In summary, this model excels at pinpointing trigger utterances for emotion flips in English dialogues, showcasing the effectiveness of combining the openchat_3.5 LLM, QLoRA fine-tuning, and strategic prompt engineering.
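
Continuing from the setup sketch above, the following is a hypothetical example of how a prompt could be constructed and run for trigger identification. The dialogue, prompt wording, and output handling are assumptions for illustration, not the authors' exact template.

```python
# Hypothetical inference example (reuses `model` and `tokenizer` from the sketch above).
dialogue = [
    ("Chandler", "joy",     "I got the job!"),
    ("Monica",   "joy",     "That's amazing, congratulations!"),
    ("Chandler", "sadness", "They just called to say the offer fell through."),
]

# Build a prompt asking which utterance(s) triggered the emotion flip in the last turn.
lines = [f"{i + 1}. {spk} ({emo}): {utt}" for i, (spk, emo, utt) in enumerate(dialogue)]
prompt = (
    "Given the conversation below, identify the utterance(s) that trigger the "
    "emotion flip in the last utterance.\n\n"
    + "\n".join(lines)
    + "\n\nTrigger utterance number(s):"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```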
|
|
|
|
|
- **Developed by:** Hasan et al.
|
- **Model type:** LoRA Adapter for openchat_3.5 (Text classification) |
|
- **Language(s) (NLP):** English |
|
- **License:** MIT |
|
|
|
### Model Sources |
|
|
|
|
|
|
- **Repository:** [Multi-Party-DialoZ](https://github.com/Zuhashaik/Multi-Party-DialoZ) |
|
- **Paper:** Coming soon