---
library_name: transformers
tags: []
---
# Model Card for ChunB1/Phi-3-interact
<!-- Provide a quick summary of what the model is/does. -->
A finetuned Phi-3-Small-8K-Instruct model trained on character-interaction data from Zooniverse.
It achieves 0.735 accuracy on the character_interaction dataset (test split), surpassing gpt-4o-2024-05-13's 0.699 accuracy on the same split.
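For reference, accuracy here means exact-match between predicted and gold labels. A minimal sketch of that computation (the `accuracy` helper and sample labels below are illustrative, not the authors' evaluation script):

```python
# Illustrative sketch: exact-match accuracy over per-example predictions.
def accuracy(predictions, labels):
    """Fraction of predictions that exactly match the gold labels."""
    assert len(predictions) == len(labels), "prediction/label length mismatch"
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Toy example with the interaction label set used by this model.
preds = ["Communicating", "No", "Touching", "Observing"]
gold  = ["Communicating", "No", "Thinking", "Observing"]
print(accuracy(preds, gold))  # 0.75
```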
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Michael Xu
- **Funded by:** Andrew Piper
- **Model type:** Autoregressive language model
- **Language(s) (NLP):** English
- **Finetuned from model:** microsoft/Phi-3-small-8k-instruct
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Note: attn_implementation="flash_attention_2" requires the flash-attn package.
model = AutoModelForCausalLM.from_pretrained(
    "ChunB1/Phi-3-interact",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
model.to("cuda")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-small-8k-instruct")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
# Classify the interaction type (No, Associating, Thinking, Touching, Observing,
# Communicating) for one datapoint, given a passage (two sentences), char1, and char2.
example = {
    'book_name': '2013_O_Connell,Carol_ItHappensintheDark_MY',
    'sentence_ID': 371,
    'passage': 'Her smile was just a flash, a taste of things to come. He shot her a glance to beg, Play nice.',
    'char1_COREF': 95,
    'char2_COREF': 448,
    'char1': 'He',
    'char2': 'her',
}
prompt_five_class_explained = """Communicating: char1 and char2 are engaged in some form of communication, such as speaking, writing, or signaling.
Associating: char1 and char2 are linked by a social or relational context, such as friendship, teamwork, or other associative bonds.
Observing: at least one character is observing or watching another one, without direct interaction.
Thinking: at least one character is thinking about or recalling memories of another one, without direct interaction.
Touching: char1 and char2 are engaged in physical touch or contact."""
prompt_base = "what kind of interaction between char1 and char2? Choose one of six options: No, Associating, Thinking, Touching, Observing, Communicating."
prompt = """Task Description: Classify the type of interaction between char1 and char2 in a given passage. There are six categories of interaction:
No interaction: Direct or indirect interaction does not occur between char1 and char2. Any imagination or assumption of interaction also counts as No.
""" + prompt_five_class_explained + prompt_base
prompt_suffix = "Only return the option and don't provide any extra information."
prompt_full = f"passage: {example['passage']}, char1: {example['char1']}, char2: {example['char2']}, " + prompt + prompt_suffix
messages = [{"role": "user", "content": prompt_full}]
generation_args = {
    "max_new_tokens": 15,
    "return_full_text": False,
    "do_sample": False,
}
# Label will be "Communicating"
print(pipe(messages, **generation_args)[0]["generated_text"].strip())
```
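Since a generative model can occasionally emit extra tokens around the label, it may help to normalize the raw generation before scoring. A minimal sketch (the `normalize_label` helper below is an assumption for illustration, not part of the released code):

```python
# Illustrative sketch: map a raw generation to one of the six expected labels.
LABELS = {"No", "Associating", "Thinking", "Touching", "Observing", "Communicating"}

def normalize_label(text):
    """Return the leading known label in the model output, else None."""
    stripped = text.strip()
    token = stripped.split()[0].rstrip(".,") if stripped else ""
    return token if token in LABELS else None

print(normalize_label("Communicating."))  # Communicating
print(normalize_label("unexpected output"))  # None
```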