Update README.md
README.md CHANGED
@@ -51,28 +51,66 @@ To utilize the LlamaLens model for inference, follow these steps:

**Before:**

Use the transformers library to load the LlamaLens model and its tokenizer:

```python
from transformers import pipeline

model_name = "QCRI/LlamaLens"
pipe = pipeline("text-generation", model=model_name)
```

3. **Prepare the Input:**
Tokenize your input text:

```python
messages = [
    {"role": "system", "content": ...},
    {"role": "user", "content": input_text},
]
```

4. **Generate the Output:**
Generate a response using the model:

```python
...
```

**After:**

Use the transformers library to load the LlamaLens model and its tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Define model path
MODEL_PATH = "QCRI/LlamaLens"

# Load model and tokenizer
device_map = "auto"
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map=device_map)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
```
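
If GPU memory is tight, the same call can also load the weights in half precision; this is an optional variant, not something the README specifies, and it assumes a PyTorch backend:

```python
import torch
from transformers import AutoModelForCausalLM

# Optional variant (an assumption, not from the README): torch_dtype is a
# standard from_pretrained argument; loading in bfloat16 roughly halves
# GPU memory use.
model = AutoModelForCausalLM.from_pretrained(
    "QCRI/LlamaLens",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
```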

3. **Prepare the Input:**
Tokenize your input text:

```python
# Define task and input text
task = "classification"  # Change to "summarization" for summarization tasks
instruction = (
    "Analyze the text and indicate if it shows an emotion, then label it as joy, love, fear,"
    " anger, sadness, or surprise. Return only the label without any explanation, justification, or additional text."
)
input_text = "I am not creating anything I feel satisfied with."
output_prefix = "Summary: " if task == "summarization" else "Label: "

# Define messages for chat-based prompt format
messages = [
    {"role": "system", "content": "You are a social media expert providing accurate analysis and insights."},
    {"role": "user", "content": f"{instruction}\nInput: {input_text}"},
    {"role": "assistant", "content": output_prefix}
]

# Tokenize input
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=False,
    continue_final_message=True,
    tokenize=True,
    padding=True,
    return_tensors="pt"
).to(model.device)
```
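
Because `continue_final_message=True` makes generation continue from the assistant's `output_prefix` rather than starting a fresh turn, it can be worth checking the exact prompt string the template produces; a small sketch reusing the same `messages`, assuming a transformers version recent enough to support `continue_final_message`:

```python
# Render the chat template as a plain string instead of token IDs, so the
# final prompt (ending in the "Label: " prefix) can be inspected.
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    continue_final_message=True,
)
print(prompt_text)
```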

4. **Generate the Output:**
Generate a response using the model:

```python
# Generate response
outputs = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.001
)

# Decode and print response
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
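
For repeated runs it may help to fold the prompt-building, tokenization, and generation steps into a single helper; the wrapper below is a sketch of our own, not part of the README, and only reuses the calls shown above:

```python
def run_llamalens(instruction: str, input_text: str, output_prefix: str = "Label: ") -> str:
    """Hypothetical convenience wrapper around the steps above."""
    messages = [
        {"role": "system", "content": "You are a social media expert providing accurate analysis and insights."},
        {"role": "user", "content": f"{instruction}\nInput: {input_text}"},
        {"role": "assistant", "content": output_prefix},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=False,
        continue_final_message=True,
        tokenize=True,
        padding=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=False,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Example: classify a new post with the same emotion instruction.
print(run_llamalens(instruction, "I finally finished my first marathon!"))
```
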
## Results